id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
5,455,816 | https://en.wikipedia.org/wiki/Common%20Power%20Format | The Si2 Common Power Format, or CPF is a file format for specifying power-saving techniques early in the design process. In the design of integrated circuits, saving power is a primary goal, and designers are forced to use sophisticated techniques such as clock gating, multi-voltage logic, and turning off the power entirely to inactive blocks. These techniques require a consistent implementation in the design steps of logic design, implementation, and verification. For example, if multiple different power supplies are used, then logic synthesis must insert level shifters, place and route must deal with them correctly, and other tools such as static timing analysis and formal verification must understand these components. As power became an increasingly pressing concern, each tool independently added the features needed. Although this made it possible to build low power flows, it was difficult and error prone since the same information needed to be specified several times, in several formats, to many different tools. CPF was created as a common format that many tools can use to specify power-specific data, so that power intent only need be entered once and can be used consistently by all tools. The aim of CPF is to support an automated, power-aware design infrastructure.
Associated with CPF is the Power Forward Initiative (PFI), a group of companies that collaborate to drive low-power design methodology and have contributed to the development of the CPF v1.0 specification. PFI membership spans EDA, IP, library, foundry fabs, ASIC, IDM, and equipment companies. In March 2007, CPF v1.0 was contributed to the Silicon Integration Initiative (Si2) where it was ratified by Si2’s Low Power Coalition (LPC) as a Si2 standard. The LPC controls the ongoing evolution of the CPF v1.0 standard.
Constructs expressing power domains and their power supplies:
Logical design: hierarchical modules can be specified as belonging to specific power supply domains
Physical design: explicit power/ground nets and connectivity can be specified per cell or block.
Analysis: different timing library data for cases where the same cell is used in different power domains
Power control logic
Specification of level shifter logic - special cells needed when signals traverse between blocks of different supply voltage.
Specification of isolation logic - what special logic is needed for signals that traverse between blocks that can be powered up and down independently.
Specification of state-retention logic - when blocks are switched off entirely, how is the state retained?
Specification of switch logic and control signals - how are blocks switched on and off?
Definition and verification of power modes (standby, sleep, etc.)
Mode definitions
Mode transition expressions
History and controversy
Cadence Design Systems designed the early versions of CPF, then contributed it to Si2. This was followed shortly by an alternative effort, the Unified Power Format or UPF, proposed as an IEEE standard as opposed to an Si2 standard. UPF has been driven mainly by Synopsys, Mentor Graphics and Magma. The technical differences between the two formats are relatively minor, but the political considerations are harder to overcome. Not surprisingly, the Cadence Low-Power Solution supported Si2’s CPF very early on, as well as UPF as it emerged; whereas the Synopsys and Mentor Graphics offerings support UPF. Magma supports both CPF and UPF.
An attempt at convergence is taking place in the Low Power Coalition at Si2.
References
External links
Download CPF specification.
Computer file formats
Power standards | Common Power Format | [
"Engineering"
] | 705 | [
"Electrical engineering",
"Power standards"
] |
5,457,285 | https://en.wikipedia.org/wiki/DSSAM%20Model | The DSSAM Model (Dynamic Stream Simulation and Assessment Model) is a computer simulation developed for the Truckee River to analyze water quality impacts from land use and wastewater management decisions in the Truckee River Basin. This area includes the cities of Reno and Sparks, Nevada as well as the Lake Tahoe Basin. The model is historically and alternatively called the Earth Metrics Truckee River Model. Since original development in 1984-1986 under contract to the U.S. Environmental Protection Agency (EPA), the model has been refined and successive versions have been dubbed DSSAM II and DSSAM III. This hydrology transport model is based upon a pollutant loading metric called Total maximum daily load (TMDL). The success of this flagship model contributed to the Agency's broadened commitment to the use of the underlying TMDL protocol in its national policy for management of most river systems in the United States.
The Truckee River drains an area of approximately 3,120 square miles, not counting the extent of its Lake Tahoe sub-basin. The DSSAM model establishes numerous stations along the entire river extent as well as a considerable number of monitoring points inside the Great Basin's Pyramid Lake, the receiving waters of this closed hydrological system. Although the region is sparsely populated, it is important because Lake Tahoe is visited by 20 million persons per annum and Truckee River water quality affects at least two endangered species: the Cui-ui sucker fish and the Lahontan cutthroat trout.
Development history
Impetus to derive a quantitative prediction model arose from a trend of historically decreasing river flow rates coupled with jurisdictional and tribal conflicts over water rights as well as concern for river biota. When expansion of the Reno-Sparks Wastewater Treatment Plant was proposed, the EPA decided to fund a large scale research effort to create simulation software and a parallel program to collect field data in the Truckee River and Pyramid Lake. For river stations, water quality measurements were made in the benthic zone as well as the photic zone; in the case of Pyramid Lake, boats were used to collect grab samples at varying depths and locations. Earth Metrics conducted the software development for the first generation computer model and collected field data on water quality and flow rates in the Truckee River. After model calibration, runs were made to evaluate impacts of alternative land use controls and discharge parameters for treated effluent.
The DSSAM Model is constructed to allow dynamic decay of most pollutants; for example, total nitrogen and phosphorus are allowed to be consumed by benthic algae in each time step, and the algal communities are given a separate population dynamic in each river reach (e.g. metabolic rate based upon river temperature). Sources throughout the watershed include non-point agricultural and urban stormwater as well as a multiplicity of point source discharges of treated municipal wastewater effluent.
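The following Python fragment is a minimal sketch of the kind of reach-by-reach, time-stepped mass balance described above. It is not the DSSAM code: the function names, rate constants and the temperature law are illustrative assumptions only.

```python
# Illustrative sketch of a reach-by-reach, time-stepped nutrient balance in the
# spirit of the model described above. It is NOT the DSSAM code; all parameter
# values and the temperature law are assumptions chosen for illustration.

def algal_uptake_rate(temp_c, base_rate=0.05):
    """Assumed first-order uptake rate (1/hour) that grows with river temperature."""
    return base_rate * 1.07 ** (temp_c - 20.0)   # simple Q10-style temperature scaling

def step_reach(nitrogen_kg, temp_c, point_load_kg, nonpoint_load_kg, dt_hours=1.0):
    """Advance one river reach by one time step: add loads, then let algae consume."""
    nitrogen_kg += point_load_kg + nonpoint_load_kg        # wastewater + runoff inputs
    uptake = algal_uptake_rate(temp_c) * nitrogen_kg * dt_hours
    return max(nitrogen_kg - uptake, 0.0)                  # mass cannot go negative

# Route a load through a few reaches with different temperatures and inputs.
load = 100.0   # kg of total nitrogen entering the upstream reach
for temp, point, nonpoint in [(12.0, 5.0, 2.0), (15.0, 0.0, 1.0), (18.0, 20.0, 3.0)]:
    load = step_reach(load, temp, point, nonpoint)
    print(f"load leaving reach: {load:.1f} kg")
```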
Subsequent to the first generation of DSSAM model development, calibration and application, later refinements were made. These augmentations to model functionality focused on increased flexibility in modeling the diel cycle and also allowed the analysis of particulate nitrogen and phosphorus to be included. In developing DSSAM III, several changes in the model's operation and scope were made.
Applications
Numerous different uses of the model have been made, including (a) analysis of public policies for urban stormwater runoff, (b) researching agricultural methods for surface runoff minimization, (c) innovative solutions for non-point source control and (d) engineering aspects of treated wastewater discharge. Regarding stormwater runoff in Washoe County, the specific elements within a new xeriscape ordinance were analyzed for efficacy using the model. For the varied agricultural uses in the watershed, the model was run to understand the principal sources of adverse impact, and management practices were developed to reduce in-river pollution. Use of the model has specifically been conducted to analyze survival of two endangered species found in the Truckee River and Pyramid Lake: the Cui-ui sucker fish (endangered 1967) and the Lahontan cutthroat trout (threatened 1970). When the model is used for surface runoff reaching a stream, this pollutant input can be viewed as a line source (e.g., a continuous linear source of pollution entering the waterway).
See also
Nonpoint source pollution
SWAT model
Stochastic Empirical Loading and Dilution Model
Storm Water Management Model
References
External links
U.S. Environmental Protection Agency TMDL program for the Truckee River
Final TMDL waste loads for the Truckee Basin derived from the DSSAM Model
Computer-aided engineering software
United States Environmental Protection Agency
Hydrology
Water pollution | DSSAM Model | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 938 | [
"Environmental engineering",
"Hydrology",
"Water pollution"
] |
5,457,540 | https://en.wikipedia.org/wiki/Ka/Ks%20ratio | {{DISPLAYTITLE:Ka/Ks ratio}}
In genetics, the Ka/Ks ratio, also known as ω or dN/dS ratio, is used to estimate the balance between neutral mutations, purifying selection and beneficial mutations acting on a set of homologous protein-coding genes. It is calculated as the ratio of the number of nonsynonymous substitutions per non-synonymous site (Ka), in a given period of time, to the number of synonymous substitutions per synonymous site (Ks), in the same period. The latter are assumed to be neutral, so that the ratio indicates the net balance between deleterious and beneficial mutations. Values of Ka/Ks significantly above 1 are unlikely to occur without at least some of the mutations being advantageous. If beneficial mutations are assumed to make little contribution, then Ka/Ks estimates the degree of evolutionary constraint.
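In symbols, writing $N_d$ for the observed nonsynonymous substitutions over $N$ nonsynonymous sites and $S_d$ for the synonymous substitutions over $S$ synonymous sites (a notation chosen here only for illustration), the ratio is

$$\omega \;=\; \frac{K_a}{K_s} \;=\; \frac{N_d / N}{S_d / S}.$$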
Context
Selection acts on variation in phenotypes, which are often the result of mutations in protein-coding genes. The genetic code is written in DNA sequences as codons, groups of three nucleotides. Each codon represents a single amino acid in a protein chain. However, there are more codons (64) than amino acids found in proteins (20), so many codons are effectively synonyms. For example, the DNA codons TTT and TTC both code for the amino acid Phenylalanine, so a change from the third T to C makes no difference to the resulting protein. On the other hand, the codon GAG codes for Glutamic acid while the codon GTG codes for Valine, so a change from the middle A to T does change the resulting protein, for better or (more likely) worse, so the change is not a synonym. These changes are illustrated in the tables below.
The Ka/Ks ratio measures the relative rates of synonymous and nonsynonymous substitutions at a particular site.
Methods
Methods for estimating Ka and Ks use a sequence alignment of two or more nucleotide sequences of homologous genes that code for proteins (rather than being genetic switches, controlling development or the rate of activity of other genes). Methods can be classified into three groups: approximate methods, maximum-likelihood methods, and counting methods. However, unless the sequences to be compared are distantly related (in which case maximum-likelihood methods prevail), the class of method used makes a minimal impact on the results obtained; more important are the assumptions implicit in the chosen method.
Approximate methods
Approximate methods involve three basic steps: (1) counting the number of synonymous and nonsynonymous sites in the two sequences, or estimating this number by multiplying the sequence length by the proportion of each class of substitution;
(2) counting the number of synonymous and nonsynonymous substitutions; and (3) correcting for multiple substitutions.
These steps, particularly the latter, require simplistic assumptions to be made if they are to be achieved computationally; for reasons discussed later, it is impossible to exactly determine the number of multiple substitutions.
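As a toy illustration of steps (1) and (2), the Python sketch below counts sites and substitutions for a pair of codon-aligned sequences. It deliberately omits step (3), the correction for multiple substitutions, and uses a genetic-code table truncated to the codons that actually occur, so it is a simplified stand-in for, not an implementation of, published counting methods such as Nei–Gojobori.

```python
# Toy sketch of the "approximate" recipe above: count synonymous/nonsynonymous
# sites and substitutions for two aligned coding sequences. Real methods also
# correct for multiple hits; that step is omitted here. The codon table is
# truncated to the codons appearing in the example sequences.

CODON_TABLE = {"TTT": "F", "TTC": "F", "GAG": "E", "GTG": "V",
               "AAA": "K", "AAG": "K", "AAT": "N", "AAC": "N"}
BASES = "ACGT"

def site_fractions(codon):
    """Return (synonymous, nonsynonymous) site counts for one codon."""
    syn = 0.0
    for pos in range(3):
        changes = [codon[:pos] + b + codon[pos+1:] for b in BASES if b != codon[pos]]
        changes = [c for c in changes if c in CODON_TABLE]  # ignore codons outside the toy table
        if changes:
            syn += sum(CODON_TABLE[c] == CODON_TABLE[codon] for c in changes) / len(changes)
    return syn, 3.0 - syn

def count_substitutions(codon_a, codon_b):
    """Classify each differing position as synonymous or nonsynonymous (single-step only)."""
    syn = nonsyn = 0
    for pos in range(3):
        if codon_a[pos] != codon_b[pos]:
            mutant = codon_a[:pos] + codon_b[pos] + codon_a[pos+1:]
            if CODON_TABLE.get(mutant) == CODON_TABLE[codon_a]:
                syn += 1
            else:
                nonsyn += 1
    return syn, nonsyn

seq1, seq2 = ["TTT", "GAG", "AAA"], ["TTC", "GTG", "AAA"]
S = N = Sd = Nd = 0.0
for a, b in zip(seq1, seq2):
    s, n = site_fractions(a); S += s; N += n
    sd, nd = count_substitutions(a, b); Sd += sd; Nd += nd
print("Ka/Ks ~", (Nd / N) / (Sd / S))
```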
Maximum-likelihood methods
The maximum-likelihood approach uses probability theory to complete all three steps simultaneously. It estimates critical parameters, including the divergence between sequences and the transition/transversion ratio, by deducing the most likely values to produce the input data.
Counting methods
In order to quantify the number of substitutions, one may reconstruct the ancestral sequence and record the inferred changes at sites (straight counting – likely to provide an underestimate); fitting the substitution rates at sites into predetermined categories (Bayesian approach; poor for small data sets); and generating an individual substitution rate for each codon (computationally expensive). Given enough data, all three of these approaches will tend to the same result.
Interpreting results
The Ka/Ks ratio is used to infer the direction and magnitude of natural selection acting on protein coding genes. A ratio greater than 1 implies positive or Darwinian selection (driving change); less than 1 implies purifying or stabilizing selection (acting against change); and a ratio of exactly 1 indicates neutral (i.e. no) selection. However, a combination of positive and purifying selection at different points within the gene or at different times along its evolution may cancel each other out. The resulting averaged value can mask the presence of one of the selections and lower the seeming magnitude of another selection.
Of course, it is necessary to perform a statistical analysis to determine whether a result is significantly different from 1, or whether any apparent difference may occur as a result of a limited data set. The appropriate statistical test for an approximate method involves approximating dN − dS with a normal approximation, and determining whether 0 falls within the central region of the approximation. More sophisticated likelihood techniques can be used to analyse the results of a Maximum Likelihood analysis, by performing a chi-squared test to distinguish between a null model (Ka/Ks = 1) and the observed results.
Utility
The Ka/Ks ratio is a more powerful test of the neutral model of evolution than many others available in population genetics as it requires fewer assumptions.
Complications
There is often a systematic bias in the frequency at which various nucleotides are swapped, as certain mutations are more probable than others. For instance, some lineages may swap C to T more frequently than they swap C to A. In the case of the amino acid Asparagine, which is coded by the codons AAT or AAC, a high C→T exchange rate will increase the proportion of synonymous substitutions at this codon, whereas a high C→A exchange rate will increase the rate of non-synonymous substitutions. Because it is rather common for transitions (T↔C & A↔G) to be favoured over transversions (other changes), models must account for the possibility of non-homogeneous rates of exchange. Some simpler approximate methods, such as those of Miyata & Yasunaga and Nei & Gojobori, neglect to take these into account, which speeds computation at the expense of accuracy; these methods will systematically overestimate N and underestimate S.
Further, there may be a bias in which certain codons are preferred in a gene, as a certain combination of codons may improve translational efficiency. A 2022 study reported that synonymous mutations in representative yeast genes are mostly strongly non-neutral, which calls into question the assumptions underlying use of the Ka/Ks ratio.
In addition, as time progresses, it is possible for a site to undergo multiple modifications. For instance, a codon may switch from AAA→AAC→AAT→AAA. There is no way of detecting multiple substitutions at a single site, thus the estimate of the number of substitutions is always an underestimate. In addition, in the example above two non-synonymous and one synonymous substitution occurred at the third site; however, because substitutions restored the original sequence, there is no evidence of any substitution. As the divergence time between two sequences increases, so too does the amount of multiple substitutions. Thus "long branches" in a dN/dS analysis can lead to underestimates of both dN and dS, and the longer the branch, the harder it is to correct for the introduced noise. Of course, the ancestral sequence is usually unknown, and two lineages being compared will have been evolving in parallel since their last common ancestor. This effect can be mitigated by constructing the ancestral sequence; the accuracy of this sequence is enhanced by having a large number of sequences descended from that common ancestor to constrain its sequence by phylogenetic methods.
Methods that account for biases in codon usage and transition/transversion rates are substantially more reliable than those that do not.
Limitations
Although the Ka/Ks ratio is a good indicator of selective pressure at the sequence level, evolutionary change can often take place in the regulatory region of a gene which affects the level, timing or location of gene expression. Ka/Ks analysis will not detect such change. It will only calculate selective pressure within protein coding regions. In addition, selection that does not cause differences at an amino acid level—for instance, balancing selection—cannot be detected by these techniques.
Another issue is that heterogeneity within a gene can make a result hard to interpret. For example, if Ka/Ks = 1, it could be due to relaxed selection, or to a chimera of positive and purifying selection at the locus. A solution to this limitation would be to apply Ka/Ks analysis across many species at individual codons.
The Ka/Ks method requires a rather strong signal in order to detect selection.
In order to detect selection between lineages, then the selection, averaged over all sites in the sequence, must produce a Ka/Ks greater than one—quite a feat if regions of the gene are strongly conserved.
In order to detect selection at specific sites, then the Ka/Ks ratio must be greater than one when averaged over all included lineages at that site—implying that the site must be under selective pressure in all sampled lineages.
This limitation can be moderated by allowing the Ka/Ks rate to take multiple values across sites and across lineages; the inclusion of more lineages also increases the power of a sites-based approach.
Further, the method lacks the capability to distinguish between positive and negative nonsynonymous substitutions. Some amino acids are chemically similar to one another, whereas other substitutions may place an amino acid with wildly different properties to its precursor. In most situations, a smaller chemical change is more likely to allow the protein to continue to function, and a large chemical change is likely to disrupt the chemical structure and cause the protein to malfunction. However, incorporating this into a model is not straightforward as the relationship between a nucleotide substitution and the effects of the modified chemical properties is very difficult to determine.
An additional concern is that the effects of time must be incorporated into an analysis, if the lineages being compared are closely related; this is because it can take a number of generations for natural selection to "weed out" deleterious mutations from a population, especially if their effect on fitness is weak. This limits the usefulness of the Ka/Ks ratio for comparing closely related populations.
Individual codon approach
Additional information can be gleaned by determining the Ka/Ks ratio at specific codons within a gene sequence. For instance, the frequency-tuning region of an opsin may be under enhanced selective pressure when a species colonises and adapts to a new environment, whereas the region responsible for initializing a nerve signal may be under purifying selection. In order to detect such effects, one would ideally calculate the Ka/Ks ratio at each site. However, this is computationally expensive and, in practice, a number of Ka/Ks classes are established, and each site is assigned to the best-fitting class.
The first step in identifying whether positive selection acts on sites is to compare a test where the Ka/Ks ratio is constrained to be < 1 in all sites to one where it may take any value, and see if permitting Ka/Ks to exceed 1 in some sites improves the fit of the model. If this is the case, then sites fitting into the class where Ka/Ks > 1 are candidates to be experiencing positive selection. This form of test can either identify sites that further laboratory research can examine to determine possible selective pressure; or, sites believed to have functional significance can be assigned into different Ka/Ks classes before the model is run.
Notes
References
Further reading
External links
KaKs_Calculator
Free online server tool that calculates KaKs ratios among multiple sequences
SeqinR: A free and open biological sequence analysis package for the R language that includes KaKs calculation
Molecular evolution
Genetics
Statistical ratios | Ka/Ks ratio | [
"Chemistry",
"Biology"
] | 2,434 | [
"Evolutionary processes",
"Genetics",
"Molecular evolution",
"Molecular biology"
] |
5,458,969 | https://en.wikipedia.org/wiki/Fraunhofer%20Institute%20for%20Applied%20Optics%20and%20Precision%20Engineering | The Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), also referred to as the Fraunhofer IOF, is an institute of the Fraunhofer Society for the Advancement of Applied Research (FHG). The institute is based in Jena. Its activities are attributed to applied research and development in the branch of natural sciences in the field of optics and precision engineering. The institute was founded in 1992.
Research and development
Building upon the experience of the Jena region in the field of surface and thin film technologies for optics, the Fraunhofer IOF conducts research and development in the area of optical systems aiming at enhancing the control of light – from its generation and manipulation to its actual use. The combination of competences in the areas of optics and precision engineering is particularly important.
The focuses also result in the department structure:
Opto-mechanical System Design
Micro and Nano-structured Optics
Opto-mechatronical Components and Systems
Precision Optical Components and Systems
Functional Optical Surfaces and Layers
Laser- and Fiber Technology
Imaging and Sensing
Emerging Technologies
see also: thin film technology, surface physics, microstructure technology, nanotechnology, micro-optics, measurement technology, quantum technology
CMN-Optics
In July 2006, the Fraunhofer IOF opened the Center for Advanced Micro- and Nano-Optics (CMN-Optics). The core of the facility is the SB350-OS electron beam lithography system. This device, also known as an "electron beam recorder", allows minimal structure sizes in the range of 50 nm with a high accuracy on substrate sizes up to 300 mm. The center is operated jointly with the Institute for Applied Physics (IAP) of the Friedrich Schiller University of Jena. The facility is also used by the Institute for Photonic Technologies (IPHT), Jena. The facility cost twelve million euros and was financed by the European Union, the Free State of Thuringia and the Fraunhofer Society.
Cooperations
In 2003, the Fraunhofer Society concluded a cooperation agreement with the Friedrich Schiller University of Jena. It is the basis for collaboration between Fraunhofer IOF staff and the staff of the Institute of Applied Physics at the University of Jena. The aim of the cooperation is to provide practical training for the students, to improve the implementation of research results into practice and to share the high-quality equipment and infrastructures of both institutions.
Infrastructure
At the end of 2020, Fraunhofer IOF had almost 330 employees, most of whom are scientists and technicians.
The operating budget for the Fraunhofer IOF was EUR 51.5 million in the 2020 financial year.
The Fraunhofer IOF has been headed by Andreas Tünnermann since 2003, who is also Director of the Institute of Applied Physics at the Friedrich Schiller University in Jena.
The institute has excellently equipped laboratories covering 3,830 m², plus 1,115 m² of clean-room space (ISO class 1 to ISO class 7), a mechanics workshop meeting the highest demands, and a test field for extensive testing and demonstration purposes.
In the years 2002 and 2013 expansion facilities were built on the Beutenberg Campus in Jena.
In 2017 a fiber technology center was inaugurated at Fraunhofer IOF, which includes new special laboratories for the production of active and passive micro- and nanostructured optical fibers and one of the world's most powerful fiber drawing towers.
Awards
German Future Prize 2007
In cooperation with the semiconductor manufacturer Osram Opto Semiconductors of Regensburg, researchers from the Fraunhofer Institute in Jena, headed by Andreas Bräuer, received the German Future Prize, worth EUR 250,000, on December 6, 2007. Their innovation consisted of improved chips, packages and a special optical system that enable more powerful light-emitting diodes.
German Future Prize 2013
Stefan Nolte of the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) and Friedrich Schiller University Jena was, together with Jens König (Robert Bosch GmbH) and Dirk Sutter (Trumpf Lasers), awarded the prize for their work with ultra-short pulse lasers on December 4, 2013.
German Future Prize 2020
For their project "EUV Lithography - New Light for the Digital Age", the team of experts led by Sergiy Yulin (Fraunhofer IOF), Peter Kürz (ZEISS Semiconductor Manufacturing Technology (SMT) division), and Michael Kösters (TRUMPF Lasersystems for Semiconductor Manufacturing) was awarded the prize for technology and innovation.
References
External links
Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) - Official website
1992 establishments in Germany
Engineering research institutes
Fraunhofer Society
Laboratories in Germany
Organizations established in 1992
Robotics organizations | Fraunhofer Institute for Applied Optics and Precision Engineering | [
"Engineering"
] | 975 | [
"Engineering research institutes"
] |
2,156,176 | https://en.wikipedia.org/wiki/Delay-insensitive%20minterm%20synthesis | Within digital electronics, the DIMS (delay-insensitive minterm synthesis) system is an asynchronous design methodology making the least possible timing assumptions. Assuming only the quasi-delay-insensitive delay model the generated designs need little if any timing hazard testing. The basis for DIMS is the use of two wires to represent each bit of data. This is known as a dual-rail data encoding. Parts of the system communicate using the early four-phase asynchronous protocol.
The construction of DIMS logic gates comprises generating every possible minterm using a row of C-elements and then gathering the outputs of these using OR gates which generate the true and false output signals. With two dual-rail inputs the gate would be composed of four two-input C-elements. A three input gate uses eight three-input C-elements.
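As a behavioural illustration of that construction (a Python sketch rather than a gate-level netlist; the class and signal names are invented for the example), a two-input DIMS AND gate can be modelled as four C-elements, one per minterm of the dual-rail inputs, whose outputs are gathered by OR gates onto the false and true output rails:

```python
# Behavioural sketch of a 2-input DIMS AND gate. Each dual-rail input is a pair
# (false_rail, true_rail); exactly one rail is 1 when data is valid, and both are 0
# when the channel is empty (the "spacer" of the four-phase protocol).

class CElement:
    """Muller C-element: output follows the inputs when they agree, else holds state."""
    def __init__(self):
        self.state = 0
    def __call__(self, *inputs):
        if all(x == 1 for x in inputs):
            self.state = 1
        elif all(x == 0 for x in inputs):
            self.state = 0
        return self.state

class DimsAnd2:
    def __init__(self):
        # One C-element per minterm of the two dual-rail inputs.
        self.c_ff, self.c_ft, self.c_tf, self.c_tt = (CElement() for _ in range(4))
    def __call__(self, a, b):
        a_f, a_t = a
        b_f, b_t = b
        m_ff = self.c_ff(a_f, b_f)   # a=0, b=0 -> result 0
        m_ft = self.c_ft(a_f, b_t)   # a=0, b=1 -> result 0
        m_tf = self.c_tf(a_t, b_f)   # a=1, b=0 -> result 0
        m_tt = self.c_tt(a_t, b_t)   # a=1, b=1 -> result 1
        out_false = m_ff or m_ft or m_tf   # OR gate gathering the "0" minterms
        out_true = m_tt                    # the single "1" minterm
        return (out_false, out_true)

gate = DimsAnd2()
print(gate((0, 1), (0, 1)))   # valid inputs a=1, b=1 -> (0, 1), i.e. output is 1
print(gate((0, 0), (0, 0)))   # spacer (all rails low) -> (0, 0), output returns to empty
```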
Latches are constructed using two C-elements to store the data and an OR gate, whose inputs are the data output wires, to acknowledge the input once the data has been latched. The acknowledge signal from the following stage is inverted and passed to the C-elements to allow them to reset once the computation has completed. This latch design is known as the 'half latch'. Other asynchronous latches provide higher data capacity and greater levels of decoupling.
DIMS designs are large and slow but they have the advantage of being very robust.
Further reading
Jens Sparsø, Steve Furber: "Principles of Asynchronous Circuit Design"; Kluwer, Dordrecht (2001); chapter 5.5.1.
Digital electronics | Delay-insensitive minterm synthesis | [
"Engineering"
] | 337 | [
"Electronic engineering",
"Digital electronics"
] |
2,158,403 | https://en.wikipedia.org/wiki/Quantum%20weirdness | Quantum weirdness encompasses the aspects of quantum mechanics that challenge and defy human physical intuition.
Human physical intuition is based on macroscopic physical phenomena as are experienced in everyday life, which can mostly be adequately described by the Newtonian mechanics of classical physics. Early 20th-century models of atomic physics, such as the Rutherford–Bohr model, represented subatomic particles as little balls occupying well-defined spatial positions, but it was soon found that the physics needed at a subatomic scale, which became known as "quantum mechanics", implies many aspects for which the models of classical physics are inadequate. These aspects include:
quantum entanglement;
quantum nonlocality, referred to by Einstein as "spooky action at a distance"; see also EPR paradox;
quantum superposition, presented in dramatic form in the thought experiment known as Schrödinger's cat;
the uncertainty principle;
wave–particle duality;
the probabilistic nature of wave function collapse, which Einstein decried with the remark "God does not play dice".
See also
Bell's theorem
Interpretations of quantum mechanics
Quantum tunneling
Renninger negative-result experiment
Wheeler's delayed-choice experiment
References
Further reading
Book reviews
Articles
Quantum mechanics | Quantum weirdness | [
"Physics"
] | 252 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
2,158,646 | https://en.wikipedia.org/wiki/Double%20electron%20capture | Double electron capture is a decay mode of an atomic nucleus. For a nuclide (A, Z) with a number of nucleons A and atomic number Z, double electron capture is only possible if the mass of the nuclide (A, Z−2) is lower.
In this mode of decay, two of the orbital electrons are captured via the weak interaction by two protons in the nucleus, forming two neutrons; two neutrinos are emitted in the process. Since the protons are changed to neutrons, the number of neutrons increases by two, while the number of protons Z decreases by two, and the atomic mass number A remains unchanged. Since it reduces the atomic number by two, double electron capture transforms the nuclide into a different element.
Example:

(A, Z) + 2 e⁻ → (A, Z−2) + 2 νₑ
Rarity
In most cases this decay mode is masked by other, more probable modes involving fewer particles, such as single electron capture. When all other modes are “forbidden” (strongly suppressed), double electron capture becomes the main mode of decay. There exist 34 naturally occurring nuclei that are believed to undergo double electron capture, but the process has been confirmed by observation in the decay of only three nuclides: 78Kr, 124Xe, and 130Ba.
One reason is that the probability of double electron capture is stupendously small; the half-lives for this mode lie well above 10²⁰ years. A second reason is that the only detectable particles created in this process are X-rays and Auger electrons that are emitted by the excited atomic shell. In the range of their energies (~1–10 keV), the background is usually high. Thus, the experimental detection of double electron capture is more difficult than that for double beta decay.
Double electron capture can be accompanied by the excitation of the daughter nucleus. Its de-excitation, in turn, is accompanied by an emission of photons with energies of hundreds of keV.
Modes with positron emission
If the mass difference between the mother and daughter atoms is more than two masses of an electron (1.022 MeV), the energy released in the process is enough to allow another mode of decay, called electron capture with positron emission. It occurs along with double electron capture, their branching ratio depending on nuclear properties.
When the mass difference is more than four electron masses (2.044 MeV), the third mode, called double positron decay, is allowed. Only six naturally occurring nuclides (78Kr, 96Ru, 106Cd, 124Xe, 130Ba, and 136Ce) plus the non-primordial 148Gd and 154Dy are energetically allowed to decay via these three modes simultaneously.
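In terms of atomic masses $M(A, Z)$, the energetic conditions described above can be summarised as follows (using $m_e c^2 \approx 0.511$ MeV and neglecting atomic binding-energy corrections):

$$\begin{aligned}
\text{double electron capture:} \quad & M(A,Z) - M(A,Z-2) > 0,\\
\text{electron capture with } \beta^+ \text{ emission:} \quad & M(A,Z) - M(A,Z-2) > 2 m_e c^2 \approx 1.022\ \text{MeV},\\
\text{double } \beta^+ \text{ decay:} \quad & M(A,Z) - M(A,Z-2) > 4 m_e c^2 \approx 2.044\ \text{MeV}.
\end{aligned}$$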
Neutrinoless double electron capture
The above-described process with the capture of two electrons and emission of two neutrinos (two-neutrino double electron capture) is allowed by the Standard Model of particle physics: No conservation laws (including lepton number conservation) are violated. However, if the lepton number is not conserved, or equivalently the neutrino is its own antiparticle, another kind of process can occur: the so-called neutrinoless double electron capture. In this case, two electrons are captured by nucleus, but neutrinos are not emitted. The energy released in this process is carried away by an internal bremsstrahlung gamma quantum.
Example:

(A, Z) + 2 e⁻ → (A, Z−2) + γ (internal bremsstrahlung)
This mode of decay has never been observed experimentally, and would contradict the Standard Model if it were observed.
See also
Double beta decay
Neutrinoless double beta decay
Beta decay
Neutrino
Particle radiation
Radioactive isotope
References
External links
Nuclear physics
Radioactivity | Double electron capture | [
"Physics",
"Chemistry"
] | 826 | [
"Radioactivity",
"Nuclear physics"
] |
2,159,714 | https://en.wikipedia.org/wiki/Saturation%20%28magnetic%29 | Seen in some magnetic materials, saturation is the state reached when an increase in applied external magnetic field H cannot increase the magnetization of the material further, so the total magnetic flux density B more or less levels off. (Though, magnetization continues to increase very slowly with the field due to paramagnetism.) Saturation is a characteristic of ferromagnetic and ferrimagnetic materials, such as iron, nickel, cobalt and their alloys. Different ferromagnetic materials have different saturation levels.
Description
Saturation is most clearly seen in the magnetization curve (also called BH curve or hysteresis curve) of a substance, as a bending to the right of the curve (see graph at right). As the H field increases, the B field approaches a maximum value asymptotically, the saturation level for the substance. Technically, above saturation, the B field continues increasing, but at the paramagnetic rate, which is several orders of magnitude smaller than the ferromagnetic rate seen below saturation.
The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability: μ = B/H, or the relative permeability μᵣ = μ/μ₀, where μ₀ is the vacuum permeability. The permeability of ferromagnetic materials is not constant, but depends on H. In saturable materials the relative permeability increases with H to a maximum, then as it approaches saturation inverts and decreases toward one.
Different materials have different saturation levels. For example, high permeability iron alloys used in transformers reach magnetic saturation at 1.6–2.2 teslas (T), whereas ferrites saturate at 0.2–0.5 T. Some amorphous alloys saturate at 1.2–1.3 T. Mu-metal saturates at around 0.8 T.
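A crude way to visualise the bend in the curve is to model an anhysteretic B–H relation with a saturating function; the hyperbolic-tangent form and the parameter values in the sketch below are illustrative assumptions, not data for any particular material, and real materials additionally show hysteresis, which this ignores.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (H/m)

def b_field(h, b_sat=2.0, mu_r_initial=5000.0):
    """Toy anhysteretic B-H curve: the ferromagnetic part levels off at b_sat (tesla),
    while the mu0*H term keeps creeping up slowly above saturation (paramagnetic-like)."""
    return b_sat * np.tanh(mu_r_initial * MU0 * h / b_sat) + MU0 * h

for h in [100, 1_000, 10_000, 100_000]:        # applied field H in A/m
    b = b_field(h)
    print(f"H = {h:>7} A/m  ->  B = {b:5.2f} T, relative permeability ~ {b / (MU0 * h):7.1f}")
```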
Explanation
Ferromagnetic materials (like iron) are composed of microscopic regions called magnetic domains, that act like tiny permanent magnets that can change their direction of magnetization. Before an external magnetic field is applied to the material, the domains' magnetic fields are oriented in random directions, effectively cancelling each other out, so the net external magnetic field is negligibly small. When an external magnetizing field H is applied to the material, it penetrates the material and aligns the domains, causing their tiny magnetic fields to turn and align parallel to the external field, adding together to create a large magnetic field B which extends out from the material. This is called magnetization. The stronger the external magnetic field H, the more the domains align, yielding a higher magnetic flux density B. Eventually, at a certain external magnetic field, the domain walls have moved as far as they can, and the domains are as aligned as the crystal structure allows them to be, so there is negligible change in the domain structure on increasing the external magnetic field above this. The magnetization remains nearly constant, and is said to have saturated. The domain structure at saturation depends on the temperature.
Effects and uses
Saturation puts a practical limit on the maximum magnetic fields achievable in ferromagnetic-core electromagnets and transformers of around 2 T, which puts a limit on the minimum size of their cores. This is one reason why high power motors, generators, and utility transformers are physically large; to conduct the large amounts of magnetic flux necessary for high power production, they must have large magnetic cores. In applications in which the weight of magnetic cores must be kept to a minimum, such as transformers and electric motors in aircraft, a high saturation alloy such as Permendur is often used.
In electronic circuits, transformers and inductors with ferromagnetic cores operate nonlinearly when the current through them is large enough to drive their core materials into saturation. This means that their inductance and other properties vary with changes in drive current. In linear circuits this is usually considered an unwanted departure from ideal behavior. When AC signals are applied, this nonlinearity can cause the generation of harmonics and intermodulation distortion. To prevent this, the level of signals applied to iron core inductors must be limited so they don't saturate. To lower its effects, an air gap is created in some kinds of transformer cores. The saturation current, the current through the winding required to saturate the magnetic core, is given by manufacturers in the specifications for many inductors and transformers.
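As a rough orientation (not a manufacturer's formula), for an ungapped toroidal core with $N$ turns, effective magnetic path length $l_e$ and permeability $\mu$, Ampère's law gives $B = \mu N I / l_e$, so the current that drives the core to its saturation flux density $B_\text{sat}$ can be estimated as

$$I_\text{sat} \approx \frac{B_\text{sat}\, l_e}{\mu N},$$

an estimate that ignores fringing and the fall of $\mu$ as the core approaches saturation.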
On the other hand, saturation is exploited in some electronic devices. Saturation is employed to limit current in saturable-core transformers, used in arc welding, and ferroresonant transformers which serve as voltage regulators. When the primary current exceeds a certain value, the core is pushed into its saturation region, limiting further increases in secondary current. In a more sophisticated application, saturable core inductors and magnetic amplifiers use a DC current through a separate winding to control an inductor's impedance. Varying the current in the control winding moves the operating point up and down on the saturation curve, controlling the alternating current through the inductor. These are used in variable fluorescent light ballasts, and power control systems.
Saturation is also exploited in fluxgate magnetometers and fluxgate compasses.
In some audio applications, saturable transformers or inductors are deliberately used to introduce distortion into an audio signal. Magnetic saturation generates odd-order harmonics, typically introducing third and fifth harmonic distortion to the lower and mid frequency range.
See also
Magnetic reluctance
Permendur/Hiperco
References
Magnetic hysteresis
Audio effects | Saturation (magnetic) | [
"Physics",
"Materials_science"
] | 1,170 | [
"Physical phenomena",
"Hysteresis",
"Magnetic hysteresis"
] |
2,160,429 | https://en.wikipedia.org/wiki/Dynamical%20billiards | A dynamical billiard is a dynamical system in which a particle alternates between free motion (typically as a straight line) and specular reflections from a boundary. When the particle hits the boundary it reflects from it without loss of speed (i.e. elastic collisions). Billiards are Hamiltonian idealizations of the game of billiards, but where the region contained by the boundary can have shapes other than rectangular and even be multidimensional. Dynamical billiards may also be studied on non-Euclidean geometries; indeed, the first studies of billiards established their ergodic motion on surfaces of constant negative curvature. The study of billiards which are kept out of a region, rather than being kept in a region, is known as outer billiard theory.
The motion of the particle in the billiard is a straight line, with constant energy, between reflections with the boundary (a geodesic if the Riemannian metric of the billiard table is not flat). All reflections are specular: the angle of incidence just before the collision is equal to the angle of reflection just after the collision. The sequence of reflections is described by the billiard map that completely characterizes the motion of the particle.
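For a simple table the billiard map is easy to iterate numerically. The Python sketch below follows a particle inside the unit disc: free flight between collisions, then specular reflection about the normal at each collision. The table shape, the starting data and the function names are chosen only for illustration.

```python
import numpy as np

def fly_to_boundary(p, v):
    """Free flight: time t until |p + t v| = 1 for a particle inside the unit disc (|v| = 1)."""
    b = np.dot(p, v)
    t = -b + np.sqrt(b * b - (np.dot(p, p) - 1.0))   # positive root of the quadratic
    return p + t * v

def reflect(p, v):
    """Specular reflection: on the unit circle the outward normal at p is p itself."""
    n = p / np.linalg.norm(p)
    return v - 2.0 * np.dot(v, n) * n

p = np.array([0.1, 0.0])                  # initial position inside the disc
v = np.array([np.cos(0.7), np.sin(0.7)])  # unit-speed initial velocity
for _ in range(5):
    p = fly_to_boundary(p, v)             # straight-line flight to the next collision
    v = reflect(p, v)                     # angle of incidence = angle of reflection
    print(f"collision at ({p[0]:+.3f}, {p[1]:+.3f}), new direction ({v[0]:+.3f}, {v[1]:+.3f})")
```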
Billiards capture all the complexity of Hamiltonian systems, from integrability to chaotic motion, without the difficulties of integrating the equations of motion to determine its Poincaré map. Birkhoff showed that a billiard system with an elliptic table is integrable.
Equations of motion
The Hamiltonian for a particle of mass m moving freely without friction on a surface is:

$H(p,q) = \frac{p^2}{2m} + V(q)$

where $V(q)$ is a potential designed to be zero inside the region $\Omega$ in which the particle can move, and infinity otherwise:

$V(q) = \begin{cases} 0 & q \in \Omega \\ \infty & q \notin \Omega \end{cases}$

This form of the potential guarantees a specular reflection on the boundary. The kinetic term guarantees that the particle moves in a straight line, without any change in energy. If the particle is to move on a non-Euclidean manifold, then the Hamiltonian is replaced by:

$H(p,q) = \frac{1}{2m}\, p_i p_j g^{ij}(q) + V(q)$

where $g^{ij}(q)$ is the metric tensor (with raised indices) at the point $q$. Because of the very simple structure of this Hamiltonian, the equations of motion for the particle, the Hamilton–Jacobi equations, are nothing other than the geodesic equations on the manifold: the particle moves along geodesics.
Notable billiards and billiard classes
Hadamard's billiards
Hadamard's billiards concern the motion of a free point particle on a surface of constant negative curvature, in particular, the simplest compact Riemann surface with negative curvature, a surface of genus 2 (a two-holed donut). The model is exactly solvable, and is given by the geodesic flow on the surface. It is the earliest example of deterministic chaos ever studied, having been introduced by Jacques Hadamard in 1898.
Artin's billiard
Artin's billiard considers the free motion of a point particle on a surface of constant negative curvature, in particular, the simplest non-compact Riemann surface, a surface with one cusp. It is notable for being exactly solvable, and yet not only ergodic but also strongly mixing. It is an example of an Anosov system. This system was first studied by Emil Artin in 1924.
Dispersing and semi-dispersing billiards
Let $M$ be a complete smooth Riemannian manifold without boundary, whose maximal sectional curvature is not greater than $K$ and whose injectivity radius is $\rho > 0$. Consider a collection of $n$ geodesically convex subsets (walls) $B_i \subset M$, $i = 1, \ldots, n$, such that their boundaries are smooth submanifolds of codimension one. Let $B = M \setminus \bigcup_{i=1}^{n} \operatorname{Int}(B_i)$, where $\operatorname{Int}(B_i)$ denotes the interior of the set $B_i$. The set $B$ will be called the billiard table.
Consider now a particle that moves inside the set B with unit speed along a geodesic until it reaches one of the sets $B_i$ (such an event is called a collision), where it reflects according to the law "the angle of incidence is equal to the angle of reflection" (if it reaches one of the sets $\partial B_i \cap \partial B_j$, $i \neq j$, the trajectory is not defined after that moment). Such a dynamical system is called a semi-dispersing billiard. If the walls are strictly convex, then the billiard is called dispersing. The naming is motivated by the observation that a locally parallel beam of trajectories disperses after a collision with a strictly convex part of a wall, but remains locally parallel after a collision with a flat section of a wall.
The dispersing boundary plays the same role for billiards as negative curvature does for geodesic flows, causing the exponential instability of the dynamics. It is precisely this dispersing mechanism that gives dispersing billiards their strongest chaotic properties, as was established by Yakov G. Sinai. Namely, the billiards are ergodic, mixing, and Bernoulli, having a positive Kolmogorov–Sinai entropy and an exponential decay of correlations.
Chaotic properties of general semi-dispersing billiards are not as well understood; however, those of one important type of semi-dispersing billiard, the hard-ball gas, have been studied in some detail since 1975 (see next section).
General results of Dmitri Burago and Serge Ferleger on the uniform estimation of the number of collisions in non-degenerate semi-dispersing billiards make it possible to establish the finiteness of their topological entropy and a no-more-than-exponential growth of periodic trajectories. In contrast, degenerate semi-dispersing billiards may have infinite topological entropy.
Lorentz gas, a.k.a. Sinai billiard
The table of the Lorentz gas (also known as Sinai billiard) is a square with a disk removed from its center; the table is flat, having no curvature. The billiard arises from studying the behavior of two interacting disks bouncing inside a square, reflecting off the boundaries of the square and off each other. By eliminating the center of mass as a configuration variable, the dynamics of two interacting disks reduces to the dynamics in the Sinai billiard.
The billiard was introduced by Yakov G. Sinai as an example of an interacting Hamiltonian system that displays physical thermodynamic properties: almost all (up to a measure zero) of its possible trajectories are ergodic and it has a positive Lyapunov exponent.
Sinai's great achievement with this model was to show that the classical Boltzmann–Gibbs ensemble for an ideal gas is essentially the maximally chaotic Hadamard billiards.
Bouncing ball billiard
A particle is subject to a constant force (e.g. the gravity of the Earth) and scatters inelastically on a periodically corrugated vibrating floor. When the floor is made of arcs of circles, one can, in a certain interval of frequencies, give semi-analytic estimates of the rate of exponential separation of the trajectories.
Bunimovich stadium
The table called the Bunimovich stadium is a rectangle capped by semicircles, a shape called a stadium. Until it was introduced by Leonid Bunimovich, billiards with positive Lyapunov exponents were thought to need convex scatterers, such as the disk in the Sinai billiard, to produce the exponential divergence of orbits. Bunimovich showed that by considering the orbits beyond the focusing point of a concave region it was possible to obtain exponential divergence.
Magnetic billiards
Magnetic billiards represent billiards where a charged particle is propagating under the presence of a perpendicular magnetic field. As a result, the particle trajectory changes from a straight line into an arc of a circle. The radius of this circle is inversely proportional to the magnetic field strength. Such billiards have been useful in real world applications of billiards, typically modelling nanodevices (see Applications).
Generalized billiards
Generalized billiards (GB) describe the motion of a mass point (a particle) inside a closed domain $\Pi$ with a piecewise smooth boundary $\Gamma$. On the boundary $\Gamma$ the velocity of the point is transformed as if the particle underwent the action of a generalized billiard law. GB were introduced by Lev D. Pustyl'nikov in the general case, and, in the case when $\Pi$ is a parallelepiped, in connection with the justification of the second law of thermodynamics. From the physical point of view, GB describe a gas consisting of finitely many particles moving in a vessel, while the walls of the vessel heat up or cool down. The essence of the generalization is the following. As the particle hits the boundary $\Gamma$, its velocity transforms with the help of a given function $f(\gamma, t)$, defined on the direct product $\Gamma \times \mathbb{R}^1$ (where $\mathbb{R}^1$ is the real line, $\gamma \in \Gamma$ is a point of the boundary and $t$ is time), according to the following law. Suppose that the trajectory of the particle, which moves with the velocity $v$, intersects $\Gamma$ at the point $\gamma \in \Gamma$ at time $t^*$. Then at time $t^*$ the particle acquires the velocity $v^*$, as if it underwent an elastic push from the infinitely-heavy plane $\Gamma^*$, which is tangent to $\Gamma$ at the point $\gamma$, and at time $t^*$ moves along the normal to $\Gamma$ at $\gamma$ with the velocity $\tfrac{\partial f}{\partial t}(\gamma, t^*)$. We emphasize that the position of the boundary itself is fixed, while its action upon the particle is defined through the function $f$.
We take the positive direction of motion of the plane $\Gamma^*$ to be towards the interior of $\Pi$. Thus if the derivative $\tfrac{\partial f}{\partial t}(\gamma, t) > 0$, then the particle accelerates after the impact.
If the velocity $v^*$, acquired by the particle as the result of the above reflection law, is directed to the interior of the domain $\Pi$, then the particle will leave the boundary and continue moving in $\Pi$ until the next collision with $\Gamma$. If the velocity $v^*$ is directed towards the outside of $\Pi$, then the particle remains on $\Gamma$ at the point $\gamma$ until at some later time the interaction with the boundary forces the particle to leave it.
If the function $f$ does not depend on time $t$, i.e., $\tfrac{\partial f}{\partial t} = 0$, the generalized billiard coincides with the classical one.
This generalized reflection law is very natural. First, it reflects the obvious fact that the walls of the vessel containing the gas are motionless. Second, the action of the wall on the particle is still the classical elastic push. In essence, we consider infinitesimally moving boundaries with given velocities.
The reflection from the boundary is considered both in the framework of classical mechanics (Newtonian case) and in the theory of relativity (relativistic case).
Main results: in the Newtonian case the energy of the particle is bounded and the Gibbs entropy is constant (see Notes), while in the relativistic case the energy of the particle, the Gibbs entropy, and the entropy with respect to the phase volume grow to infinity (see Notes and the references on generalized billiards).
Quantum chaos
The quantum version of the billiards is readily studied in several ways. The classical Hamiltonian for the billiards, given above, is replaced by the stationary-state Schrödinger equation $H\psi = E\psi$ or, more precisely,

$-\frac{\hbar^2}{2m}\nabla^2 \psi_n(q) = E_n \psi_n(q)$

where $\nabla^2$ is the Laplacian. The potential that is infinite outside the region $\Omega$ but zero inside it translates to the Dirichlet boundary conditions:

$\psi_n(q) = 0 \quad \text{for} \quad q \in \partial\Omega$

As usual, the wavefunctions are taken to be orthonormal:

$\int_\Omega \overline{\psi_m}(q)\, \psi_n(q)\, dq = \delta_{mn}$

Curiously, the free-field Schrödinger equation is the same as the Helmholtz equation,

$\left(\nabla^2 + k^2\right)\psi = 0$

with

$k^2 = \frac{2mE_n}{\hbar^2}$

This implies that two and three-dimensional quantum billiards can be modelled by the classical resonance modes of a radar cavity of a given shape, thus opening a door to experimental verification. (The study of radar cavity modes must be limited to the transverse magnetic (TM) modes, as these are the ones obeying the Dirichlet boundary conditions).

The semi-classical limit corresponds to $\hbar \to 0$, which can be seen to be equivalent to $m \to \infty$, the mass increasing so that it behaves classically.
As a general statement, one may say that whenever the classical equations of motion are integrable (e.g. rectangular or circular billiard tables), then the quantum-mechanical version of the billiards is completely solvable. When the classical system is chaotic, then the quantum system is generally not exactly solvable, and presents numerous difficulties in its quantization and evaluation. The general study of chaotic quantum systems is known as quantum chaos.
A particularly striking example of scarring on an elliptical table is given by the observation of the so-called quantum mirage.
Applications
Billiards, both quantum and classical, have been applied in several areas of physics to model quite diverse real-world systems. Examples include ray optics, lasers, acoustics, optical fibers (e.g. double-clad fibers), and quantum–classical correspondence. One of their most frequent applications is to model particles moving inside nanodevices, for example quantum dots, pn-junctions, and antidot superlattices, among others. The reason billiards are so broadly effective as physical models is that in situations with a small amount of disorder or noise, the movement of particles such as electrons, or of light rays, is very similar to the movement of the point particles in billiards. In addition, the energy-conserving nature of the particle collisions is a direct reflection of the energy conservation of Hamiltonian mechanics.
Software
Open source software to simulate billiards exist for various programming languages. From most recent to oldest, existing software are: DynamicalBilliards.jl (Julia), Bill2D (C++) and Billiard Simulator (Matlab). The animations present on this page were done with DynamicalBilliards.jl.
See also
Fermi–Ulam model (billiards with oscillating walls)
Lubachevsky–Stillinger algorithm of compression simulates hard spheres colliding not only with the boundaries but also among themselves while growing in sizes
Arithmetic billiards
Illumination problem
Notes
References
Sinai's billiards
(in English, Sov. Math Dokl. 4 (1963) pp. 1818–1822).
Ya. G. Sinai, "Dynamical Systems with Elastic Reflections", Russian Mathematical Surveys, 25, (1970) pp. 137–191.
V. I. Arnold and A. Avez, Théorie ergodique des systèms dynamiques, (1967), Gauthier-Villars, Paris. (English edition: Benjamin-Cummings, Reading, Mass. 1968). (Provides discussion and references for Sinai's billiards.)
D. Heitmann, J.P. Kotthaus, "The Spectroscopy of Quantum Dot Arrays", Physics Today (1993) pp. 56–63. (Provides a review of experimental tests of quantum versions of Sinai's billiards realized as nano-scale (mesoscopic) structures on silicon wafers.)
S. Sridhar and W. T. Lu, "Sinai Billiards, Ruelle Zeta-functions and Ruelle Resonances: Microwave Experiments", (2002) Journal of Statistical Physics, Vol. 108 Nos. 5/6, pp. 755–766.
Linas Vepstas, Sinai's Billiards, (2001). (Provides ray-traced images of Sinai's billiards in three-dimensional space. These images provide a graphic, intuitive demonstration of the strong ergodicity of the system.)
N. Chernov and R. Markarian, "Chaotic Billiards", 2006, Mathematical survey and monographs nº 127, AMS.
Strange billiards
T. Schürmann and I. Hoffmann, The entropy of strange billiards inside n-simplexes. J. Phys. A28, page 5033ff, 1995. PDF-Document
Bunimovich stadium
Flash animation illustrating the chaotic Bunimovich Stadium
Generalized billiards
M. V. Deryabin and L. D. Pustyl'nikov, "Generalized relativistic billiards", Reg. and Chaotic Dyn. 8(3), pp. 283–296 (2003).
M. V. Deryabin and L. D. Pustyl'nikov, "On Generalized Relativistic Billiards in External Force Fields", Letters in Mathematical Physics, 63(3), pp. 195–207 (2003).
M. V. Deryabin and L. D. Pustyl'nikov, "Exponential attractors in generalized relativistic billiards", Comm. Math. Phys. 248(3), pp. 527–552 (2004).
External links
Scholarpedia entry on Dynamical Billiards (Leonid Bunimovich)
Introduction to dynamical systems using billiards, Max Planck Institute for the Physics of Complex Systems
Dynamical systems
Cue sports | Dynamical billiards | [
"Physics",
"Mathematics"
] | 3,429 | [
"Mechanics",
"Dynamical systems"
] |
11,735,693 | https://en.wikipedia.org/wiki/Complex%20quadratic%20polynomial | A complex quadratic polynomial is a quadratic polynomial whose coefficients and variable are complex numbers.
Properties
Quadratic polynomials have the following properties, regardless of the form:
It is a unicritical polynomial, i.e. it has one finite critical point in the complex plane. The dynamical plane consists of at most 2 basins: the basin of infinity and the basin of the finite critical point (if the finite critical point does not escape)
It can be postcritically finite, i.e. the orbit of the critical point can be finite, because the critical point is periodic or preperiodic.
It is a unimodal function,
It is a rational function,
It is an entire function.
Forms
When the quadratic polynomial has only one variable (univariate), one can distinguish its four main forms:
The general form: $f(x) = a_2x^2 + a_1x + a_0$, where $a_2 \ne 0$
The factored form used for the logistic map: $f_r(x) = rx(1 - x)$
$f_\theta(x) = x^2 + \lambda x$ with $\lambda = e^{2\pi i\theta}$, which has an indifferent fixed point with multiplier $\lambda$ at the origin
The monic and centered form, $f_c(x) = x^2 + c$
The monic and centered form has been studied extensively, and has the following properties:
It is the simplest form of a nonlinear function with one coefficient (parameter),
It is a centered polynomial (the sum of its critical points is zero).
it is a binomial
The lambda form $f_\lambda(z) = z^2 + \lambda z$ is:
the simplest non-trivial perturbation of the unperturbed system $z \mapsto \lambda z$
"the first family of dynamical systems in which explicit necessary and sufficient conditions are known for when a small divisor problem is stable"
Conjugation
Between forms
Since $f_c(x) = x^2 + c$ is affine conjugate to the general form of the quadratic polynomial, it is often used to study complex dynamics and to create images of the Mandelbrot, Julia and Fatou sets.
When one wants to change from $\lambda$ to $c$:
$c = \frac{\lambda}{2}\left(1 - \frac{\lambda}{2}\right)$
When one wants to change from $r$ to $c$, the parameter transformation is
$c = \frac{r}{2} - \frac{r^2}{4}$
and the transformation between the variables in $z_{t+1} = z_t^2 + c$ and $x_{t+1} = r x_t(1 - x_t)$ is
$z = r\left(\frac{1}{2} - x\right).$
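A quick numerical check of the last two transformations (the value of r and the starting point are arbitrary test choices):

```python
# Verify numerically that z = r*(1/2 - x) conjugates the logistic map to z**2 + c
# with c = r/2 - r**2/4; r and x0 are arbitrary test values.
r, x = 3.2, 0.3
c = r / 2 - r ** 2 / 4
z = r * (0.5 - x)
for _ in range(10):
    x = r * x * (1 - x)          # logistic step
    z = z * z + c                # monic-centered step
    assert abs(z - r * (0.5 - x)) < 1e-9, "conjugacy violated"
print("c =", c, "- the two orbits agree under z = r*(1/2 - x)")
```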
With doubling map
There is semi-conjugacy between the dyadic transformation (the doubling map) and the quadratic polynomial case of c = –2.
Notation
Iteration
Here f^n denotes the n-th iterate of the function f:
so
Because of the possible confusion with exponentiation, some authors write f^∘n for the n-th iterate of f.
Parameter
The monic and centered form can be marked by:
the parameter
the external angle of the ray that lands:
at c in the Mandelbrot set on the parameter plane
on the critical value z = c in the Julia set on the dynamic plane
so :
Examples:
c is the landing point of the 1/6 external ray of the Mandelbrot set, and is (where i^2=-1)
c is the landing point of the 5/14 external ray and is with
Map
The monic and centered form, sometimes called the Douady-Hubbard family of quadratic polynomials, is typically used with variable z and parameter c:
When it is used as an evolution function of the discrete nonlinear dynamical system
it is named the quadratic map: zn+1 = zn^2 + c.
The Mandelbrot set is the set of values of the parameter c for which the initial condition z0 = 0 does not cause the iterates to diverge to infinity.
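As an illustration of this definition, a minimal escape-time sketch in Python follows; the iteration limit and escape radius are conventional finite-precision choices, not part of the definition itself.

```python
def in_mandelbrot(c, max_iter=500, escape_radius=2.0):
    """Return True if the critical orbit of z -> z^2 + c stays bounded for
    max_iter steps (a finite-precision stand-in for membership in M)."""
    z = 0j                      # iteration starts at the critical point z0 = 0
    for _ in range(max_iter):
        z = z * z + c           # one step of the quadratic map
        if abs(z) > escape_radius:
            return False        # orbit escapes to infinity, so c is outside M
    return True                 # orbit stayed bounded so far, so c is likely in M

# c = 0 and c = -1 belong to the Mandelbrot set, c = 1 does not.
print(in_mandelbrot(0), in_mandelbrot(-1), in_mandelbrot(1))
```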
Critical items
Critical points
complex plane
A critical point of fc is a point zcr on the dynamical plane such that the derivative vanishes: fc'(zcr) = 0.
Since fc'(z) = 2z,
the condition fc'(zcr) = 0 implies zcr = 0,
so we see that the only (finite) critical point of fc is the point zcr = 0.
z = 0 is the initial point for Mandelbrot set iteration.
For the quadratic family the critical point z = 0 is the center of symmetry of the Julia set Jc, so it is a convex combination of two points in Jc.
Extended complex plane
On the Riemann sphere, a polynomial of degree d has 2d − 2 critical points. Here zero and infinity are critical points.
Critical value
A critical value of fc is the image of a critical point:
vcr = fc(zcr).
Since zcr = 0 and fc(0) = c,
we have vcr = c, so the parameter c is the critical value of fc.
Critical level curves
A critical level curve is a level curve which contains a critical point. It acts as a sort of skeleton of the dynamical plane.
Example: level curves cross at a saddle point, which is a special type of critical point.
Critical limit set
The critical limit set is the set of forward orbits of all critical points.
Critical orbit
The forward orbit of a critical point is called a critical orbit. Critical orbits are very important because every attracting periodic orbit attracts a critical point, so studying the critical orbits helps us understand the dynamics in the Fatou set.
This orbit falls into an attracting periodic cycle if one exists.
Critical sector
The critical sector is a sector of the dynamical plane containing the critical point.
Critical set
The critical set is the set of critical points.
Critical polynomial
The critical polynomials are Qn(c) = fc^n(0), so Q1(c) = c, Q2(c) = c^2 + c, and Q3(c) = (c^2 + c)^2 + c.
These polynomials are used for:
finding centers of these Mandelbrot set components of period n; the centers are roots of the n-th critical polynomials (see the sketch after this list)
finding roots of Mandelbrot set components of period n (local minimum of )
Misiurewicz points
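A small sketch of the first use listed above, assuming the critical polynomials Qn(c) = fc^n(0) as defined earlier; Newton's method with a finite-difference derivative is used here only for illustration, and the starting guesses are arbitrary.

```python
def critical_polynomial(c, n):
    """Q_n(c): the n-th iterate of z -> z^2 + c starting from the critical point 0."""
    z = 0j
    for _ in range(n):
        z = z * z + c
    return z

def find_center(n, guess, steps=60, h=1e-8):
    """Approximate a root of Q_n (a center of a period-n component of the
    Mandelbrot set) by Newton iteration with a finite-difference derivative."""
    c = complex(guess)
    for _ in range(steps):
        q = critical_polynomial(c, n)
        dq = (critical_polynomial(c + h, n) - q) / h
        c = c - q / dq
    return c

print(find_center(2, -0.8))         # converges to c = -1, the period-2 center
print(find_center(3, -0.1 + 0.8j))  # one of the complex period-3 centers
```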
Critical curves
Diagrams of critical polynomials are called critical curves.
These curves create the skeleton (the dark lines) of a bifurcation diagram.
Spaces, planes
4D space
One can use the Julia-Mandelbrot 4-dimensional (4D) space for a global analysis of this dynamical system.
In this space there are two basic types of 2D planes:
the dynamical (dynamic) plane, fc-plane or z-plane
the parameter plane or c-plane
There is also another plane used to analyze such dynamical systems, the w-plane:
the conjugation plane
model plane
2D Parameter plane
The phase space of a quadratic map is called its parameter plane. Here:
z0 = 0 is constant and c is variable.
There is no dynamics here. It is only a set of parameter values. There are no orbits on the parameter plane.
The parameter plane consists of:
The Mandelbrot set
The bifurcation locus = boundary of Mandelbrot set with
root points
Bounded hyperbolic components of the Mandelbrot set = interior of Mandelbrot set with internal rays
exterior of Mandelbrot set with
external rays
equipotential lines
There are many different subtypes of the parameter plane.
See also:
the Boettcher map, which maps the exterior of the Mandelbrot set to the exterior of the unit disc
the multiplier map, which maps the interior of a hyperbolic component of the Mandelbrot set to the interior of the unit disc
2D Dynamical plane
"The polynomial Pc maps each dynamical ray to another ray doubling the angle (which we measure in full turns, i.e. 0 = 1 = 2π rad = 360°), and the dynamical rays of any polynomial "look like straight rays" near infinity. This allows us to study the Mandelbrot and Julia sets combinatorially, replacing the dynamical plane by the unit circle, rays by angles, and the quadratic polynomial by the doubling modulo one map." Virpi KaukoOn the dynamical plane one can find:
The Julia set
The Filled Julia set
The Fatou set
Orbits
The dynamical plane consists of:
Fatou set
Julia set
Here, c is a constant and z is a variable.
The two-dimensional dynamical plane can be treated as a Poincaré cross-section of the three-dimensional space of a continuous dynamical system.
Dynamical z-planes can be divided into two groups:
the plane for c = 0 (see complex squaring map)
the planes for c ≠ 0 (all other planes)
Riemann sphere
The extended complex plane (the complex plane plus a point at infinity) is also called
the Riemann sphere
Derivatives
First derivative with respect to c
On the parameter plane:
c is a variable
z0 = 0 is constant
The first derivative of fc^n(z0) with respect to c is denoted z'n.
This derivative can be found by iteration starting with z'0 = 0,
and then replacing z'n by z'n+1 = 2·zn·z'n + 1 at every consecutive step.
This can easily be verified by using the chain rule for the derivative.
This derivative is used in the distance estimation method for drawing a Mandelbrot set.
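A sketch of that iteration, assuming the starting value dz/dc = 0 (since the initial point z0 = 0 does not depend on c) and using one commonly quoted form of the exterior distance estimate; the iteration limit and escape radius are arbitrary choices.

```python
import math

def mandelbrot_distance_estimate(c, max_iter=500, escape_radius=1e6):
    """Iterate z and dz/dc together and return an estimate of the distance
    from c to the Mandelbrot set boundary, or None if the orbit stays bounded."""
    z, dz = 0j, 0j                            # z0 = 0 is constant, so dz0/dc = 0
    for _ in range(max_iter):
        z, dz = z * z + c, 2 * z * dz + 1     # map and its derivative w.r.t. c
        if abs(z) > escape_radius:
            # One common form of the estimate: d ~ 2 |z| ln|z| / |dz/dc|
            return 2.0 * abs(z) * math.log(abs(z)) / abs(dz)
    return None                               # no escape: c is (probably) in M

print(mandelbrot_distance_estimate(0.3))   # small positive number: 0.3 lies just outside M
print(mandelbrot_distance_estimate(-1.0))  # None: -1 is the center of the period-2 disc
```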
First derivative with respect to z
On the dynamical plane:
z is a variable;
c is a constant.
At a fixed point z0, the first derivative is fc'(z0) = 2·z0.
At a periodic point z0 of period p, the first derivative of the p-th iterate of the function,
(fc^p)'(z0), is often represented by λ and referred to as the multiplier or the Lyapunov characteristic number. Its logarithm is known as the Lyapunov exponent. The absolute value of the multiplier is used to check the stability of periodic (also fixed) points.
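A small sketch of this multiplier computation for the quadratic map; the cycles used below (the cycle {0, -1} for c = -1 and the fixed point z = 1 for c = 0) are easy to verify by hand, and the classification simply compares the absolute value of the multiplier with 1.

```python
def multiplier(cycle):
    """Multiplier (Lyapunov characteristic number) of a periodic orbit of
    z -> z^2 + c: the product of the derivative 2z over the points of the cycle."""
    lam = 1 + 0j
    for z in cycle:
        lam *= 2 * z
    return lam

def classify(lam, tol=1e-12):
    a = abs(lam)
    if a < 1 - tol:
        return "attracting (superattracting if the multiplier is 0)"
    if a > 1 + tol:
        return "repelling"
    return "indifferent"

# c = -1: the critical point 0 lies on the period-2 cycle {0, -1}.
lam2 = multiplier([0, -1])
print(lam2, classify(lam2))   # 0 -> superattracting

# c = 0: the fixed point z = 1 of z -> z^2 has multiplier 2.
lam1 = multiplier([1])
print(lam1, classify(lam1))   # 2 -> repelling
```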
At a nonperiodic point, the derivative, denoted by z'n, can be found by iteration starting with
z'0 = 1, and then using z'n+1 = 2·zn·z'n.
This derivative is used for computing the external distance to the Julia set.
Schwarzian derivative
The Schwarzian derivative (SD for short) of f is: S(f) = f′′′/f′ − (3/2)·(f′′/f′)^2.
See also
Misiurewicz point
Periodic points of complex quadratic mappings
Mandelbrot set
Julia set
Milnor–Thurston kneading theory
Tent map
Logistic map
References
External links
Monica Nevins and Thomas D. Rogers, "Quadratic maps as dynamical systems on the p-adic numbers"
Wolf Jung : Homeomorphisms on Edges of the Mandelbrot Set. Ph.D. thesis of 2002
More about Quadratic Maps : Quadratic Map
Complex dynamics
Fractals
Polynomials | Complex quadratic polynomial | [
"Mathematics"
] | 1,822 | [
"Functions and mappings",
"Complex dynamics",
"Mathematical analysis",
"Polynomials",
"Mathematical objects",
"Fractals",
"Mathematical relations",
"Algebra",
"Dynamical systems"
] |
11,736,149 | https://en.wikipedia.org/wiki/Distributed%20multipole%20analysis | In computational chemistry, distributed multipole analysis (DMA) is a compact and accurate way of describing the spatial distribution of electric charge within a molecule.
Multipole expansion
The DMA method was devised by Prof. Anthony Stone of Cambridge University to describe the charge distribution of a molecule in terms of a multipole expansion around a number of centers. The idea of using a multi-center multipole expansion was earlier proposed by Robert Rein. Typically, the centers correspond to the atoms constituting the molecule, though this is not a requirement. A multipole series, consisting of a charge, dipole, quadrupole and higher terms is located at each center. Importantly, the radius of convergence of this multipole series is sufficiently small that the relevant series will be convergent when describing two molecules in van der Waals contact.
The DMA series are derived from ab initio or density functional theory calculations using Gaussian basis sets. If the molecular orbitals are written as linear combinations of atomic basis functions the electron density takes the form of a sum of products of the basis functions, called density matrix elements. Boys (1950) showed that the product of two spherical Gaussian functions, centered at different points, can be expressed as a single Gaussian at an intermediate point known as the overlap center.
If a basis of Gaussian functions is used, the product of two s functions is spherically symmetric and can be represented completely just by a point charge at the ‘overlap center’ of the two Gaussian functions. The product of an s orbital and a p orbital has only charge and dipole components, and the product of two p functions has charge, dipole and quadrupole components.
If the overlap center is not at an atom, one can move the origin of the multipole expansion to the nearest distributed multipole site, re-expressing the series to account for the change of origin. The multipole expansion will no longer terminate, but the higher terms will be small. One may take the sites wherever one chooses, but they will usually be at the atoms. For small molecules one may wish to use additional sites at the centers of bonds; for larger molecules one may use a single site to describe a group of atoms such as a methyl group. The DMA procedure is exact and very fast, but for modern large basis sets with diffuse basis functions it has to be modified somewhat. When the basis functions have exponents that are small, the product function extends over several atoms, and it is better to calculate the distributed multipoles by numerical quadrature over a grid of points. The grid can be defined so that each point is associated with a particular site, and the multipoles for each site are obtained by quadrature over the points belonging to that site.
This description then includes at each site:
Charges, describing electronegativity effects in a chemically intuitive way;
Dipoles, arising from overlap of s and p orbitals and describing lone pairs and other atomic distortions;
Quadrupoles, arising from the overlap of p orbitals, and associated with pi bonds, for example;
Octopoles and hexadecapoles can be included if very high accuracy is required.
The DMA describes the potential at points outside the molecule with an accuracy which is essentially that of the wavefunction, so that its use entails no loss of precision. The DMA description gives the electrostatic energy of interaction between two molecules. It does not account for charge overlap effects and hence excludes the penetration energy.
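The site-centered moments that DMA produces can be contrasted with the simple point-charge picture by computing the moments of a point-charge distribution directly. The sketch below only evaluates the charge, dipole and (traceless, in one common convention) quadrupole moments of a few point charges about a chosen center; it is not the DMA procedure itself, which operates on products of Gaussian basis functions, and the charges and positions are arbitrary illustrative numbers.

```python
import numpy as np

def multipole_moments(charges, positions, center):
    """Charge, dipole and traceless quadrupole moments of point charges about
    a chosen expansion center (one common sign/normalization convention)."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(positions, dtype=float) - np.asarray(center, dtype=float)
    total_charge = q.sum()
    dipole = (q[:, None] * r).sum(axis=0)
    quadrupole = np.zeros((3, 3))
    for qi, ri in zip(q, r):
        quadrupole += qi * (3.0 * np.outer(ri, ri) - np.dot(ri, ri) * np.eye(3)) / 2.0
    return total_charge, dipole, quadrupole

# A crude three-point-charge caricature of a bent polar molecule (arbitrary units).
Q, mu, theta = multipole_moments(
    charges=[-0.8, 0.4, 0.4],
    positions=[[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]],
    center=[0.0, 0.0, 0.0])
print(Q)      # total charge (zero for this neutral set)
print(mu)     # dipole vector, pointing along +y here
print(theta)  # 3x3 traceless quadrupole tensor
```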
Comparison to other methods
DMA is inherently much more accurate than the commonly used partial charge methodologies for calculating intermolecular interaction energies, since it captures anisotropy of the atom-atom contributions to electrostatic interaction. It may therefore seem surprising that it has not been more widely used in molecular simulation. Possible reasons for this are:
Its non-inclusion in popular simulation codes;
The need to keep track of the orientation of a local axis system for each molecule;
The conformation-dependence of the DMA. As a consequence of its accuracy, the DMA captures features of the molecular charge distribution that depend strongly on molecular conformation. Thus, in a DMA-based simulation, the multipoles would have to be recalculated whenever a molecule underwent a conformational change.
Applications
DMA has found extensive use in crystal structure prediction for small organic molecules, where significant progress can often be made while using rigid molecular structures. It has also been used to develop force fields for molecular simulations, such as the AMOEBA force field.
References
Quantum chemistry
Computational chemistry
Theoretical chemistry | Distributed multipole analysis | [
"Physics",
"Chemistry"
] | 937 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
11,736,738 | https://en.wikipedia.org/wiki/Fizeau%20interferometer | A Fizeau interferometer is an interferometric arrangement whereby two reflecting surfaces are placed facing each other. As seen in Fig 1, the rear-surface reflected light from the transparent first reflector is combined with front-surface reflected light from the second reflector to form interference fringes.
The term Fizeau interferometer also refers to an interferometric arrangement used by Hippolyte Fizeau in a famous 1851 experiment that seemingly supported the partial ether-drag hypothesis of Augustin Jean Fresnel, but which ultimately played an instrumental role in bringing about a crisis in physics that led to Einstein's development of the theory of special relativity. See Fizeau experiment.
Applications
Fizeau interferometers are commonly used for measuring the shape of an optical surface: Typically, a fabricated lens or mirror is compared to a reference piece having the desired shape. In Fig. 1, the Fizeau interferometer is shown as it might be set up to test an optical flat. A precisely figured reference flat is placed on top of the flat being tested, separated by narrow spacers. The reference flat is slightly beveled (only a fraction of a degree of beveling is necessary) to prevent the rear surface of the flat from producing interference fringes. A collimated beam of monochromatic light illuminates the two flats, and a beam splitter allows the fringes to be viewed on-axis.
The reference piece is sometimes realized by a diffractive optical element (computer-generated hologram or CGH), as this can be manufactured by high accuracy lithographic methods. Fig. 2 illustrates the use of CGHs in testing. Unlike the figure, actual CGHs have line spacing on the order of 1 to 10 μm. When laser light is passed through the CGH, the zero-order diffracted beam experiences no wavefront modification. The wavefront of the first-order diffracted beam, however, is modified to match the desired shape of the test surface. In the illustrated Fizeau interferometer test setup, the zero-order diffracted beam is directed towards the spherical reference surface, and the first-order diffracted beam is directed towards the test surface in such a way that the two reflected beams combine to form interference fringes.
Fizeau interferometers are also used in fiber optic sensors for measuring pressure, temperature, strain, etc.
Fizeau's ether-drag experiment
Significance
In 1851, Fizeau used an entirely different form of interferometer to measure the effect of movement of a medium upon the speed of light, as seen in Fig. 3.
According to the theories prevailing at the time, light traveling through a moving medium would be dragged along by the medium, so the measured speed of the light would be a simple sum of its speed through the medium plus the speed of the medium.
Fizeau indeed detected a dragging effect, but the magnitude of the effect that he observed was far lower than expected. His results seemingly supported the partial ether-drag hypothesis of Fresnel, a situation that was disconcerting to most physicists.
Over half a century passed before a satisfactory explanation of Fizeau's unexpected measurement was developed with the advent of Einstein's theory of special relativity.
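A back-of-envelope comparison of the three predictions mentioned above (full drag, Fresnel's partial drag, and relativistic velocity addition), assuming water with refractive index roughly 1.33 and a flow speed of a few metres per second; the exact numbers are only illustrative.

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.33            # refractive index of water (approximate)
v = 7.0             # flow speed of the water, m/s (illustrative)

u0 = c / n                                        # light speed in water at rest
full_drag = u0 + v                                # naive "simple sum" prediction
fresnel = u0 + v * (1.0 - 1.0 / n**2)             # Fresnel partial-drag prediction
relativistic = (u0 + v) / (1.0 + u0 * v / c**2)   # Einstein velocity addition

print("shift, full drag:    %.4f m/s" % (full_drag - u0))
print("shift, Fresnel drag: %.4f m/s" % (fresnel - u0))
print("shift, relativity:   %.4f m/s" % (relativistic - u0))
```

To first order in v/c the relativistic velocity-addition formula reproduces Fresnel's drag coefficient 1 − 1/n², which is why Fizeau's measurement later fitted naturally into special relativity.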
Experimental setup
Light reflected from the tilted beam splitter is made parallel using a lens and split by slits into two beams, which traverse a tube carrying water moving with velocity v. Each beam travels a different leg of the tube, is reflected at the mirror at left, and returns through the opposite leg of the tube. Thus, both beams travel the same path, but one in the direction of flow of the water, and the other opposing the flow. The two beams are recombined at the detector, forming an interference pattern that depends upon any difference in time traveling the two paths.
The interference pattern can be analyzed to determine the speed of light traveling along each leg of the tube.
See also
Hippolyte Fizeau
List of types of interferometers
References
External links
Some typical measurement setups from the booklet of interferometer manufacturer Zygo Corp.
Interferometers
Observational astronomy | Fizeau interferometer | [
"Astronomy",
"Technology",
"Engineering"
] | 866 | [
"Interferometers",
"Observational astronomy",
"Astronomical sub-disciplines",
"Measuring instruments"
] |
11,737,468 | https://en.wikipedia.org/wiki/Probabilistic%20bisimulation | In theoretical computer science, probabilistic bisimulation is an extension of the concept of bisimulation for fully probabilistic transition systems first described by K.G. Larsen and A. Skou.
A discrete probabilistic transition system is a triple
S = (St, Act, τ : St × Act × St → [0, 1]),
where τ(s, a, t) gives the probability of starting in the state s, performing the action a and ending up in the state t. The set of states St is assumed to be countable. There is no attempt to assign probabilities to actions: it is assumed that the actions are chosen nondeterministically by an adversary or by the environment. This type of system is fully probabilistic; there is no other indeterminacy.
The definition of a probabilistic bisimulation on a system S is an equivalence relation R on the state space St, such that for every pair s, t in St with sRt, for every action a in Act, and for every equivalence class C of R:
τ(s, a, C) = τ(t, a, C),
where τ(s, a, C) denotes the total probability of going from s under the action a into some state of the class C.
Two states are said to be probabilistically bisimilar if there is some such R relating them.
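A small sketch of this condition, assuming the transition function is given as a dictionary mapping (state, action, state) to a probability and the candidate relation is given as a partition of the state space into its equivalence classes; all names here are illustrative.

```python
def class_probability(tau, s, a, block):
    """tau(s, a, C): total probability of moving from s under action a
    into the set of states 'block'."""
    return sum(tau.get((s, a, t), 0.0) for t in block)

def is_probabilistic_bisimulation(tau, actions, partition, eps=1e-12):
    """Check that every pair of states in the same block agrees, for every
    action and every block, on the probability of entering that block."""
    for block in partition:
        states = list(block)
        for a in actions:
            for target in partition:
                probs = [class_probability(tau, s, a, target) for s in states]
                if max(probs) - min(probs) > eps:
                    return False
    return True

# Toy system: s1 and s2 both enter the block {t1, t2} with total probability 1.
tau = {("s1", "a", "t1"): 1.0,
       ("s2", "a", "t1"): 0.5, ("s2", "a", "t2"): 0.5}
partition = [{"s1", "s2"}, {"t1", "t2"}]
print(is_probabilistic_bisimulation(tau, ["a"], partition))   # True
```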
When applied to Markov chains, probabilistic bisimulation is the same concept as lumpability.
Probabilistic bisimulation extends naturally to weighted bisimulation.
References
Theoretical computer science | Probabilistic bisimulation | [
"Mathematics"
] | 262 | [
"Theoretical computer science",
"Applied mathematics"
] |
11,737,650 | https://en.wikipedia.org/wiki/Molecular%20drive | Molecular drive is a term coined by Gabriel Dover in 1982 to describe evolutionary processes that change the genetic composition of a population through DNA turnover mechanisms. Molecular drive operates independently of natural selection and genetic drift.
The best-known such process is the concerted evolution of genes present in many tandem copies, such as those for ribosomal RNAs or silk moth egg shell chorion proteins, in sexually reproducing species. The concept has been proposed to extend to the diversification of multigene families. The mechanisms involved include gene conversion, unequal crossing-over, transposition, slippage replication and RNA-mediated exchanges. Because mutations changing the sequence of one copy are less common than deletions, duplications and replacement of one copy by another, the copies gradually come to resemble each other much more than they would if they had been evolving independently.
Concerted evolution can be unbiased, in which case every version has an equal probability of being the one that replaces the others. However, if the molecular events have any bias favouring one version of the sequence over others, that version will dominate the process and eventually replace the others. The name 'molecular drive' reflects the similarity of the process with what was originally the better-known process of meiotic drive.
Molecular drive can also act in bacteria, where parasexual processes such as natural transformation cause DNA turnover.
TRAM
According to Dover, a TRAM is a genetic system that has features of non-Mendelian inheritance: Turnover, copy-number and functional Redundancy, And Modulation. To date, all regulatory regions (promoters) and genes that have been examined in detail at the molecular level have TRAM characteristics. As such, part of their evolutionary history will have been influenced by the molecular drive process.
Adoptation
According to Dover, Adoptation is an evolved feature of an organism that contributes to its viability and reproduction (established by molecular drive) and that adopts some previously inaccessible component of the environment.
References
Evolutionary biology
Molecular evolution
Non-Darwinian evolution | Molecular drive | [
"Chemistry",
"Biology"
] | 408 | [
"Evolutionary biology",
"Evolutionary processes",
"Molecular evolution",
"Molecular biology",
"Non-Darwinian evolution",
"Biology theories"
] |
11,738,155 | https://en.wikipedia.org/wiki/Spatiotemporal%20Epidemiological%20Modeler | The Spatiotemporal Epidemiological Modeler (STEM) is free software available through the Eclipse Foundation. Originally developed by IBM Research, STEM is a framework and development tool designed to help scientists create and use spatial and temporal models of infectious disease. STEM uses a component software architecture based on the OSGi standard. The Eclipse Equinox platform is a reference implementation of that standard. By using a component software architecture, all of the components or elements required for a disease model, including the code and the data are available as software building blocks that can be independently exchanged, extended, reused, or replaced. These building blocks or plug-ins are called eclipse "plug-ins" or "extensions". STEM plug-ins contain denominator data for administrative regions of interest. The regions are indexed by standard (ISO3166) codes.
STEM currently includes a large number of plug-ins for the 244 countries and dependent areas defined by the Geographic Coding Standard maintained by the International Organization for Standardization. These plug-ins contain global data including geographic data, population data, demographics, and basic models of disease. The disease models distributed with STEM include epidemiological compartment models. Other plug-ins describe relationships between regions including nearest-neighbor or adjacency relationships as well as information about transportation, such as connections by roads and a model of air transportation.
Relationships between regions can then be included in models of how a disease spreads from place to place. To accomplish this, STEM represents the world as a "graph". The nodes in the graph correspond to places or regions, and the edges in the graph describe relationships or connections between regions. Both the nodes and the edges can be labeled or "decorated" with a variety of denominator data and models. This graphical representation is implemented using the Eclipse Modeling Framework (EMF). Since a model can be built up using separate subgraphs, STEM enables model composition. Predefined subgraphs defining different countries can be assembled with a drag and drop interface. New disease vectors can simply be added to existing models by augmenting the model with a new set of edges. The architecture also supports collaboration as users can not only create new models and compose new scenarios but also exchange these models and scenarios as reusable components and thereby build on each other's work. As an open source project, users are encouraged to create their own plug-ins (both data and models) and, if appropriate, to contribute their work back to the project.
References
External links
STEM
Free science software
Free health care software
Epidemiology
Public health and biosurveillance software
IBM software
Space and time | Spatiotemporal Epidemiological Modeler | [
"Physics",
"Environmental_science"
] | 536 | [
"Physical quantities",
"Time",
"Space",
"Epidemiology",
"Spacetime",
"Environmental social science",
"Space and time"
] |
11,739,239 | https://en.wikipedia.org/wiki/Randle%20cycle | The Randle cycle, also known as the glucose fatty-acid cycle, is a metabolic process involving the cross inhibition of glucose and fatty acids for substrates. It is theorized to play a role in explaining type 2 diabetes and insulin resistance.
It was named for Philip Randle, who described it in 1963.
Cycle
The Randle cycle is a biochemical mechanism involving the competition between glucose and fatty acids for their oxidation and uptake in muscle and adipose tissue. The cycle controls fuel selection and adapts the substrate supply and demand in normal tissues. This cycle adds a nutrient-mediated fine tuning on top of the more coarse hormonal control on fuel metabolism. This adaptation to nutrient availability applies to the interaction between adipose tissue and muscle. Hormones that control adipose tissue lipolysis affect circulating concentrations of fatty acids; these in turn control the fuel selection in muscle. Mechanisms involved in the Randle Cycle include allosteric control, reversible phosphorylation and the expression of key enzymes. The energy balance from meals composed of differing macronutrient composition is identical, but the glucose and fat balances that contribute to the overall energy balance change reciprocally with meal composition.
Glucose is spared and rerouted
Fasted state
When fasting, the activation of lipolysis provides fatty acids as the preferred fuel source for respiration. In the liver β-oxidation of fatty acids fulfills the local energy needs and may lead to ketogenesis (creating ketone bodies out of fatty acids.) The ketone bodies are then used to meet the demands of tissues other than the liver. This inhibition of glucose oxidation at the level of pyruvate dehydrogenase preserves pyruvate and lactate, both of which are gluconeogenic precursors.
Fed state
The glucose fatty acid cycle is also observed in the fed state after a high-fat meal or during exercise. This is when plasma concentrations of fatty acids or ketone bodies are increased. The glucose that is not oxidized is then rerouted to glycogen. This rerouting to glycogen explains the rapid resynthesis of muscle glycogen after exercise as well as the increased glycogen content in muscles found in starvation or diabetes. This mechanism replenishes the intermediates of the citric acid cycle.
Inhibition of glycolytic pathway
The impairment of glucose metabolism by fatty acid oxidation is mediated by the short-term inhibition of several glycolytic processes. The extent of inhibition increases along the glycolytic pathway, being most severe at the level of pyruvate dehydrogenase and less severe at the level of glucose uptake and 6-phosphofructo-1-kinase (PFK-1). This sequence occurs because the initial event, triggered by fatty acid oxidation, is an increase in the mitochondrial ratios of [acetyl-CoA]/[CoA] and [NADH]/[NAD+]. These both serve to inhibit pyruvate dehydrogenase activity. It has been proposed that these changes lead to an accumulation of cytosolic citrate, which in turn inhibits PFK-1, followed by an increase in glucose 6-phosphate, which eventually inhibits hexokinase.
Hemodynamic stress
Hemodynamic stress overrides fatty acid inhibition of glucose metabolism. During this time there is a decrease in substrate supply and an increase in the substrate demand. This leads to an activation of AMP-activated protein kinase (AMPK) as the AMP concentration rises in intracellular fluids and the ATP concentration decreases. The stress-induced activation of AMPK provides an immediate metabolic adaption and protects the heart from ischemic stress.
Fatty acid oxidation inhibition by malonyl-CoA
Malonyl-CoA signals glucose utilization and it controls the entry and oxidation of long-chain fatty acids (LCFA) in the mitochondria. Circulating glucose in the liver stimulates its uptake. Glucose oxidation produces citrate which can be converted to malonyl-CoA by acetyl-CoA carboxylase. Malonyl-CoA inhibits the carnitine palmitoyltransferase (CPT) that controls the entry and oxidation of LCFA. The glucose-derived malonyl-CoA prevents the oxidation of fatty acids and favors fatty acid esterification.
Cytosolic events controlling fatty acid oxidation
Malonyl-CoA concentration
The concentration of malonyl-CoA depends on the balance between acetyl-CoA carboxylase (ACC) and malonyl-CoA decarboxylase (MCD). AMP-activated protein kinase (AMPK) is reported to phosphorylate and inactivate liver ACC. This in turn decreases malonyl-CoA concentrations which stimulates fatty acid oxidation and ketogenesis by glucagon in the liver. AMPK phosphorylates and inactivates ACC in the liver and other tissues.
Integration of AMPK and ACC in the glucose-fatty acid cycle
Inhibition of fatty acid oxidation requires that ACC is active. Both AMPK and MCD are inactive and glucose uptake is stimulated. The LCFAs are then rerouted to esterification. These conditions exist in tissues rich in oxygen, in which AMPK is inactive and glucose inactivates the AMPK (researched in skeletal muscle).
The inhibition of MCD suppresses the oxidation of fatty acids and stimulates glucose oxidation. In a study on MCD deficient mice there was no difference in the oxidation of fatty acids and glucose in the heart under aerobic conditions. It is theorized that the overexpression of fatty acids being used makes up for the lack of MCD.
Fatty acid uptake
Long chain fatty acid uptake is mediated by several transporters, including FAT (fatty acid translocase)/CD36. CD36 deletion rescues lipotoxic cardiomyopathy. FAT/CD36 may be controlled by insulin and AMPK. Increased transport coupled to the formation of the CoA derivatives and the resulting AMPK activation should ensure efficient fatty acid uptake and metabolism.
Mitochondrial events controlling fuel selection
Fatty acids are preferentially oxidized because of the inactivation of PDH by fatty acid oxidation inhibiting glucose oxidation. This suggests that mitochondrial metabolism may control fuel selection. Cellular respiration is stimulated by fatty acids and this relates to an increase in the mitochondrial NADH to NAD+ ratio, suggesting that energy provision overtakes energy consumption. Switching from glucose to fatty acid oxidation leads to a bigger proportion of electrons being transported to complex 2 rather than complex 1 of the respiratory chain. This difference leads to a less efficient oxidative phosphorylation. By oxidizing fatty acids, mitochondria increase their respiration while increasing the production of ROS.
Fatty acids and insulin
Fatty acids may act directly upon the pancreatic β-cell to regulate glucose-stimulated insulin secretion. This effect is biphasic. Initially fatty acids potentiate the effects of glucose. After prolonged exposure to high fatty acid concentrations this changes to an inhibition. Randle suggested that the term fatty acid syndrome would be appropriate to apply to the biochemical syndrome resulting from the high concentration of fatty acids and the relationship to abnormalities of carbohydrate metabolism, including starvation, diabetes and Cushing’s syndrome.
References
Metabolism | Randle cycle | [
"Chemistry",
"Biology"
] | 1,522 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
11,740,178 | https://en.wikipedia.org/wiki/Primes%20in%20arithmetic%20progression | In number theory, primes in arithmetic progression are any sequence of at least three prime numbers that are consecutive terms in an arithmetic progression. An example is the sequence of primes (3, 7, 11), which is given by for .
According to the Green–Tao theorem, there exist arbitrarily long arithmetic progressions in the sequence of primes. Sometimes the phrase may also be used about primes which belong to an arithmetic progression which also contains composite numbers. For example, it can be used about primes in an arithmetic progression of the form an + b, where a and b are coprime, which according to Dirichlet's theorem on arithmetic progressions contains infinitely many primes, along with infinitely many composites.
For integer k ≥ 3, an AP-k (also called PAP-k) is any sequence of k primes in arithmetic progression. An AP-k can be written as k primes of the form a·n + b, for fixed integers a (called the common difference) and b, and k consecutive integer values of n. An AP-k is usually expressed with n = 0 to k − 1. This can always be achieved by defining b to be the first prime in the arithmetic progression.
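A brute-force sketch of this definition: it simply tests every small first term b and common difference a for primality of all k terms. The bound and the trial-division primality test are only for illustration.

```python
def is_prime(n):
    """Trial-division primality test; adequate for the small numbers used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_ap_k(k, limit):
    """All AP-k's b, b + a, ..., b + (k-1)*a consisting entirely of primes,
    with first term b and common difference a both below 'limit'.
    Brute force, intended only to illustrate the definition of an AP-k."""
    results = []
    for b in range(2, limit):
        if not is_prime(b):
            continue
        for a in range(1, limit):
            if all(is_prime(b + a * n) for n in range(k)):
                results.append([b + a * n for n in range(k)])
    return results

# The first few AP-3's include (3, 5, 7) and (3, 7, 11).
print(find_ap_k(3, 30)[:6])
```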
Properties
Any given arithmetic progression of primes has a finite length. In 2004, Ben J. Green and Terence Tao settled an old conjecture by proving the Green–Tao theorem: The primes contain arbitrarily long arithmetic progressions. It follows immediately that there are infinitely many AP-k for any k.
If an AP-k does not begin with the prime k, then the common difference is a multiple of the primorial k# = 2·3·5·...·j, where j is the largest prime ≤ k.
Proof: Let the AP-k be a·n + b for k consecutive values of n. If a prime p does not divide a, then modular arithmetic says that p will divide every pth term of the arithmetic progression. (From H.J. Weber, Cor.10 in "Exceptional Prime Number Twins, Triplets and Multiplets," arXiv:1102.3075[math.NT]. See also Theor.2.3 in "Regularities of Twin, Triplet and Multiplet Prime Numbers," arXiv:1103.0447[math.NT], Global J.P.A.Math 8(2012), in press.) If the AP is prime for k consecutive values, then a must therefore be divisible by all primes p ≤ k.
This also shows that an AP with common difference a cannot contain more consecutive prime terms than the value of the smallest prime that does not divide a.
If k is prime then an AP-k can begin with k and have a common difference which is only a multiple of (k−1)# instead of k#. (From H. J. Weber, "Less Regular Exceptional and Repeating Prime Number Multiplets," arXiv:1105.4092[math.NT], Sect.3.) For example, the AP-3 with primes {3, 5, 7} and common difference 2# = 2, or the AP-5 with primes {5, 11, 17, 23, 29} and common difference 4# = 6. It is conjectured that such examples exist for all primes k. The largest prime for which this is confirmed is k = 19, for this AP-19 found by Wojciech Iżykowski in 2013:
19 + 4244193265542951705·17#·n, for n = 0 to 18.
It follows from widely believed conjectures, such as Dickson's conjecture and some variants of the prime k-tuple conjecture, that if p > 2 is the smallest prime not dividing a, then there are infinitely many AP-(p−1) with common difference a. For example, 5 is the smallest prime not dividing 6, so there is expected to be infinitely many AP-4 with common difference 6, which is called a sexy prime quadruplet. When a = 2, p = 3, it is the twin prime conjecture, with an "AP-2" of 2 primes (b, b + 2).
Minimal primes in AP
We minimize the last term.
Largest known primes in AP
For prime q, q# denotes the primorial 2·3·5·7·...·q.
The longest known AP-k is an AP-27. Several examples are known for AP-26. The first to be discovered was found on April 12, 2010, by Benoît Perichon on a PlayStation 3 with software by Jarosław Wróblewski and Geoff Reynolds, ported to the PlayStation 3 by Bryan Little, in a distributed PrimeGrid project:
43142746595714191 + 23681770·23#·n, for n = 0 to 25. (23# = 223092870)
By the time the first AP-26 was found the search was divided into 131,436,182 segments by PrimeGrid and processed by 32/64bit CPUs, Nvidia CUDA GPUs, and Cell microprocessors around the world.
Before that, the record was an AP-25 found by Raanan Chermoni and Jarosław Wróblewski on May 17, 2008:
6171054912832631 + 366384·23#·n, for n = 0 to 24. (23# = 223092870)
The AP-25 search was divided into segments taking about 3 minutes on Athlon 64 and Wróblewski reported "I think Raanan went through less than 10,000,000 such segments" (this would have taken about 57 cpu years on Athlon 64).
The earlier record was an AP-24 found by Jarosław Wróblewski alone on January 18, 2007:
468395662504823 + 205619·23#·n, for n = 0 to 23.
For this Wróblewski reported he used a total of 75 computers: 15 64-bit Athlons, 15 dual core 64-bit Pentium D 805, 30 32-bit Athlons 2500, and 15 Durons 900.
The following table shows the largest known AP-k with the year of discovery and the number of decimal digits in the ending prime. Note that the largest known AP-k may be the end of an AP-(k+1). Some record setters choose to first compute a large set of primes of form c·p#+1 with fixed p, and then search for AP's among the values of c that produced a prime. This is reflected in the expression for some records. The expression can easily be rewritten as a·n + b.
Consecutive primes in arithmetic progression
Consecutive primes in arithmetic progression refers to at least three consecutive primes which are consecutive terms in an arithmetic progression. Note that unlike an AP-k, all the other numbers between the terms of the progression must be composite. For example, the AP-3 {3, 7, 11} does not qualify, because 5 is also a prime.
For an integer k ≥ 3, a CPAP-k is k consecutive primes in arithmetic progression. It is conjectured there are arbitrarily long CPAP's. This would imply infinitely many CPAP-k for all k. The middle prime in a CPAP-3 is called a balanced prime. The largest known has 15004 digits.
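A brute-force sketch of this stricter notion: generate the primes below a bound with a sieve and look for windows of k consecutive primes whose gaps are all equal. The bound is arbitrary.

```python
def primes_below(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [n for n, flag in enumerate(sieve) if flag]

def consecutive_primes_in_ap(k, limit):
    """Runs of k consecutive primes below 'limit' that form an arithmetic
    progression (a CPAP-k); every integer between them is composite by construction."""
    primes = primes_below(limit)
    hits = []
    for i in range(len(primes) - k + 1):
        window = primes[i:i + k]
        gaps = {window[j + 1] - window[j] for j in range(k - 1)}
        if len(gaps) == 1:               # all gaps equal: constant common difference
            hits.append(window)
    return hits

# CPAP-3's below 1000; the middle prime of each run is a balanced prime.
print(consecutive_primes_in_ap(3, 1000)[:5])
```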
The first known CPAP-10 was found in 1998 by Manfred Toplic in the distributed computing project CP10 which was organized by Harvey Dubner, Tony Forbes, Nik Lygeros, Michel Mizony and Paul Zimmermann. This CPAP-10 has the smallest possible common difference, 7# = 210. The only other known CPAP-10 as of 2018 was found by the same people in 2008.
If a CPAP-11 exists then it must have a common difference which is a multiple of 11# = 2310. The difference between the first and last of the 11 primes would therefore be a multiple of 23100. The requirement for at least 23090 composite numbers between the 11 primes makes it appear extremely hard to find a CPAP-11. Dubner and Zimmermann estimate it would be at least 1012 times harder than a CPAP-10.
Minimal consecutive primes in AP
The first occurrence of a CPAP-k is only known for k ≤ 6.
Largest known consecutive primes in AP
The table shows the largest known case of k consecutive primes in arithmetic progression, for k = 3 to 10.
xd is a d-digit number used in one of the above records to ensure a small factor in unusually many of the required composites between the primes.
x106 = 115376 22283279672627497420 78637565852209646810 56709682233916942487 50925234318597647097 08315833909447378791
x153 = 9656383640115 03965472274037609810 69585305769447451085 87635040605371157826 98320398681243637298 57205796522034199218 09817841129732061363 55565433981118807417 = x253 % 379#
x253 = 1617599298905 320471304802538356587398499979 836255156671030473751281181199 911312259550734373874520536148 519300924327947507674746679858 816780182478724431966587843672 408773388445788142740274329621 811879827349575247851843514012 399313201211101277175684636727
See also
Cunningham chain
Szemerédi's theorem
PrimeGrid
Problems involving arithmetic progressions
Notes
References
Chris Caldwell, The Prime Glossary: arithmetic sequence, The Top Twenty: Arithmetic Progressions of Primes and The Top Twenty: Consecutive Primes in Arithmetic Progression, all from the Prime Pages.
Jarosław Wróblewski, How to search for 26 primes in arithmetic progression?
P. Erdős and P. Turán, On some sequences of integers, J. London Math. Soc. 11 (1936), 261–264.
Prime numbers | Primes in arithmetic progression | [
"Mathematics"
] | 2,311 | [
"Prime numbers",
"Mathematical objects",
"Numbers",
"Number theory"
] |
11,741,990 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20inhibitor%20protein | A cyclin-dependent kinase inhibitor protein (also known as CKIs, CDIs, or CDKIs) is a protein that inhibits the enzyme cyclin-dependent kinase (CDK) and Cyclin activity by stopping the cell cycle if there are unfavorable conditions, therefore, acting as tumor suppressors. Cell cycle progression is stopped by Cyclin-dependent kinase inhibitor protein at the G1 phase. CKIs are vital proteins within the control system that point out whether the processes of DNA synthesis, mitosis, and cytokines control one another. When a malfunction hinders the successful completion of DNA synthesis in the G1 phase, it triggers a signal that delays or halts the progression to the S phase. Cyclin-dependent kinase inhibitor proteins are essential in the regulation of the cell cycle. If cell mutations surpass the cell cycle checkpoints during cell cycle regulation, it can result in various types of cancer.
CKI Inactivation Process
Cyclin-dependent kinase inhibitor proteins work by inactivating the CDKs through degradation. The typical inactivation mechanism of the CDK/cyclin complex is based on binding of a CDK inhibitor to the CDK-cyclin complex and a partial conformational rotation of the CDK. The cyclin is thus forced to release the T loop and detach from the CDK. Then, the CDK inhibitor inserts a small helix into the cleft, blocking the cleft and the active site of the CDK. Eventually, it releases the ATP out of the aperture of the CDK and deactivates it. The CDKs themselves use ATP as a phosphate donor to phosphorylate serine and threonine residues.
Human cells contain many different cyclins that bind to different CDKs. CDKs and cyclins appear and activate at specific cell cycle phases. Seven cyclin-dependent kinase inhibitor proteins have been identified. They are p15, p16, p18, p19, p21, p27, and p57. These cyclin-dependent kinase inhibitor proteins emerge only in their specific cell cycle phase. Each Cyclin/CDK complex is specific to the part of the cell cycle phase. Each CDK and cyclin can be identified based on the location of the cell cycle. CKIs fall into two categories; those that inhibit CDK1, CDK2, and CDK5 and those that inhibit CDK4 and CDK6. These checkpoints' cell cycle blocks at both the G1/S and G2/M checkpoints are consistent with the inhibition profiles of the enzymes.
Discovery
The discovery of cyclin-dependent kinase inhibitor proteins in 1990 opened the door to how we think about cell cycle control. It has influenced various other fields of study, such as developmental biology, cell biology and cancer research. The discovery of the first CKIs in yeast (Far1) and p21 in mammals led to research on this family of molecules. Further research has demonstrated that Cdks, cyclins and CKIs play essential roles in processes such as transcription, epigenetic regulation, metabolism, stem cell self-renewal, neuronal functions and spermatogenesis.
In mammals, p27, a cyclin-dependent kinase inhibitor protein, helps control CDK activity in G1. Also, the INK4 proteins help stop the G1-CDK activity when they encounter anti-proliferative signals within the environment. CKIs help promote the specific inhibitory signals that contain the cell from entering the S phase. In budding yeast, SIC 1 and Roughex, RUX, in Drosophila possess the same contributions that contribute to the stability of G1 cells. They are expressed in higher numbers in G1 cells to make sure that no S or M CDKs are in the cell.
Structure
Within the cyclin-dependent kinase (CDK) system of CDKs, cyclins, and CKIs, the serine/threonine kinases play an integral role in regulating the eukaryotic cell cycle. The structure of the CDK2-cyclin A-p27 complex has been determined by crystallography, demonstrating that the inhibitor p27 stretches across the top of the cyclin-CDK complex. The amino-terminal end of p27 has an RXL motif that binds a hydrophobic patch of cyclin A. The carboxyl-terminal end of the p27 fragment interacts with the beta sheet of the CDK, interfering with its structure; p27 slides into the ATP-binding site of CDK2 and inhibits ATP binding.
Clinical significance
Role in cancer: Cyclin-dependent kinase inhibitor (CKI) mutants are frequent in human cancers. The function of CKI is to stop cell growth when there are mistakes due to DNA damage. Once a cell is stopped at a checkpoint due to DNA damage, either the damage is repaired or the cell is induced to perform apoptosis. However, if CKI’s mutations don’t stop the cell, Cyclin D is transcribed. It moves into the cytoplasm and eventually activates a specific cyclin-dependent kinase (CDK). The active cyclin/CDK complex then phosphorylates proteins, activates them, and sends the cell into the next phase of the cell cycle. Since the cell with damaged DNA is not stopped, the cell eventually moves out of the G1 checkpoint and prepares for DNA synthesis. When there is uncontrolled cell growth, it can lead to cancer cells due to the inactivation of the CKIs.
Associated gene and target
References
External links
Protein domains | Cyclin-dependent kinase inhibitor protein | [
"Biology"
] | 1,176 | [
"Protein domains",
"Protein classification"
] |
11,743,104 | https://en.wikipedia.org/wiki/Infinite%20alleles%20model | The infinite alleles model is a mathematical model for calculating genetic mutations. The Japanese geneticist Motoo Kimura and American geneticist James F. Crow (1964) introduced the infinite alleles model, an attempt to determine for a finite diploid population what proportion of loci would be homozygous. This was, in part, motivated by assertions by other geneticists that more than 50 percent of Drosophila loci were heterozygous, a claim they initially doubted. In order to answer this question they assumed first, that there were a large enough number of alleles so that any mutation would lead to a different allele (that is the probability of back mutation to the original allele would be low enough to be negligible); and second, that the mutations would result in a number of different outcomes from neutral to deleterious.
They determined that in the neutral case, the probability that an individual would be homozygous, F, was
F = 1/(4·Ne·u + 1),
where u is the mutation rate, and Ne is the effective population size. The effective number of alleles n maintained in a population is defined as the inverse of the homozygosity, that is
n = 1/F = 4·Ne·u + 1,
which is a lower bound for the actual number of alleles in the population.
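A minimal numerical sketch of these two expressions, assuming F = 1/(4·Ne·u + 1) as above; the mutation rate and population sizes are arbitrary illustrative values.

```python
def homozygosity(Ne, u):
    """Equilibrium homozygosity under the neutral infinite alleles model."""
    return 1.0 / (4.0 * Ne * u + 1.0)

def effective_number_of_alleles(Ne, u):
    """n = 1/F, a lower bound on the actual number of alleles in the population."""
    return 4.0 * Ne * u + 1.0

u = 1e-5   # mutation rate per locus per generation (illustrative)
for Ne in (1_000, 100_000, 10_000_000):
    print(Ne, round(homozygosity(Ne, u), 4), round(effective_number_of_alleles(Ne, u), 1))
```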
If the effective population is large, then a large number of alleles can be maintained. However, this result only holds for the neutral case, and is not necessarily true for the case when some alleles are subject to selection, i.e. more or less fit than others, for example when the fittest genotype is a heterozygote (a situation often referred to as overdominance or heterosis).
In the case of overdominance, because Mendel's second law (the law of segregation) necessarily results in the production of homozygotes (which are by definition in this case, less fit), this means that population will always harbor a number of less fit individuals, which leads to a decrease in the average fitness of the population. This is sometimes referred to as genetic load, in this case it is a special kind of load known as segregational load. Crow and Kimura showed that at equilibrium conditions, for a given strength of selection (s), that there would be an upper limit to the number of fitter alleles (polymorphisms) that a population could harbor for a particular locus. Beyond this number of alleles, the selective advantage of presence of those alleles in heterozygous genotypes would be cancelled out by continual generation of less fit homozygous genotypes.
These results became important in the formation of the neutral theory, because neutral (or nearly neutral) alleles create no such segregational load, and allow for the accumulation of a great deal of polymorphism. When Richard Lewontin and J. Hubby published their groundbreaking results in 1966 which showed high levels of genetic variation in Drosophila via protein electrophoresis, the theoretical results from the infinite alleles model were used by Kimura and others to support the idea that this variation would have to be neutral (or result in excess segregational load).
References
See also
Infinite sites model
Evolutionary biology
Population genetics
Mathematical and theoretical biology | Infinite alleles model | [
"Mathematics",
"Biology"
] | 665 | [
"Evolutionary biology",
"Applied mathematics",
"Mathematical and theoretical biology"
] |
11,744,449 | https://en.wikipedia.org/wiki/Scattering%20from%20rough%20surfaces | Surface roughness scattering or interface roughness scattering is the elastic scattering of particles against a rough solid surface or imperfect interface between two different materials. This effect has been observed in classical systems, such as microparticle scattering, as well as quantum systems, where it arises electronic devices, such as field effect transistors and quantum cascade lasers.
Classical description
In the classical mechanics framework, a rough surface, such as a machined metal surface, randomizes the probability distribution function governing the incoming particles, leading to net momentum loss of the particle flux.
Quantum description
In the quantum mechanical framework, this scattering is most noticeable in confined systems, in which the energies of charge carriers are determined by the locations of interfaces. An example of such a system is a quantum well, which may be constructed from a sandwich of different layers of semiconductor. Variations in the thickness of these layers therefore cause the energy of particles to depend on their in-plane location in the layer. Classification of the roughness at a given in-plane position r is complex, but as in the classical models, it has been modeled as a Gaussian distribution by some researchers.
This assumption may be formulated in terms of the ensemble average for some given characteristic height, Δ, and correlation length, Λ, such that
⟨Δ(r)Δ(r′)⟩ = Δ² exp(−|r − r′|²/Λ²).
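A sketch of a surface with these statistics, assuming the Gaussian correlation form written above: white noise is smoothed with a Gaussian kernel and rescaled to the requested RMS height. The kernel convention, grid spacing and parameter values are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_rough_surface(n=4000, dx=0.1, height=1.0, corr_length=5.0, seed=0):
    """1D random surface with (approximately) Gaussian height statistics and a
    Gaussian correlation length: white noise is smoothed with a Gaussian kernel
    and then rescaled to the requested RMS height."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    x = np.arange(-4.0 * corr_length, 4.0 * corr_length + dx, dx)
    kernel = np.exp(-2.0 * x**2 / corr_length**2)   # one common kernel convention
    surface = np.convolve(noise, kernel, mode="same")
    return height * surface / surface.std()         # enforce the RMS height

z = gaussian_rough_surface()
print(z.mean(), z.std())   # mean close to 0, RMS height 1.0 by construction
```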
Types of Scattering
Selective scattering: in selective scattering, the amount of scattering depends upon the wavelength of light.
Mie scattering: Mie theory can describe how electromagnetic waves interact with homogeneous spherical particles. However, a theory for homogeneous spheres will completely fail to predict polarization effects. When the size of the particles is greater than the wavelength of light, the result is a non-uniform scattering of light.
Lambertian Scattering: This type of scattering occurs when a surface has microscopic irregularities that scatter light perfectly uniformly in all directions, causing it to appear equally bright from all viewing angles.
Subsurface Scattering: This type of scattering occurs when light scatters within a material before exiting the surface at a different point.
Isotropic crystal scattering (aka powder diffraction): This type of scattering occurs when every crystalline orientation is represented equally in a powdered sample. Powder X-ray diffraction (PXRD) operates under the assumption that the sample is randomly arranged such that each plane will be represented in the signal.
Notes
Scattering | Scattering from rough surfaces | [
"Physics",
"Chemistry",
"Materials_science"
] | 459 | [
"Condensed matter physics",
"Scattering",
"Particle physics",
"Nuclear physics"
] |
11,746,968 | https://en.wikipedia.org/wiki/Dialyte%20lens | A dialyte lens (sometimes called a dialyt) is a compound lens design that corrects optical aberrations where the lens elements are widely air-spaced. The design is used to save on the amount of glass used for specific elements or where elements can not be cemented because they have dissimilar curvatures. The word dialyte means "parted", "loose" or "separated" in Greek.
Design
In its simplest form, a dialyte can be formed by separating the elements in a cemented achromatic doublet of positive and negative lenses, although the powers of the individual elements must be increased to compensate.
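A numerical sketch of that compensation using only the standard two-thin-lens relation 1/f = 1/f1 + 1/f2 − d/(f1·f2); the focal lengths and the separation below are arbitrary illustrative numbers.

```python
def combined_focal_length(f1, f2, d):
    """Effective focal length of two thin lenses of focal lengths f1 and f2
    separated by a distance d (all quantities in the same units)."""
    power = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return 1.0 / power

# A cemented positive/negative pair (separation d = 0) ...
print(combined_focal_length(300.0, -500.0, 0.0))   # 750.0

# ... and the same two elements pulled 50 units apart: the combined focal
# length changes (here to 600.0), so the individual element powers must be
# rebalanced to recover the intended focal length and colour correction.
print(combined_focal_length(300.0, -500.0, 50.0))  # 600.0
```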
Applications
Telescopes
The idea of widely separating the color correcting elements of a lens dates back to W. F. Hamilton's 1814 catadioptric Hamiltonian telescope and Alexander Rogers' 1828 proposals for a dialytic refractor. The goal was to combine a large crown glass objective with a much smaller flint glass downstream to make an achromatic lens, since flint glass at that time was very expensive. Dialyte designs were also used in the Schupmann medial telescope designed by German optician Ludwig Schupmann near the end of the 19th century, in John Wall's 1999 "Zerochromat" retrofocally corrected dialytic refractor and the Russian-made "TAL Apolar125" telescope, which uses 6 elements arranged in three widely separated groups.
Photography
There are many types of dialyte camera lenses. One popular design is perfectly symmetric, which provides good correction for many aberrations. This consists of two air-spaced achromatic doublets arranged back-to-back around a central stop, or four air spaced lens elements in total: the outer pair is biconvex and the inner pair is biconcave; one example is the Celor. The Swiss mathematician Emil von Höegh, who had designed the popular Dagor anastigmat lens for Goerz in 1892, continued to refine that design, resulting in the Goerz Dagor Type B lens of 1899, later renamed to Celor and Syntor.
The Aviar lens (Taylor Hobson) designed by Arthur Warmisham (1917) is similar but is considered to have a different origin, from the splitting of the central biconcave element of the Cooke triplet. The resulting two biconcave elements are closer together than in the Dialyte/Celor design.
Enlarging
Since the aberration correction remains constant over a wide range of object distances and is favourable for fairly wide apertures, this design proved useful for enlarging lenses.
See also
List of telescope types.
References
Photographic lenses
Optics
Lens designers
Camera lenses by year of introduction | Dialyte lens | [
"Physics",
"Chemistry"
] | 557 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
13,247,184 | https://en.wikipedia.org/wiki/SMS%20%28hydrology%20software%29 | SMS (Surface-water Modeling System) is a complete program for building and simulating surface water models from Aquaveo. It features 1D and 2D modeling and a unique conceptual model approach. Currently supported models include ADCIRC, CMS-FLOW2D, FESWMS, TABS, TUFLOW, BOUSS-2D, CGWAVE, STWAVE, CMS-WAVE (WABED), GENESIS, PTM, and WAM.
Version 9.2 introduced the use of XMDF (eXtensible Model Data Format), which is a compatible extension of HDF5. XMDF files are smaller and allow faster access times than ASCII files.
The Watershed Modeling System (WMS) is a proprietary water modeling software application used to develop watershed computer simulations. The software provides tools to automate various basic and advanced delineations, calculations, and modeling processes. It supports river hydraulic and storm drain models, lumped parameter, regression, 2D hydrologic modeling of watersheds, and can be used to model both water quantity and water quality. , supported models include HEC-1, HEC-RAS, HEC-HMS, TR-20, TR-55, NFF, Rational, MODRAT, HSPF, CE-QUAL-W2, GSSHA, SMPDBK, and other models.
History
SMS was initially developed by the Engineering Computer Graphics Laboratory at Brigham Young University (later renamed in September, 1998 to Environmental Modeling Research Laboratory or EMRL) in the late 1980s on Unix workstations. The development of SMS was funded primarily by The United States Army Corps of Engineers and is still known as the Department of Defense Surface-water Modeling System or DoD SMS. It was later ported to Windows platforms in the mid 1990s and support for HP-UX, IRIX, OSF/1, and Solaris platforms was discontinued.
In April 2007, the main software development team at EMRL entered private enterprise as Aquaveo LLC, and continue to develop SMS and other software products, such as WMS (Watershed Modeling System) and GMS (Groundwater Modeling System).
WMS was initially developed by the Engineering Computer Graphics Laboratory at Brigham Young University in the early 1990s on Unix workstations. James Nelson, Norman Jones, and Woodruff Miller wrote a 1992 paper titled "Algorithm for Precise Drainage-Basin Delineation" that was published in the March 1994 issue of the Journal of Hydraulic Engineering. The paper described an algorithm that could be used to describe the flow of water in a drainage basin, thereby defining the drainage basin.
The development of WMS was funded primarily by The United States Army Corps of Engineers (COE). In 1997, WMS was used by the COE to model runoff in the Sava River basin in Bosnia. The software was sold commercially by Environmental Modeling Systems.
It was later ported to Windows platforms in the mid 1990s. WMS 6.0 (2000) was the last supported version for HP-UX, IRIX, OSF/1, and Solaris platforms. Development of WMS was done by the Environmental Modeling Research Laboratory (EMRL) at Brigham Young University (BYU) until April 2007, when the main software development team at EMRL incorporated as Aquaveo. Royalties from the software are paid to the engineering department at BYU.
The planners of the 2002 Winter Olympics, held in Salt Lake City, Utah, used WMS software to simulate terrorist attacks on water infrastructure such as the Jordanelle Reservoir.
Examples of SMS Implementation
SMS modeling was used to "determine flooded areas in case of failure or revision of a weir in combination with a coincidental 100-year flood event" (Gerstner, Belzner, and Thorenz, p. 975). Furthermore, "concerning the water level calculations in case of failure of a weir, the provided the with those two-dimensional depth-averaged hydrodynamic models, which are covering the whole Bavarian part of the river Main. The models were created with the software Surface-Modeling System (SMS) of Aquaveo LLC" (Gerstner, Belzner, and Thorenz, 976).
This article "describes the mathematical formulation, numerical implementation, and input specifications of rubble mound structures in the Coastal Modeling System (CMS) operated through the Surface-water Modeling System (SMS)" (Li et al., 1). Describing the input specifications, the authors write, "Working with the SMS interface, users can specify rubble mound structures in the CMS by creating datasets for different structure parameters. Five datasets are required for this application" (Li et al., p. 3) and "users should refer to Aquaveo (2010) for generating a XMDF dataset (*.h5 file) under the SMS" (Li et al., p. 5).
A study by Marusic and Ciufudean examined the "need of developing mathematical models for determining and predicting water quality of 'river-type' systems. It presents a case study for determining the pollutant dispersion for a section of the River Prut, Ungheni town, which was filled with polluted water with oil products from its tributary river Delia" (Marusic and Ciufudean, p. 177). "The obtained numerical models were developed using the program Surface-water Modeling System (SMS) v.10.1.11, which was designed by experts from Aquaveo company. The hydrodynamics of the studied sector, obtained using the SMS module named RMA2 [13], served as input for the RMA module 4, which determined the pollutant dispersion" (Marusic and Ciufudean, pp. 178–179).
A study by Lyubimova et al. focused on finding "recommendations for optimization" of the "Chusovskoy water intake located in the confluence zone of two rivers with essentially different hydrochemical regimes and in the backwater zone of the [...]" (Lyubimova et al., p. 1). "A two-dimensional (in a horizontal plane) model for the examined region of the water storage basin was constructed by making use of the software product SMS v.10 of the American company AQUAVEO LLC" (Lyubimova et al., p. 2). Evaluations of the SMS-derived two-dimensional model as well as a three-dimensional model led to the finding that "the selective water intake from the near-surface layers can essentially reduce hardness of potable water consumed by the inhabitants of Perm" (Lyubimova et al., p. 6).
References
External links
US Army Corps of Engineers – DoD SMS white paper
SMS Documentation Wiki
Scientific simulation software
Hydrology software | SMS (hydrology software) | [
"Environmental_science"
] | 1,379 | [
"Hydrology",
"Hydrology software"
] |
13,250,438 | https://en.wikipedia.org/wiki/Protein%20mass%20spectrometry | Protein mass spectrometry refers to the application of mass spectrometry to the study of proteins. Mass spectrometry is an important method for the accurate mass determination and characterization of proteins, and a variety of methods and instrumentations have been developed for its many uses. Its applications include the identification of proteins and their post-translational modifications, the elucidation of protein complexes, their subunits and functional interactions, as well as the global measurement of proteins in proteomics. It can also be used to localize proteins to the various organelles, and determine the interactions between different proteins as well as with membrane lipids.
The two primary methods used for the ionization of protein in mass spectrometry are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). These ionization techniques are used in conjunction with mass analyzers such as tandem mass spectrometry. In general, the proteins are analyzed either in a "top-down" approach in which proteins are analyzed intact, or a "bottom-up" approach in which protein are first digested into fragments. An intermediate "middle-down" approach in which larger peptide fragments are analyzed may also sometimes be used.
History
The application of mass spectrometry to the study of proteins became popularized in the 1980s after the development of MALDI and ESI. These ionization techniques have played a significant role in the characterization of proteins. The term matrix-assisted laser desorption ionization (MALDI) was coined in the late 1980s by Franz Hillenkamp and Michael Karas. Hillenkamp, Karas and their fellow researchers were able to ionize the amino acid alanine by mixing it with the amino acid tryptophan and irradiating it with a pulsed 266 nm laser. Though important, the breakthrough did not come until 1987, when Koichi Tanaka used the "ultra fine metal plus liquid matrix method" to ionize biomolecules as large as the 34,472 Da protein carboxypeptidase-A.
In 1968, Malcolm Dole reported the first use of electrospray ionization with mass spectrometry. Around the same time MALDI became popularized, John Bennett Fenn was cited for the development of electrospray ionization. Koichi Tanaka received the 2002 Nobel Prize in Chemistry alongside John Fenn, and Kurt Wüthrich "for the development of methods for identification and structure analyses of biological macromolecules." These ionization methods have greatly facilitated the study of proteins by mass spectrometry. Consequently, protein mass spectrometry now plays a leading role in protein characterization.
Methods and approaches
Techniques
Mass spectrometry of proteins requires that the proteins in solution or solid state be turned into an ionized form in the gas phase before they are injected and accelerated in an electric or magnetic field for analysis. The two primary methods for ionization of proteins are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In electrospray, the ions are created from proteins in solution, and it allows fragile molecules to be ionized intact, sometimes preserving non-covalent interactions. In MALDI, the proteins are embedded within a matrix normally in a solid form, and ions are created by pulses of laser light. Electrospray produces more multiply-charged ions than MALDI, allowing for measurement of high mass protein and better fragmentation for identification, while MALDI is fast and less likely to be affected by contaminants, buffers and additives.
Whole-protein mass analysis is primarily conducted using either time-of-flight (TOF) MS or Fourier transform ion cyclotron resonance (FT-ICR). These two types of instrument are preferable here because of their wide mass range and, in the case of FT-ICR, its high mass accuracy. Electrospray ionization of a protein often results in the generation of multiply charged species of 800 < m/z < 2000, and the resultant spectrum can be deconvoluted to determine the protein's average mass to within 50 ppm or better using TOF or ion-trap instruments.
Mass analysis of proteolytic peptides is a popular method of protein characterization, as cheaper instrument designs can be used for characterization. Additionally, sample preparation is easier once whole proteins have been digested into smaller peptide fragments. The most widely used instrument for peptide mass analysis are the MALDI-TOF instruments as they permit the acquisition of peptide mass fingerprints (PMFs) at high pace (1 PMF can be analyzed in approx. 10 sec). Multiple stage quadrupole-time-of-flight and the quadrupole ion trap also find use in this application.
Tandem mass spectrometry (MS/MS) is used to measure fragmentation spectra and identify proteins at high speed and accuracy. Collision-induced dissociation is used in mainstream applications to generate a set of fragments from a specific peptide ion. The fragmentation process primarily gives rise to cleavage products that break along peptide bonds. Because of this simplicity in fragmentation, it is possible to use the observed fragment masses to match with a database of predicted masses for one of many given peptide sequences. Tandem MS of whole protein ions has been investigated recently using electron capture dissociation and has demonstrated extensive sequence information in principle but is not in common practice.
Approaches
In keeping with the performance and mass range of available mass spectrometers, two approaches are used for characterizing proteins. In the first, intact proteins are ionized by either of the two techniques described above, and then introduced to a mass analyzer. This approach is referred to as "top-down" strategy of protein analysis as it involves starting with the whole mass and then pulling it apart. The top-down approach however is mostly limited to low-throughput single-protein studies due to issues involved in handling whole proteins, their heterogeneity and the complexity of their analyses.
In the second approach, referred to as "bottom-up" MS, proteins are enzymatically digested into smaller peptides using a protease such as trypsin. Subsequently, these peptides are introduced into the mass spectrometer and identified by peptide mass fingerprinting or tandem mass spectrometry. Hence, this approach uses identification at the peptide level to infer the existence of proteins, pieced back together with de novo repeat detection. The smaller and more uniform fragments are easier to analyze than intact proteins and can also be determined with high accuracy; this "bottom-up" approach is therefore the preferred method of studies in proteomics. A further approach that is beginning to be useful is the intermediate "middle-down" approach, in which proteolytic peptides larger than the typical tryptic peptides are analyzed.
Protein and peptide fractionation
Proteins of interest are usually part of a complex mixture of multiple proteins and molecules, which co-exist in the biological medium. This presents two significant problems. First, the two ionization techniques used for large molecules only work well when the mixture contains roughly equal amounts of material, while in biological samples, different proteins tend to be present in widely differing amounts. If such a mixture is ionized using electrospray or MALDI, the more abundant species have a tendency to "drown" or suppress signals from less abundant ones. Second, mass spectrum from a complex mixture is very difficult to interpret due to the overwhelming number of mixture components. This is exacerbated by the fact that enzymatic digestion of a protein gives rise to a large number of peptide products.
In light of these problems, the methods of one- and two-dimensional gel electrophoresis and high performance liquid chromatography are widely used for the separation of proteins. The first method fractionates whole proteins via two-dimensional gel electrophoresis. The first dimension of the 2D gel is isoelectric focusing (IEF), in which the protein is separated by its isoelectric point (pI); the second dimension is SDS-polyacrylamide gel electrophoresis (SDS-PAGE), which separates the protein according to its molecular weight. Once this step is completed, in-gel digestion occurs. In some situations, it may be necessary to combine both of these techniques. Gel spots identified on a 2D gel are usually attributable to one protein. If the identity of the protein is desired, usually the method of in-gel digestion is applied, where the protein spot of interest is excised and digested proteolytically. The peptide masses resulting from the digestion can be determined by mass spectrometry using peptide mass fingerprinting. If this information does not allow unequivocal identification of the protein, its peptides can be subjected to tandem mass spectrometry for de novo sequencing. Small changes in mass and charge can be detected with 2D-PAGE. The disadvantages of this technique are its small dynamic range compared to other methods and the fact that some proteins remain difficult to separate owing to their acidity, basicity, hydrophobicity, or size (too large or too small).
The second method, high performance liquid chromatography is used to fractionate peptides after enzymatic digestion. Characterization of protein mixtures using HPLC/MS is also called shotgun proteomics and MuDPIT (Multi-Dimensional Protein Identification Technology). A peptide mixture that results from digestion of a protein mixture is fractionated by one or two steps of liquid chromatography. The eluent from the chromatography stage can be either directly introduced to the mass spectrometer through electrospray ionization, or laid down on a series of small spots for later mass analysis using MALDI.
Applications
Protein identification
There are two main ways MS is used to identify proteins. Peptide mass fingerprinting uses the masses of proteolytic peptides as input to a search of a database of predicted masses that would arise from digestion of a list of known proteins. If a protein sequence in the reference list gives rise to a significant number of predicted masses that match the experimental values, there is some evidence that this protein was present in the original sample. Because the approach works best when the sample contains essentially a single protein, purification steps limit the throughput of peptide mass fingerprinting. Alternatively, peptides can be fragmented with MS/MS to identify them more definitively.
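A toy Python sketch of the matching idea behind peptide mass fingerprinting follows: digest a candidate sequence in silico with trypsin-like rules, predict peptide masses from an abridged table of approximate monoisotopic residue masses, and count observed masses that fall within a tolerance. Real search engines add statistical scoring, missed cleavages and modifications; the protein sequence and observed masses here are invented for illustration.

```python
# Toy sketch of peptide mass fingerprinting: digest a candidate protein in silico,
# compute peptide masses, and count matches to observed masses within a tolerance.
# Residue masses are approximate monoisotopic values for a subset of amino acids.

RESIDUE_MASS = {
    "G": 57.021, "A": 71.037, "S": 87.032, "P": 97.053, "V": 99.068,
    "T": 101.048, "L": 113.084, "N": 114.043, "D": 115.027, "K": 128.095,
    "E": 129.043, "F": 147.068, "R": 156.101, "Y": 163.063,
}
WATER = 18.011  # mass of H2O added to every free peptide

def tryptic_peptides(sequence):
    """Cleave after K or R, except when followed by P (no missed cleavages)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def count_matches(sequence, observed_masses, tol=0.5):
    """Number of observed masses explained by the in-silico digest."""
    predicted = [peptide_mass(p) for p in tryptic_peptides(sequence)]
    return sum(any(abs(m - p) <= tol for p in predicted) for m in observed_masses)

protein = "GASPVKTLNDPRFEYK"           # hypothetical sequence
observed = [557.3, 714.4, 1000.0]      # hypothetical measured peptide masses (Da)
print(count_matches(protein, observed))  # prints 2: two observed masses match
```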
MS is also the preferred method for the identification of post-translational modifications in proteins versus other approaches such as antibody-based methods.
De novo (peptide) sequencing
De novo peptide sequencing for mass spectrometry is typically performed without prior knowledge of the amino acid sequence. It is the process of assigning amino acids from peptide fragment masses of a protein. De novo sequencing has proven successful for confirming and expanding upon results from database searches.
As de novo sequencing is based on mass and some amino acids have identical masses (e.g. leucine and isoleucine), accurate manual sequencing can be difficult. Therefore, it may be necessary to utilize a sequence homology search application to work in tandem between a database search and de novo sequencing to address this inherent limitation.
Database searching has the advantage of quickly identifying sequences, provided they have already been documented in a database. Other inherent limitations of database searching include sequence modifications/mutations (some database searches do not adequately account for alterations to the 'documented' sequence, thus can miss valuable information), the unknown (if a sequence is not documented, it will not be found), false positives, and incomplete and corrupted data.
An annotated peptide spectral library can also be used as a reference for protein/peptide identification. It offers the unique strength of a reduced search space and increased specificity. Its limitations are that spectra not included in the library will not be identified, that spectra collected on different types of mass spectrometers can have quite distinct features, and that reference spectra in the library may contain noise peaks, which may lead to false-positive identifications. A number of different algorithmic approaches have been described to identify peptides and proteins from tandem mass spectrometry (MS/MS) data, including peptide de novo sequencing and sequence tag-based searching.
Antigen presentation
Antigen presentation is the first step in educating the immune system to recognize new pathogens. To this end, antigen presenting cells expose protein fragments via MHC molecules to the immune system. Not all protein fragments bind, however, to the MHC molecules of a certain individual. Using mass spectrometry, the true spectrum of molecules presented to the immune system can be determined.
Protein quantitation
Multiple methods allow for the quantitation of proteins by mass spectrometry, and recent advances have enabled quantifying thousands of proteins in single cells. Protein quantification by mass spectrometry benefits from efficient sampling (counting) of many ions per protein compared to other methods. Quantifications can be performed by label-free methods and by multiplexed methods, which use isotopic mass tags as labels. Multiplexed methods can improve both quantitative accuracy and throughput.
Typically, stable (e.g. non-radioactive) heavier isotopes of carbon (13C) or nitrogen (15N) are incorporated into one sample while the other one is labeled with corresponding light isotopes (e.g. 12C and 14N). The two samples are mixed before the analysis. Peptides derived from the different samples can be distinguished due to their mass difference. The ratio of their peak intensities corresponds to the relative abundance ratio of the peptides (and proteins). The first generation of methods for isotope labeling included SILAC (stable isotope labeling by amino acids in cell culture), trypsin-catalyzed 18O labeling, ICAT (isotope coded affinity tagging), and iTRAQ (isobaric tags for relative and absolute quantitation). The more recent generation of multiplexing methods include tandem mass tags (TMT) for DDA data and mTRAQ for multiplexed DIA (plexDIA).
"Semi-quantitative" mass spectrometry can be performed without labeling of samples. Typically, this is done with MALDI analysis (in linear mode). The peak intensity, or the peak area, from individual molecules (typically proteins) is here correlated to the amount of protein in the sample. However, the individual signal depends on the primary structure of the protein, on the complexity of the sample, and on the settings of the instrument. Other types of "label-free" quantitative mass spectrometry, uses the spectral counts (or peptide counts) of digested proteins as a means for determining relative protein amounts.
Protein structure determination
Characteristics indicative of the 3-dimensional structure of proteins can be probed with mass spectrometry in various ways. Comparing charge state distributions can give information about the structure of a protein. A wide variety of high charge states indicates disorder of the protein, whereas more compact, folded proteins result in lower charge states. By using chemical crosslinking to couple parts of the protein that are close in space, but far apart in sequence, information about the overall structure can be inferred. By following the exchange of amide protons with deuterium from the solvent, it is possible to probe the solvent accessibility of various parts of the protein. Hydrogen-deuterium exchange mass spectrometry has been used to study proteins and their conformations for over 20 years. This type of protein structural analysis can be suitable for proteins that are challenging for other structural methods. Another interesting avenue in protein structural studies is laser-induced covalent labeling. In this technique, solvent-exposed sites of the protein are modified by hydroxyl radicals. Its combination with rapid mixing has been used in protein folding studies.
Proteogenomics
In what is now commonly referred to as proteogenomics, peptides identified with mass spectrometry are used for improving gene annotations (for example, gene start sites) and protein annotations. Parallel analysis of the genome and the proteome facilitates discovery of post-translational modifications and proteolytic events, especially when comparing multiple species.
References
Mass spectrometry
Proteomics | Protein mass spectrometry | [
"Physics",
"Chemistry"
] | 3,330 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
13,251,197 | https://en.wikipedia.org/wiki/Ion%20cyclotron%20resonance | Ion cyclotron resonance is a phenomenon related to the movement of ions in a magnetic field. It is used for accelerating ions in a cyclotron, and for measuring the masses of an ionized analyte in mass spectrometry, particularly with Fourier transform ion cyclotron resonance mass spectrometers. It can also be used to follow the kinetics of chemical reactions in a dilute gas mixture, provided these involve charged species.
Definition of the resonant frequency
An ion in a static and uniform magnetic field will move in a circle due to the Lorentz force. The angular frequency of this cyclotron motion for a given magnetic field strength B is given by

ω = 2πf = zeB/m,

where z is the number of positive or negative charges of the ion, e is the elementary charge and m is the mass of the ion. An electric excitation signal having a frequency f will therefore resonate with ions having a mass-to-charge ratio m/z given by

m/z = eB/(2πf).
The circular motion may be superimposed with a uniform axial motion, resulting in a helix, or with a uniform motion perpendicular to the field (e.g., in the presence of an electrical or gravitational field) resulting in a cycloid.
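A quick numerical check of the relations above, in Python, using round-number inputs (a singly charged 500 Da ion in a 7 T field) rather than any particular instrument's specifications:

```python
# Numerical check of the cyclotron resonance relations above.
import math

E = 1.602176634e-19       # elementary charge, C
DA = 1.66053906660e-27    # unified atomic mass unit, kg

def cyclotron_frequency(mass_da, charge, b_field):
    """Return the cyclotron frequency f = z e B / (2 pi m), in Hz."""
    return charge * E * b_field / (2 * math.pi * mass_da * DA)

def mass_to_charge(frequency, b_field):
    """Return the m/z (Da per elementary charge) resonant at `frequency`."""
    return E * b_field / (2 * math.pi * frequency * DA)

f = cyclotron_frequency(mass_da=500.0, charge=1, b_field=7.0)
print(f"f = {f / 1e3:.1f} kHz")                 # ~215 kHz
print(f"m/z = {mass_to_charge(f, 7.0):.1f}")    # recovers 500
```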
Ion cyclotron resonance heating
Ion cyclotron resonance heating (or ICRH) is a technique in which electromagnetic waves with frequencies corresponding to the ion cyclotron frequency are used to heat a plasma. The ions in the plasma absorb the electromagnetic radiation and, as a result, increase in kinetic energy. This technique is commonly used in the heating of tokamak plasmas.
In the solar wind
On March 8, 2013, NASA released an article according to which ion cyclotron waves were identified by its WIND spacecraft as the main cause of the heating of the solar wind as it rises from the Sun's surface. Before this discovery, it was unclear why the solar wind particles would heat up, rather than cool down, when speeding away from the Sun's surface.
See also
Cyclotron resonance
Electron cyclotron resonance
References
Condensed matter physics
Electric and magnetic fields in matter
Ion source
Scientific techniques
Plasma phenomena | Ion cyclotron resonance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 444 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Plasma physics",
"Plasma phenomena",
"Ion source",
"Electric and magnetic fields in matter",
"Phases of matter",
"Materials science",
"Mass spectrometry",
"Condensed matter physics",
"Matter"
] |
10,764,126 | https://en.wikipedia.org/wiki/NeuroArm | NeuroArm is an engineering research surgical robot specifically designed for neurosurgery. It is the first image-guided, MR-compatible surgical robot that has the capability to perform both microsurgery and stereotaxy.
IMRIS, Inc. acquired NeuroArm assets in 2010, and the company is working to develop a next generation of the technology for worldwide commercialization. It will be integrated with the VISIUS Surgical Theatre under the name SYMBIS Surgical System.
Design
NeuroArm was designed to be image-guided and can perform procedures inside an MRI. NeuroArm includes two remote detachable manipulators on a mobile base, a workstation and a system control cabinet. For biopsy-stereotaxy, either the left or right arm is transferred to a stereotactic platform that attaches to the MR bore. The procedure is performed with image-guidance, as MR images are acquired in near real-time. The end-effectors interface with surgical tools which are based on standard neurosurgical instruments.
End-effectors are equipped with three-dimensional force-sensors, providing the sense of touch. The surgeon seated at the workstation controls the robot using force feedback hand controllers. The workstation recreates the sight and sensation of microsurgery by displaying the surgical site and 3D MRI displays, with superimposed tools. NeuroArm enables remote manipulation of the surgical tools from a control room adjacent to the surgical suite. It was designed to function within the environment of 1.5 and 3.0 tesla intraoperative MRI systems. As neuroArm is MR-compatible, stereotaxy can be performed inside the bore of the magnet with near real-time image guidance. NeuroArm possesses the dexterity to perform microsurgery, outside of the MRI system.
Telerobotic operations both inside and outside the magnet are performed using specialized tool sets based on standard neurosurgical instruments, adapted to the end effectors. Using these, NeuroArm is able to cut and manipulate soft tissue, dissect tissue planes, suture, biopsy, electrocauterize, aspirate and irrigate.
History
The project began in 2002 when Daryl, B.J., and Don Seaman provided $2 million to fund the design efforts. Dr. Sutherland and his group established a collaboration with the Canadian space engineering company MacDonald Dettwiler and Associates (MDA). Close collaboration between MDA's robotic engineers and University of Calgary physicians, nurses, and scientists contributed to the design and development of NeuroArm. Official launch of the project was on April 17, 2007.
NeuroArm was designed to take full advantage of the imaging environment provided by intraoperative MRI. The ability to couple near real-time, high resolution images to robotic technologies provides the surgeon with image guidance, precision, accuracy, and dexterity. MDA's engineers were immersed in the operating room to study typical tool and surgeon motions in order to use biomimicry for effective design of the computer-assisted surgical device. The OR environment, personnel, surgical rhythm and instrumentation remain unchanged. The surgeon, sitting at the workstation, is provided a virtual environment that recreates the sight, sound, and touch of surgery. Functions like tremor filtering and motion scaling were applied to increase precision and accuracy while functions like no-go zones and linear lock were applied to enhance safety. Surgical tools near the patient's head are incapable of fully independent movement and are slaved to the surgeon’s movement at all times. Pre-planned automatic motions are used to move the robot arms away from the patient's head for manual tool exchange, and then return them to the original position and orientation.
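The snippet below is not NeuroArm's control software; it is only a toy Python illustration of two of the functions named above, motion scaling and tremor filtering, using a fixed scale factor and a first-order low-pass filter on hand-controller displacements.

```python
# Toy illustration of motion scaling and tremor filtering; NOT NeuroArm's code.
# Hand displacements are scaled down and smoothed with an exponential filter.

def filter_and_scale(hand_displacements, scale=0.2, alpha=0.1):
    """Return filtered, scaled per-step tool displacements.

    scale -- motion scaling factor (e.g. 5 mm of hand motion -> 1 mm of tool motion)
    alpha -- low-pass smoothing factor in (0, 1]; smaller = stronger smoothing
    """
    commands, smoothed = [], 0.0
    for dx in hand_displacements:
        smoothed = alpha * (scale * dx) + (1 - alpha) * smoothed
        commands.append(smoothed)
    return commands

# A steady 1 mm/step hand motion with superimposed tremor of about +/-0.5 mm.
hand = [1.0, 1.5, 0.5, 1.5, 0.5, 1.0]
print([round(c, 3) for c in filter_and_scale(hand)])
```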
On May 12, 2008, the first image-guided MR-compatible robotic neurosurgical procedure was performed at University of Calgary by Dr. Garnette Sutherland using the NeuroArm.
References
External links
Project neuroArm
Seaman Family MR Research Centre
SYMBIS Homepage on IMRIS Website
Videos
Video in press release for NeuroArm unveiling, University of Calgary, April 17, 2007
Related patents
Canadian Patent 2246369 Surgical procedure with magnetic resonance imaging
US Patent 5,735,278 (at USPTO) Surgical procedure with magnetic resonance imaging
US Patent 5,735,278 (at Google) Surgical procedure with magnetic resonance imaging
Neurosurgery
Surgical robots
Biomedical engineering | NeuroArm | [
"Engineering",
"Biology"
] | 914 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
10,768,456 | https://en.wikipedia.org/wiki/Software%20system | A software system is a system of intercommunicating components based on software forming part of a computer system (a combination of hardware and software). It "consists of a number of separate programs, configuration files, which are used to set up these programs, system documentation, which describes the structure of the system, and user documentation, which explains how to use the system".
A software system differs from a computer program or software. While a computer program is generally a set of instructions (source or object code) that performs a specific task, a software system is a more encompassing concept with many more components, such as specifications, test results, end-user documentation, maintenance records, etc.
The use of the term software system is at times related to the application of systems theory approaches in the context of software engineering. A software system consists of several separate computer programs and associated configuration files, documentation, etc., that operate together. The concept is used in the study of large and complex software, because it focuses on the major components of software and their interactions. It is also related to the field of software architecture.
Software systems are an active area of research for groups interested in software engineering in particular and systems engineering in general. Academic journals like the Journal of Systems and Software (published by Elsevier) are dedicated to the subject.
The ACM Software System Award is an annual award that honors people or an organization "for developing a system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". It has been awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM.
Categories
Major categories of software systems include those based on application software development, programming software, and system software although the distinction can sometimes be difficult. Examples of software systems include operating systems, computer reservations systems, air traffic control systems, military command and control systems, telecommunication networks, content management systems, database management systems, expert systems, embedded systems, etc.
See also
ACM Software System Award
Common layers in an information system logical architecture
Computer program
Computer program installation
Experimental software engineering
Software bug
Software architecture
System software
Systems theory
Systems Science
Systems Engineering
Software Engineering
References
Systems engineering
Software engineering terminology | Software system | [
"Technology",
"Engineering"
] | 451 | [
"Software engineering",
"Systems engineering",
"Computing terminology",
"Software engineering terminology"
] |
10,773,039 | https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20rule | The Eötvös rule, named after the Hungarian physicist Loránd (Roland) Eötvös (1848–1919) enables the prediction of the surface tension of an arbitrary liquid pure substance at all temperatures. The density, molar mass and the critical temperature of the liquid have to be known. At the critical point the surface tension is zero.
The first assumption of the Eötvös rule is:
1. The surface tension is a linear function of the temperature.
This assumption is approximately fulfilled for most known liquids. When plotting the surface tension versus the temperature a fairly straight line can be seen which has a surface tension of zero at the critical temperature.
The Eötvös rule also gives a relation of the surface tension behaviour of different liquids in respect to each other:
2. The temperature dependence of the surface tension can be plotted for all liquids in a way that the data collapses to a single master curve. To do so either the molar mass, the density, or the molar volume of the corresponding liquid has to be known.
More accurate versions are found on the main page for surface tension.
The Eötvös rule
If V is the molar volume and Tc the critical temperature of a liquid, the surface tension γ is given by

γ V^(2/3) = k (Tc − T),

where k is a constant valid for all liquids, with a value of 2.1×10⁻⁷ J/(K·mol^(2/3)).
More precise values can be gained when considering that the line normally passes the temperature axis 6 K before the critical point:

γ V^(2/3) = k (Tc − 6 K − T).

The molar volume V is given by the molar mass M and the density ρ:

V = M/ρ.

The term γ V^(2/3) is also referred to as the "molar surface tension" γmol:

γmol = γ V^(2/3).

A useful representation that avoids the unit mol^(−2/3) is obtained with the Avogadro constant NA:

γ (V/NA)^(2/3) = k′ (Tc − T).
As John Lennard-Jones and Corner showed in 1940 by means of statistical mechanics, the constant k′ is nearly equal to the Boltzmann constant.
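A minimal Python sketch of applying the rule, using benzene at 20 °C as a round-number example (molar mass 78.11 g/mol, density 0.876 g/cm³, critical temperature 562 K); the estimate of roughly 28 mN/m is close to the commonly quoted value of about 29 mN/m:

```python
# Minimal sketch of the Eötvös rule: estimate surface tension from molar mass,
# density and critical temperature. Benzene at 20 degC is a round-number example.

K_EOTVOS = 2.1e-7  # J/(K·mol^(2/3))

def surface_tension(molar_mass, density, t_crit, temperature):
    """Eötvös estimate of surface tension in N/m.

    molar_mass  -- kg/mol
    density     -- kg/m^3
    t_crit      -- critical temperature, K
    temperature -- temperature of interest, K
    """
    molar_volume = molar_mass / density            # m^3/mol
    return K_EOTVOS * (t_crit - temperature) / molar_volume ** (2.0 / 3.0)

gamma = surface_tension(molar_mass=0.07811, density=876.0,
                        t_crit=562.0, temperature=293.15)
print(f"{gamma * 1e3:.1f} mN/m")   # ~28 mN/m
```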
Water
For water, the following equation is valid between 0 and 100 °C.
History
As a student, Eötvös started to research surface tension and developed a new method for its determination. The Eötvös rule was first found phenomenologically and published in 1886. In 1893 William Ramsay and Shields showed an improved version considering that the line normally passes the temperature axis 6 K before the critical point. John Lennard-Jones and Corner published (1940) a derivation of the equation by means of statistical mechanics. In 1945 E. A. Guggenheim gave a further improved variant of the equation.
References
Physical chemistry
Thermodynamic equations | Eötvös rule | [
"Physics",
"Chemistry"
] | 525 | [
"Applied and interdisciplinary physics",
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"nan",
"Physical chemistry"
] |
10,777,584 | https://en.wikipedia.org/wiki/Keetch%E2%80%93Byram%20drought%20index | The Keetch–Byram drought index (known as KBDI), created by John Keetch and George Byram in 1968 for the United States Department of Agriculture's Forest Service, is a measure of drought conditions. It is commonly used for the purpose of predicting the likelihood and severity of wildfire. It is calculated based on rainfall, air temperature, and other meteorological factors.
The KBDI is an estimate of the soil moisture deficit, which is the amount of water necessary to bring the soil moisture to its full capacity. A high soil moisture deficit means there is little water available for evaporation or plant transpiration. This occurs in conditions of extended drought, and has significant effects on fire behaviour.
In the United States, it is expressed as a range from 0 to 800, referring to hundredths of an inch of deficit in water availability; in countries that use the metric system, it is expressed from 0 to 200, referring to millimetres.
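The sketch below is limited to the unit conventions just described: it converts a US-scale KBDI value (hundredths of an inch) to millimetres and assigns it to a commonly cited risk band. The band cut-offs should be treated as illustrative; the actual daily KBDI update, which uses an empirical rainfall and temperature drought factor, is not reproduced here.

```python
# Sketch of the KBDI unit conventions only; the daily update formula is not shown.
# Risk bands are commonly cited but illustrative.

def kbdi_us_to_mm(kbdi_us):
    """Convert a US-scale KBDI value (hundredths of an inch) to millimetres."""
    return kbdi_us * 0.01 * 25.4

def risk_band(kbdi_us):
    if kbdi_us < 200:
        return "low"
    if kbdi_us < 400:
        return "moderate"
    if kbdi_us < 600:
        return "high"
    return "extreme"

for value in (150, 450, 700):
    print(value, f"{kbdi_us_to_mm(value):.1f} mm", risk_band(value))
```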
See also
National Fire Danger Rating System
Palmer drought index
Standardised Precipitation Evapotranspiration Index
McArthur Forest Fire Danger Index
1988 revision of the paper "A drought index for forest fire control": http://www.srs.fs.fed.us/pubs/rp/rp_se273.pdf
References
Droughts
Eponymous indices
Hazard scales
Hydrology
Wildfires
Meteorological indices | Keetch–Byram drought index | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 281 | [
"Hydrology",
"Environmental engineering"
] |
10,778,792 | https://en.wikipedia.org/wiki/South%20American%20Energy%20Council | The South American Energy Council is a body set up to co-ordinate the regional energy policy of the Union of South American Nations (UNASUR).
History
Its establishment was agreed at the first South American Energy Summit, which took place on April 16–17, 2007, on Isla Margarita in the Venezuelan state of Nueva Esparta. It was officially created in May 2010 during UNASUR's Extraordinary Summit in Los Cardales, Argentina.
In 2012, the Council started to draft a South American Energy Treaty. Before drafting this treaty, the Council coordinated the creation of an energy balance and a 15-point strategy.
See also
South American Organization of Gas Producers and Exporters
References
Union of South American Nations
International energy organizations
Organizations established in 2007
Energy policy
Energy in Argentina
Energy in Bolivia
Energy in Brazil
Energy in Chile
Energy in Colombia
Energy in Ecuador
Energy in Guyana
Energy in Paraguay
Energy in Peru
Energy in Suriname
Energy in Uruguay
Energy in Venezuela
Energy in South America
2007 establishments in South America | South American Energy Council | [
"Engineering",
"Environmental_science"
] | 197 | [
"International energy organizations",
"Environmental social science",
"Energy organizations",
"Energy policy"
] |
569,705 | https://en.wikipedia.org/wiki/Cell-mediated%20immunity | Cellular immunity, also known as cell-mediated immunity, is an immune response that does not rely on the production of antibodies. Rather, cell-mediated immunity is the activation of phagocytes, antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines in response to an antigen.
History
In the late 19th century, medicine in the Hippocratic tradition imagined the immune system as having two branches: humoral immunity, for which the protective function of immunization could be found in the humor (cell-free bodily fluid or serum), and cellular immunity, for which the protective function of immunization was associated with cells. CD4 cells or helper T cells provide protection against different pathogens. Naive T cells, which are immature T cells that have yet to encounter an antigen, are converted into activated effector T cells after encountering antigen-presenting cells (APCs). These APCs, such as macrophages, dendritic cells, and B cells in some circumstances, load antigenic peptides onto the major histocompatibility complex (MHC) of the cell, in turn presenting the peptide to receptors on T cells. The most important of these APCs are highly specialized dendritic cells, which conceivably operate solely to ingest and present antigens. Activated effector T cells can be placed into three functioning classes, detecting peptide antigens originating from various types of pathogen: 1) cytotoxic T cells, which kill infected target cells by apoptosis without using cytokines; 2) Th1 cells, which primarily function to activate macrophages; and 3) Th2 cells, which primarily function to stimulate B cells into producing antibodies.
In another ideology, the innate immune system and the adaptive immune system each comprise both humoral and cell-mediated components. Some cell-mediated components of the innate immune system include myeloid phagocytes, innate lymphoid cells (NK cells) and intraepithelial lymphocytes.
Synopsis
Cellular immunity protects the body through:
T-cell mediated immunity or T-cell immunity: activating antigen-specific cytotoxic T cells that are able to induce apoptosis in body cells displaying epitopes of foreign antigen on their surface, such as virus-infected cells, cells with intracellular bacteria, and cancer cells displaying tumor antigens;
Macrophage and natural killer cell action: enabling the destruction of pathogens via recognition and secretion of cytotoxic granules (for natural killer cells) and phagocytosis (for macrophages); and
Stimulating cells to secrete a variety of cytokines that influence the function of other cells involved in adaptive immune responses and innate immune responses.
Cell-mediated immunity is directed primarily at microbes that survive in phagocytes and microbes that infect non-phagocytic cells. It is most effective in removing virus-infected cells, but also participates in defending against fungi, protozoans, cancers, and intracellular bacteria. It also plays a major role in transplant rejection.
Type 1 immunity is directed primarily at viruses, bacteria, and protozoa and is responsible for activating macrophages, turning them into potent effector cells. This is achieved by the secretion of interferon gamma and TNF.
Overview
CD4+ T-helper cells may be differentiated into two main categories:
TH1 cells which produce interferon gamma and lymphotoxin alpha,
TH2 cells which produce IL-4, IL-5, and IL-13.
A third category called T helper 17 cells (TH17) were also discovered which are named after their secretion of Interleukin 17.
CD8+ cytotoxic T-cells may also be categorized as:
Tc1 cells,
Tc2 cells.
Similarly to CD4+ TH cells, a third category called TC17 were discovered that also secrete IL-17.
As for the ILCs, they may be classified into three main categories
ILC1 which secrete type 1 cytokines,
ILC2 which secrete type 2 cytokines,
ILC3 which secrete type 17 cytokines.
Development of cells
All type 1 cells begin their development from the common lymphoid progenitor (CLp) which then differentiates to become the common innate lymphoid progenitor (CILp) and the t-cell progenitor (Tp) through the process of lymphopoiesis.
Common innate lymphoid progenitors may then be differentiated into a natural killer progenitor (NKp) or a common helper like innate lymphoid progenitor (CHILp). NKp cells may then be induced to differentiate into natural killer cells by IL-15. CHILp cells may be induced to differentiate into ILC1 cells by IL-15, into ILC2 cells by IL-7 or ILC3 cells by IL-7 as well.
T-cell progenitors may differentiate into naïve CD8+ cells or naïve CD4+ cells. Naïve CD8+ cells may then further differentiate into TC1 cells upon IL-12 exposure, while IL-4 can induce differentiation into TC2 cells and IL-1 or IL-23 can induce differentiation into TC17 cells. Naïve CD4+ cells may differentiate into TH1 cells upon IL-12 exposure, TH2 upon IL-4 exposure, or TH17 upon IL-1 or IL-23 exposure.
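The lookup table below merely re-encodes, in Python, the differentiation cues listed above; it is a deliberate simplification, since real lineage decisions depend on combinations of cytokines, transcription factors and context rather than a single signal.

```python
# Toy lookup re-encoding the differentiation cues described above. A deliberate
# simplification: real lineage decisions are not driven by a single cytokine.

DIFFERENTIATION = {
    ("naive CD8+", "IL-12"): "Tc1",
    ("naive CD8+", "IL-4"): "Tc2",
    ("naive CD8+", "IL-1/IL-23"): "Tc17",
    ("naive CD4+", "IL-12"): "TH1",
    ("naive CD4+", "IL-4"): "TH2",
    ("naive CD4+", "IL-1/IL-23"): "TH17",
    ("NKp", "IL-15"): "natural killer cell",
    ("CHILp", "IL-15"): "ILC1",
    ("CHILp", "IL-7"): "ILC2 or ILC3",
}

def differentiate(progenitor, cytokine):
    return DIFFERENTIATION.get((progenitor, cytokine), "no pathway listed above")

print(differentiate("naive CD4+", "IL-12"))   # TH1
print(differentiate("CHILp", "IL-7"))         # ILC2 or ILC3
```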
Type 1 immunity
Type 1 immunity makes use of the type 1 subset for each of these cell types. By secreting interferon gamma and TNF, TH1, TC1, and group 1 ILCS activate macrophages, converting them to potent effector cells. It provides defense against intracellular bacteria, protozoa, and viruses. It is also responsible for inflammation and autoimmunity with diseases such as rheumatoid arthritis, multiple sclerosis, and inflammatory bowel disease all being implicated in type 1 immunity. Type 1 immunity consists of these cells:
CD4+ TH1 cells
CD8+ cytotoxic T cells (Tc1)
T-Bet+ interferon gamma producing group 1 ILCs(ILC1 and Natural killer cells)
CD4+ TH1 Cells
It has been found in both mice and humans that the signature cytokines for these cells are interferon gamma and lymphotoxin alpha. The main cytokine for differentiation into TH1 cells is IL-12 which is produced by dendritic cells in response to the activation of pattern recognition receptors. T-bet is a distinctive transcription factor of TH1 cells. TH1 cells are also characterized by the expression of chemokine receptors which allow their movement to sites of inflammation. The main chemokine receptors on these cells are CXCR3A and CCR5. Epithelial cells and keratinocytes are able to recruit TH1 cells to sites of infection by releasing the chemokines CXCL9, CXCL10 and CXCL11 in response to interferon gamma. Additionally, interferon gamma secreted by these cells seems to be important in downregulating tight junctions in the epithelial barrier.
CD8+ TC1 Cells
These cells generally produce interferon gamma. Interferon gamma and IL-12 promote differentiation toward TC1 cells. T-bet activation is required for both interferon gamma and cytolytic potential. CCR5 and CXCR3 are the main chemokine receptors for this cell.
Group 1 ILCs
Group 1 ILCs are defined to include ILCs expressing the transcription factor T-bet and were originally thought to include only natural killer cells. Recently, a large number of NKp46+ cells that express certain master transcription factors have been identified, allowing them to be designated as a distinct lineage of natural killer cells termed ILC1s. ILC1s are characterized by the ability to produce interferon gamma, TNF, GM-CSF and IL-2 in response to cytokine stimulation but have low or no cytotoxic ability.
See also
Immune system
Humoral immunity (vs. cell-mediated immunity)
Immunity
References
Bibliography
Cell-mediated immunity (Encyclopædia Britannica)
Chapter 8:T Cell-Mediated Immunity Immunobiology: The Immune System in Health and Disease. 5th edition.
The 3 major types of innate and adaptive cell-mediated effector immunity
Innate lymphocytes: lineage, localization and timing of differentiation
Further reading
Cell-mediated immunity: How T cells recognize and respond to foreign antigens
Immunology
Helper
Human cells
Phagocytes
Cell biology
Immune system
Lymphatic system
Infectious diseases
Cell signaling | Cell-mediated immunity | [
"Chemistry",
"Biology"
] | 1,868 | [
"Cell biology",
"Immune system",
"Signal transduction",
"Immunology",
"Cytokines",
"Organ systems",
"Apoptosis"
] |
570,111 | https://en.wikipedia.org/wiki/Electronic%20control%20unit | An electronic control unit (ECU), also known as an electronic control module (ECM), is an embedded system in automotive electronics that controls one or more of the electrical systems or subsystems in a car or other motor vehicle.
Modern vehicles have many ECUs, and these can include some or all of the following: engine control module (ECM), powertrain control module (PCM), transmission control module (TCM), brake control module (BCM or EBCM), central control module (CCM), central timing module (CTM), general electronic module (GEM), body control module (BCM), and suspension control module (SCM). These ECUs together are sometimes referred to collectively as the car's computer though technically they are all separate computers, not a single one. Sometimes an assembly incorporates several individual control modules (a PCM often controls both the engine and the transmission).
Some modern motor vehicles have up to 150 ECUs. Embedded software in ECUs continues to increase in line count, complexity, and sophistication. Managing the increasing complexity and number of ECUs in a vehicle has become a key challenge for original equipment manufacturers (OEMs).
Types
Generic industry controller naming: naming in which the controller's name indicates the system it is responsible for controlling.
Generic powertrain: the generic powertrain controller pertains to a vehicle's emission system and is the only regulated controller name.
Other controllers: all other controller names are decided upon by the individual OEM. The engine controller may have several different names, such as "DME", "Enhanced Powertrain", "PGM-FI" and many others.
Door control unit (DCU)
Engine control unit (ECU): not to be confused with electronic control unit, the generic term for all these devices.
Electric power steering control unit (PSCU): generally integrated into the EPS power pack.
Human–machine interface (HMI)
Powertrain control module (PCM): sometimes the functions of the engine control unit and transmission control module (TCM) are combined into a single unit called the powertrain control module.
Seat control unit
Speed control unit (SCU)
Telematic control unit (TCU)
Transmission control module (TCM)
Brake control module (BCM; ABS or ESC)
Battery management system (BMS)
Key elements
Core
Microcontroller
Memory
SRAM
EEPROM
Flash
Inputs
Supply Voltage and Ground
Digital inputs
Analog inputs
Outputs
Actuator drivers (e.g. injectors, relays, valves)
H bridge drivers for servomotors
Logic outputs
Communication links
Housing
Bus Transceivers, e.g. for K-Line, CAN, Ethernet
Embedded Software
Boot Loader
Metadata for ECU and Software Identification, Version Management, Checksums
Functional Software Routines
Configuration Data
Design and development
The development of an ECU involves both the hardware and software required to perform the functions expected from that particular module. Automotive ECUs are developed following the V-model. Recently the trend is to dedicate a significant amount of time and effort to developing safe modules by following standards like ISO 26262. It is rare that a module is developed fully from scratch; the design is generally iterative, and improvements are made to both the hardware and software. The development of most ECUs is carried out by Tier 1 suppliers based on specifications provided by the OEM.
Testing and validation
As part of the development cycle, manufacturers perform detailed FMEAs and other failure analyses to catch failure modes that can lead to unsafe conditions or driver annoyance. Extensive testing and validation activities are carried out as part of the Production part approval process to gain the confidence of the hardware and software. On-board diagnostics or OBD help provide specific data related to which system or component failed or caused a failure during run time and help perform repairs.
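As an illustration of the kind of data OBD exposes, the sketch below decodes a diagnostic trouble code (DTC) from its standard two-byte form, in which the top two bits select the system letter (P, C, B or U) and the remaining nibbles give the four digits. The byte values used are examples only.

```python
# Minimal sketch of decoding a standard OBD-II diagnostic trouble code (DTC) from
# its two-byte form: the top two bits give the system letter, the rest the digits.

SYSTEM_LETTER = {0: "P", 1: "C", 2: "B", 3: "U"}  # powertrain, chassis, body, network

def decode_dtc(byte_a, byte_b):
    letter = SYSTEM_LETTER[(byte_a >> 6) & 0x03]
    d1 = (byte_a >> 4) & 0x03        # first digit (0-3)
    d2 = byte_a & 0x0F               # second digit (hex nibble)
    d3 = (byte_b >> 4) & 0x0F        # third digit (hex nibble)
    d4 = byte_b & 0x0F               # fourth digit (hex nibble)
    return f"{letter}{d1}{d2:X}{d3:X}{d4:X}"

print(decode_dtc(0x01, 0x43))  # P0143
print(decode_dtc(0xC1, 0x23))  # U0123
```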
Modifications
Some people may wish to modify their ECU so as to be able to add or change functionality. However, modern ECUs come equipped with protection locks to prevent users from modifying the circuit or exchanging chips. The protection locks are a form of digital rights management (DRM), the circumventing of which is illegal in certain jurisdictions. In the United States, for example, the DMCA criminalizes circumvention of DRM, though an exemption does apply that allows circumvention by the owner of a motorized land vehicle if it is required to allow diagnosis, repair or lawful modification (i.e., modification that does not violate applicable law such as emissions regulations).
References
Power control
Engine technology
Fuel injection systems
Engine control systems
Engine components
Onboard computers
Auto parts | Electronic control unit | [
"Physics",
"Technology",
"Engineering"
] | 952 | [
"Self-driving cars",
"Physical quantities",
"Engines",
"Engine technology",
"Power (physics)",
"Automotive engineering",
"Engine components",
"Power control"
] |
570,172 | https://en.wikipedia.org/wiki/Airspeed | In aviation, airspeed is the speed of an aircraft relative to the air it is flying through (which itself is usually moving relative to the ground due to wind). It is difficult to measure the exact airspeed of the aircraft (true airspeed), but other measures of airspeed, such as indicated airspeed and Mach number give useful information about the capabilities and limitations of airplane performance. The common measures of airspeed are:
Indicated airspeed (IAS), what is read on an airspeed gauge connected to a pitot-static system.
Calibrated airspeed (CAS), indicated airspeed adjusted for pitot system position and installation error.
True airspeed (TAS) is the actual speed the airplane is moving through the air. In conjunction with winds aloft it is used for navigation.
Equivalent airspeed (EAS) is true airspeed times root density ratio. It is a useful way of calculating aerodynamic loads and airplane performance at low speeds when the flow can be considered incompressible.
Mach number is a measure of how fast the airplane is flying relative to the speed of sound.
The measurement and indication of airspeed is ordinarily accomplished on board an aircraft by an airspeed indicator (ASI) connected to a pitot-static system. The pitot-static system comprises one or more pitot probes (or tubes) facing the on-coming air flow to measure pitot pressure (also called stagnation, total or ram pressure) and one or more static ports to measure the static pressure in the air flow. These two pressures are compared by the ASI to give an IAS reading. Airspeed indicators are designed to give true airspeed at sea level pressure and standard temperature. As the aircraft climbs into less dense air, its true airspeed is greater than the airspeed indicated on the ASI.
Calibrated airspeed is typically within a few knots of indicated airspeed, while equivalent airspeed decreases slightly from CAS as aircraft altitude increases or at high speeds.
Units
Airspeed is commonly given in knots (kn). Since 2010, the International Civil Aviation Organization (ICAO) recommends using kilometers per hour (km/h) for airspeed (and meters per second for wind speed on runways), but allows using the de facto standard of knots, and has set no date for ending their use.
Depending on the country of manufacture or the era in aviation history, airspeed indicators on aircraft instrument panels have been configured to read in knots, kilometers per hour, or miles per hour. In high-altitude flight, the Mach number is sometimes used for reporting airspeed.
Indicated airspeed
Indicated airspeed (IAS) is the airspeed indicator reading (ASIR) uncorrected for instrument, position, and other errors. From current EASA definitions: Indicated airspeed means the speed of an aircraft as shown on its pitot static airspeed indicator calibrated to reflect standard atmosphere adiabatic compressible flow at sea level uncorrected for airspeed system errors.
An airspeed indicator is a differential pressure gauge with the pressure reading expressed in units of speed, rather than pressure. The airspeed is derived from the difference between the ram air pressure from the pitot tube, or stagnation pressure, and the static pressure. The pitot tube is mounted facing forward; the static pressure is frequently detected at static ports on one or both sides of the aircraft. Sometimes both pressure sources are combined in a single probe, a pitot-static tube. The static pressure measurement is subject to error due to inability to place the static ports at positions where the pressure is true static pressure at all airspeeds and attitudes. The correction for this error is the position error correction (PEC) and varies for different aircraft and airspeeds. Further errors of 10% or more are common if the airplane is flown in "uncoordinated" flight.
Uses of indicated airspeed
Indicated airspeed is a better measure of power required and lift available than true airspeed. Therefore, IAS is used for controlling the aircraft during taxiing, takeoff, climb, descent, approach or landing. Target speeds for best rate of climb, best range, and best endurance are given in terms of indicated speed. The airspeed structural limit, beyond which the forces on panels may become too high or wing flutter may occur, is often given in terms of IAS.
Calibrated airspeed
Calibrated airspeed (CAS) is indicated airspeed corrected for instrument errors, position error (due to incorrect pressure at the static port) and installation errors.
Calibrated airspeed values less than the speed of sound at standard sea level (661.4788 knots) are calculated as follows:

CAS = a₀ √{ (2/(γ − 1)) [ (qc/P₀ + 1)^((γ − 1)/γ) − 1 ] }

minus position and installation error correction,
where
CAS is the calibrated airspeed,
a₀ is the speed of sound at standard sea level,
γ is the ratio of specific heats (1.4 for air),
qc is the impact pressure, the difference between total pressure and static pressure, and
P₀ is the static air pressure at standard sea level.
This expression is based on the form of Bernoulli's equation applicable to isentropic compressible flow. CAS is the same as true airspeed at sea level standard conditions, but becomes smaller relative to true airspeed as the aircraft climbs into lower pressure and cooler air. Nevertheless, it remains a good measure of the forces acting on the airplane, meaning stall speeds can be called out on the airspeed indicator. The values for a₀ and P₀ are consistent with the ISA, i.e. the conditions under which airspeed indicators are calibrated.
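A numerical sketch of the subsonic relation above, using standard sea-level constants and an example impact pressure:

```python
# Numerical sketch of the subsonic calibrated-airspeed relation given above.
# Standard sea-level constants; the impact pressure value is just an example.
import math

A0 = 661.4788      # speed of sound at standard sea level, knots
P0 = 101325.0      # standard sea-level static pressure, Pa
GAMMA = 1.4        # ratio of specific heats for air

def calibrated_airspeed(impact_pressure):
    """CAS in knots from impact pressure q_c (Pa), valid below Mach 1 at sea level."""
    return A0 * math.sqrt(2.0 / (GAMMA - 1.0) *
                          ((impact_pressure / P0 + 1.0) ** ((GAMMA - 1.0) / GAMMA) - 1.0))

print(f"{calibrated_airspeed(3000.0):.1f} kn")   # roughly 135 kn
```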
True airspeed
The true airspeed (TAS; also KTAS, for knots true airspeed) of an aircraft is the speed of the aircraft relative to the air in which it is flying. The true airspeed and heading of an aircraft constitute its velocity relative to the atmosphere.
Uses of true airspeed
The true airspeed is important information for accurate navigation of an aircraft. To maintain a desired ground track whilst flying in a moving airmass, the pilot of an aircraft must use knowledge of wind speed, wind direction, and true air speed to determine the required heading. See wind triangle.
TAS is the appropriate speed to use when calculating the range of an airplane. It is the speed normally listed on the flight plan, also used in flight planning, before considering the effects of wind.
Measurement of true airspeed
True airspeed can be calculated from the same pitot-static measurements used for calibrated airspeed, together with the outside air temperature:

TAS = a₀ √{ (2/(γ − 1)) θ [ (qc/P + 1)^((γ − 1)/γ) − 1 ] }

where
TAS is the true airspeed,
θ is the temperature ratio, namely local temperature over standard sea level temperature,
P is the static pressure at the aircraft's altitude, and
a₀, γ and qc are as defined for calibrated airspeed.
Some airspeed indicators include a TAS scale, which is set by entering outside air temperature and pressure altitude. Alternatively, TAS can be calculated using an E6B flight calculator or equivalent, given inputs of CAS, outside air temperature (OAT) and pressure altitude.
Equivalent airspeed
Equivalent airspeed (EAS) is defined as the airspeed at sea level in the International Standard Atmosphere at which the (incompressible) dynamic pressure is the same as the dynamic pressure at the true airspeed (TAS) and altitude at which the aircraft is flying. That is, it is defined by the equation

EAS = TAS √(ρ/ρ₀)

where
EAS is the equivalent airspeed,
TAS is the true airspeed,
ρ is the density of air at the altitude at which the aircraft is currently flying, and
ρ₀ is the density of air at sea level in the International Standard Atmosphere (1.225 kg/m3 or 0.00237 slug/ft3).
Stated differently,

EAS = TAS √σ

where σ is the density ratio, that is σ = ρ/ρ₀.
Uses of equivalent airspeed
EAS is a measure of airspeed that is a function of incompressible dynamic pressure. Structural analysis is often in terms of incompressible dynamic pressure, so equivalent airspeed is a useful speed for structural testing. The significance of equivalent airspeed is that, at Mach numbers below the onset of wave drag, all of the aerodynamic forces and moments on an aircraft are proportional to the square of the equivalent airspeed. Thus, the handling and 'feel' of an aircraft, and the aerodynamic loads upon it, at a given equivalent airspeed, are very nearly constant and equal to those at standard sea level irrespective of the actual flight conditions.
At standard sea level pressure, CAS and EAS are equal. Up to about 200 knots CAS and 10,000 ft (3,000 m) the difference is negligible, but at higher speeds and altitudes CAS diverges from EAS due to compressibility.
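A numerical sketch of the density-ratio relation above; the density value used for roughly 30,000 ft is an illustrative ISA figure:

```python
# Numerical sketch of the EAS/TAS relation above: at altitude the air is less
# dense, so true airspeed exceeds equivalent airspeed by a factor 1/sqrt(sigma).
import math

RHO_SL = 1.225  # sea-level density in the International Standard Atmosphere, kg/m^3

def true_from_equivalent(eas, density):
    """TAS from EAS and local air density (same speed units in and out)."""
    sigma = density / RHO_SL
    return eas / math.sqrt(sigma)

# Roughly the ISA density near 30,000 ft is about 0.46 kg/m^3 (illustrative value).
print(f"{true_from_equivalent(250.0, 0.46):.0f} kn")   # about 408 kn
```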
Mach number
Mach number is defined as

M = TAS / a

where
TAS is the true airspeed, and
a is the local speed of sound.
Both the Mach number and the speed of sound can be computed using measurements of impact pressure, static pressure and outside air temperature.
Uses of Mach number
For aircraft that fly close to, but below the speed of sound (i.e. most civil jets) the compressibility speed limit is given in terms of Mach number. Beyond this speed, Mach buffet or stall or tuck may occur.
See also
ICAO recommendations on use of the International System of Units
Acronyms and abbreviations in avionics
Flight instruments
Ground speed
Maneuvering speed
V speeds
References
Bibliography
External links
Calculators
Aerodynamics | Airspeed | [
"Physics",
"Chemistry",
"Engineering"
] | 1,851 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
571,100 | https://en.wikipedia.org/wiki/Penthouse%20apartment | A penthouse is an apartment or unit traditionally on the highest floor of an apartment building, condominium, hotel, or tower. Penthouses are typically differentiated from other apartments by luxury features. The term 'penthouse' originally referred, and sometimes still does refer, to a separate smaller 'house' that was constructed on the roof of an apartment building. Architecturally it refers specifically to a structure on the roof of a building that is set back from its outer walls. These structures do not have to occupy the entire roof deck. Recently, luxury high rise apartment buildings have begun to designate multiple units on the entire top residential floor or multiple higher residential floors including the top floor as penthouse apartments, and outfit them to include ultra-luxury fixtures, finishes, and designs which are different from all other residential floors of the building. These penthouse apartments are not typically set back from the building's outer walls, but are instead flush with the rest of the building and simply differ in size, luxury, and consequently price. High-rise buildings can also have structures known as mechanical penthouses that enclose machinery or equipment such as the drum mechanisms for an elevator.
Etymology
The name penthouse is derived from an Old French word meaning "attached building" or "appendage". The modern spelling is influenced by a 16th-century folk etymology that combines the Middle French word for "slope" (pente) with the English noun house (the meaning at that time was "attached building with a sloping roof or awning").
Development
European designers and architects long recognized the potential in creating living spaces that could make use of rooftops and such setbacks. Penthouses first appeared in US cities in the 1920s with the exploitation of roof spaces for upscale property. The first recognized development was atop the Plaza Hotel overlooking Central Park in New York City in 1923. Its success caused a rapid development of similar luxury penthouse apartments in most major cities in the United States in the following years.
The popularity of penthouses stemmed from the setbacks allowing for significantly larger private outdoor terrace spaces than traditional cantilevered balconies. Due to the desirability of having outdoor space, buildings began to be designed with setbacks that could accommodate the development of apartments and terraces on their uppermost levels.
Modern penthouses may or may not have terraces. Upper floor space may be divided among several apartments, or a single apartment may occupy an entire floor. Penthouses often have their own private access where access to any roof, terrace, and any adjacent setback is exclusively controlled.
Design
Penthouses can also differentiate themselves by luxurious amenities such as high-end appliances, the finest materials and fittings, luxurious flooring, and more.
Features not found in the majority of apartments in the building may include a private entrance or elevator, or higher/vaulted ceilings. In buildings consisting primarily of single level apartments, penthouse apartments may be distinguished by having two or more levels. They may also have such features as a terrace, fireplace, more floor area, oversized windows, multiple master suites, den/office space, hot-tubs, and more. They might be equipped with luxury kitchens featuring stainless steel appliances, granite counter-tops, breakfast bar/island, and more.
Penthouse residents often have fine views of the city skyline. Access to a penthouse apartment is usually provided by a separate elevator. Residents can also access a number of building services, such as pickup and delivery of everything from dry cleaning to dinner; reservations to restaurants and events made by building staffers; and other concierge services.
Penthouse apartments can also be situated on the corner of a building, providing 90° or more views of the surrounding skyline.
Cultural references
Penthouse apartments are considered to be at the top of their markets, and are generally the most expensive, with expansive views, large living spaces, and top-of-the-line amenities. Accordingly, they are often associated with a luxury lifestyle. Publisher Bob Guccione named his magazine Penthouse, with the trademark phrase "Life on top".
See also
Basement apartment
Luxury apartment
Roof garden
Notes
References
External links
Apartment types
Houses | Penthouse apartment | [
"Technology"
] | 819 | [
"Structural system",
"Houses"
] |
571,274 | https://en.wikipedia.org/wiki/Drug%20discovery | In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new candidate medications are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery, as with penicillin. More recently, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that had a desirable therapeutic effect in a process known as classical pharmacology. After sequencing of the human genome allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy.
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, the process of drug development can continue. If successful, clinical trials are developed.
Modern drug discovery is thus usually a capital-intensive process that involves large investments by pharmaceutical industry corporations as well as national governments (who provide grants and loan guarantees). Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity was about US$1.8 billion. In the 21st century, basic discovery research is funded primarily by governments and by philanthropic organizations, while late-stage development is funded primarily by pharmaceutical companies or venture capitalists. To be allowed to come to market, drugs must undergo several successful phases of clinical trials, and pass through a new drug approval process, called the New Drug Application in the United States.
Discovering drugs that may be a commercial success, or a public health success, involves a complex interaction between investors, industry, academia, patent laws, regulatory exclusivity, marketing, and the need to balance secrecy with communication. Meanwhile, for disorders whose rarity means that no large commercial success or public health effect can be expected, the orphan drug funding process ensures that people who experience those disorders can have some hope of pharmacotherapeutic advances.
History
The idea that the effect of a drug in the human body is mediated by specific interactions of the drug molecule with biological macromolecules, (proteins or nucleic acids in most cases) led scientists to the conclusion that individual chemicals are required for the biological activity of the drug. This made for the beginning of the modern era in pharmacology, as pure chemicals, instead of crude extracts of medicinal plants, became the standard drugs. Examples of drug compounds isolated from crude preparations are morphine, the active agent in opium, and digoxin, a heart stimulant originating from Digitalis lanata. Organic chemistry also led to the synthesis of many of the natural products isolated from biological sources.
Historically, substances, whether crude extracts or purified chemicals, were screened for biological activity without knowledge of the biological target. Only after an active substance was identified was an effort made to identify the target. This approach is known as classical pharmacology, forward pharmacology, or phenotypic drug discovery.
Later, small molecules were synthesized to specifically target a known physiological/pathological pathway, avoiding the mass screening of banks of stored compounds. This led to great success, such as the work of Gertrude Elion and George H. Hitchings on purine metabolism, the work of James Black on beta blockers and cimetidine, and the discovery of statins by Akira Endo. Another champion of the approach of developing chemical analogues of known active substances was Sir David Jack at Allen and Hanbury's, later Glaxo, who pioneered the first inhaled selective beta2-adrenergic agonist for asthma, the first inhaled steroid for asthma, ranitidine as a successor to cimetidine, and supported the development of the triptans.
Gertrude Elion, working mostly with a group of fewer than 50 people on purine analogues, contributed to the discovery of the first anti-viral; the first immunosuppressant (azathioprine) that allowed human organ transplantation; the first drug to induce remission of childhood leukemia; pivotal anti-cancer treatments; an anti-malarial; an anti-bacterial; and a treatment for gout.
Cloning of human proteins made possible the screening of large libraries of compounds against specific targets thought to be linked to specific diseases. This approach is known as reverse pharmacology and is the most frequently used approach today.
In the 2020s, qubit-based quantum computing started to be used to reduce the time needed for drug discovery.
Targets
A "target" is produced within the pharmaceutical industry. Generally, the "target" is the naturally existing cellular or molecular structure involved in the pathology of interest where the drug-in-development is meant to act. However, the distinction between a "new" and "established" target can be made without a full understanding of just what a "target" is. This distinction is typically made by pharmaceutical companies engaged in the discovery and development of therapeutics. In an estimate from 2011, 435 human genome products were identified as therapeutic drug targets of FDA-approved drugs.
"Established targets" are those for which there is a good scientific understanding, supported by a lengthy publication history, of both how the target functions in normal physiology and how it is involved in human pathology. This does not imply that the mechanism of action of drugs that are thought to act through a particular established target is fully understood. Rather, "established" relates directly to the amount of background information available on a target, in particular functional information. In general, "new targets" are all those targets that are not "established targets" but which have been or are the subject of drug discovery efforts. The majority of targets selected for drug discovery efforts are proteins, such as G-protein-coupled receptors (GPCRs) and protein kinases.
Screening and design
The process of finding a new drug against a chosen target for a particular disease usually involves high-throughput screening (HTS), wherein large libraries of chemicals are tested for their ability to modify the target. For example, if the target is a novel GPCR, compounds will be screened for their ability to inhibit or stimulate that receptor (see antagonist and agonist): if the target is a protein kinase, the chemicals will be tested for their ability to inhibit that kinase.
Another function of HTS is to show how selective the compounds are for the chosen target, as one wants to find a molecule which will interfere with only the chosen target, but not other, related targets. To this end, other screening runs will be made to see whether the "hits" against the chosen target will interfere with other related targets – this is the process of cross-screening. Cross-screening is useful because the more unrelated targets a compound hits, the more likely that off-target toxicity will occur with that compound once it reaches the clinic.
It is unlikely that a perfect drug candidate will emerge from these early screening runs. One of the first steps is to screen for compounds that are unlikely to be developed into drugs; for example compounds that are hits in almost every assay, classified by medicinal chemists as "pan-assay interference compounds", are removed at this stage, if they were not already removed from the chemical library. It is often observed that several compounds are found to have some degree of activity, and if these compounds share common chemical features, one or more pharmacophores can then be developed. At this point, medicinal chemists will attempt to use structure–activity relationships (SAR) to improve certain features of the lead compound:
increase activity against the chosen target
reduce activity against unrelated targets
improve the druglikeness or ADME properties of the molecule.
This process will require several iterative screening runs, during which, it is hoped, the properties of the new molecular entities will improve, and allow the favoured compounds to go forward to in vitro and in vivo testing for activity in the disease model of choice.
The physicochemical properties associated with drug absorption include ionization (pKa) and solubility; permeability can be determined by PAMPA and Caco-2 assays. PAMPA is attractive as an early screen due to the low consumption of drug and the low cost compared to tests such as Caco-2, gastrointestinal tract (GIT) and blood–brain barrier (BBB) models, with which there is a high correlation.
A range of parameters can be used to assess the quality of a compound, or a series of compounds, as proposed in Lipinski's rule of five. Such parameters include calculated properties, such as cLogP to estimate lipophilicity, molecular weight and polar surface area, and measured properties, such as potency and in-vitro enzymatic clearance. Some descriptors such as ligand efficiency (LE) and lipophilic efficiency (LiPE) combine such parameters to assess druglikeness.
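As a rough illustration of how such rule-based filters and efficiency metrics can be applied, the Python sketch below checks the classic rule-of-five criteria and computes two common indices; the thresholds follow Lipinski's original formulation, the constant 1.37 kcal/mol per pIC50 unit is the usual room-temperature approximation for ligand efficiency, and the example values are invented.
def rule_of_five_violations(mol_weight, clogp, h_bond_donors, h_bond_acceptors):
    # Count Lipinski rule-of-five violations for an orally dosed candidate
    return sum([
        mol_weight > 500,
        clogp > 5,
        h_bond_donors > 5,
        h_bond_acceptors > 10,
    ])
def ligand_efficiency(pic50, heavy_atoms):
    # Approximate LE in kcal/mol per heavy atom (1.37 ~ 2.303*R*T at ~300 K)
    return 1.37 * pic50 / heavy_atoms
def lipophilic_efficiency(pic50, clogp):
    # LiPE (LLE): potency corrected for lipophilicity
    return pic50 - clogp
# Hypothetical hit: MW 420, cLogP 3.1, 2 donors, 6 acceptors, pIC50 7.2, 29 heavy atoms
print(rule_of_five_violations(420, 3.1, 2, 6),
      round(ligand_efficiency(7.2, 29), 2),
      round(lipophilic_efficiency(7.2, 3.1), 1))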
While HTS is a commonly used method for novel drug discovery, it is not the only method. It is often possible to start from a molecule which already has some of the desired properties. Such a molecule might be extracted from a natural product or even be a drug on the market which could be improved upon (so-called "me too" drugs). Other methods, such as virtual high throughput screening, where screening is done using computer-generated models and attempting to "dock" virtual libraries to a target, are also often used.
Another method for drug discovery is de novo drug design, in which a prediction is made of the sorts of chemicals that might (e.g.) fit into an active site of the target enzyme. For example, virtual screening and computer-aided drug design are often used to identify new chemical moieties that may interact with a target protein. Molecular modelling and molecular dynamics simulations can be used as a guide to improve the potency and properties of new drug leads.
There is also a paradigm shift in the drug discovery community to shift away from HTS, which is expensive and may only cover limited chemical space, to the screening of smaller libraries (maximum a few thousand compounds). These include fragment-based lead discovery (FBDD) and protein-directed dynamic combinatorial chemistry. The ligands in these approaches are usually much smaller, and they bind to the target protein with weaker binding affinity than hits that are identified from HTS. Further modifications through organic synthesis into lead compounds are often required. Such modifications are often guided by protein X-ray crystallography of the protein-fragment complex. The advantages of these approaches are that they allow more efficient screening and the compound library, although small, typically covers a large chemical space when compared to HTS.
Phenotypic screens have also provided new chemical starting points in drug discovery. A variety of models have been used including yeast, zebrafish, worms, immortalized cell lines, primary cell lines, patient-derived cell lines and whole animal models. These screens are designed to find compounds which reverse a disease phenotype such as death, protein aggregation, mutant protein expression, or cell proliferation as examples in a more holistic cell model or organism. Smaller screening sets are often used for these screens, especially when the models are expensive or time-consuming to run. In many cases, the exact mechanism of action of hits from these screens is unknown and may require extensive target deconvolution experiments to ascertain. The growth of the field of chemoproteomics has provided numerous strategies to identify drug targets in these cases.
Once a lead compound series has been established with sufficient target potency and selectivity and favourable drug-like properties, one or two compounds will then be proposed for drug development. The best of these is generally called the lead compound, while the other will be designated as the "backup". These decisions are generally supported by computational modelling innovations.
Nature as source
Traditionally, many drugs and other chemicals with biological activity have been discovered by studying chemicals that organisms create to affect the activity of other organisms for survival.
Despite the rise of combinatorial chemistry as an integral part of the lead discovery process, natural products still play a major role as starting material for drug discovery. A 2007 report found that of the 974 small-molecule new chemical entities developed between 1981 and 2006, 63% were naturally derived or were semisynthetic derivatives of natural products. For certain therapy areas, such as antimicrobials, antineoplastics, antihypertensives and anti-inflammatory drugs, the numbers were higher.
Natural products may be useful as a source of novel chemical structures for modern techniques of development of antibacterial therapies.
Plant-derived
Many secondary metabolites produced by plants have potential therapeutic medicinal properties. These secondary metabolites can bind to and modify the function of proteins (receptors, enzymes, etc.). Consequently, plant-derived natural products have often been used as the starting point for drug discovery.
History
Until the Renaissance, the vast majority of drugs in Western medicine were plant-derived extracts. This has resulted in a pool of information about the potential of plant species as important sources of starting materials for drug discovery. Botanical knowledge about different metabolites and hormones that are produced in different anatomical parts of the plant (e.g. roots, leaves, and flowers) are crucial for correctly identifying bioactive and pharmacological plant properties. Identifying new drugs and getting them approved for market has proved to be a stringent process due to regulations set by national drug regulatory agencies.
Jasmonates
Jasmonates are important in responses to injury and intracellular signals. They induce apoptosis and protein cascade via proteinase inhibitor, have defense functions, and regulate plant responses to different biotic and abiotic stresses. Jasmonates also have the ability to directly act on mitochondrial membranes by inducing membrane depolarization via release of metabolites.
Jasmonate derivatives (JAD) are also important in wound response and tissue regeneration in plant cells. They have also been identified to have anti-aging effects on human epidermal layer. It is suspected that they interact with proteoglycans (PG) and glycosaminoglycan (GAG) polysaccharides, which are essential extracellular matrix (ECM) components to help remodel the ECM. The discovery of JADs on skin repair has introduced newfound interest in the effects of these plant hormones in therapeutic medicinal application.
Salicylates
Salicylic acid (SA), a phytohormone, was initially derived from willow bark and has since been identified in many species. It is an important player in plant immunity, although its role is still not fully understood by scientists. They are involved in disease and immunity responses in plant and animal tissues. They have salicylic acid binding proteins (SABPs) that have shown to affect multiple animal tissues. The first discovered medicinal properties of the isolated compound was involved in pain and fever management. They also play an active role in the suppression of cell proliferation. They have the ability to induce death in lymphoblastic leukemia and other human cancer cells. One of the most common drugs derived from salicylates is aspirin, also known as acetylsalicylic acid, with anti-inflammatory and anti-pyretic properties.
Animal-derived
Some drugs used in modern medicine have been discovered in animals or are based on compounds found in animals. For example, the anticoagulant drugs, hirudin and its synthetic congener, bivalirudin, are based on saliva chemistry of the leech, Hirudo medicinalis. Used to treat type 2 diabetes, exenatide was developed from saliva compounds of the Gila monster, a venomous lizard.
Microbial metabolites
Microbes compete for living space and nutrients. To survive in these conditions, many microbes have developed abilities to prevent competing species from proliferating. Microbes are the main source of antimicrobial drugs. Streptomyces isolates have been such a valuable source of antibiotics that they have been called medicinal molds. The classic example of an antibiotic discovered as a defense mechanism against another microbe is penicillin, found in 1928 in bacterial cultures contaminated by Penicillium fungi.
Marine invertebrates
Marine environments are potential sources for new bioactive agents. Arabinose nucleosides, discovered from marine invertebrates in the 1950s, demonstrated for the first time that sugar moieties other than ribose and deoxyribose can yield bioactive nucleoside structures. It took until 2004 for the first marine-derived drug to be approved: the cone snail toxin ziconotide, also known as Prialt, which treats severe neuropathic pain. Several other marine-derived agents are now in clinical trials for indications such as cancer, anti-inflammatory use and pain. One class of these agents are bryostatin-like compounds, under investigation as anti-cancer therapy.
Chemical diversity
As mentioned above, combinatorial chemistry was a key technology enabling the efficient generation of large screening libraries for the needs of high-throughput screening. However, after two decades of combinatorial chemistry, it has been pointed out that despite the increased efficiency in chemical synthesis, no corresponding increase in lead or drug candidates has resulted. This has led to analysis of the chemical characteristics of combinatorial chemistry products, compared to existing drugs or natural products. The chemoinformatics concept of chemical diversity, depicted as the distribution of compounds in chemical space based on their physicochemical characteristics, is often used to describe the difference between combinatorial chemistry libraries and natural products. The synthetic, combinatorial library compounds seem to cover only a limited and quite uniform chemical space, whereas existing drugs, and particularly natural products, exhibit much greater chemical diversity, being distributed more evenly across chemical space. The most prominent differences between natural products and compounds in combinatorial chemistry libraries are the number of chiral centers (much higher in natural compounds), structural rigidity (higher in natural compounds) and number of aromatic moieties (higher in combinatorial chemistry libraries). Other chemical differences between these two groups include the nature of heteroatoms (O and N enriched in natural products, S and halogen atoms more often present in synthetic compounds), as well as the level of non-aromatic unsaturation (higher in natural products). As both structural rigidity and chirality are well-established factors in medicinal chemistry known to enhance a compound's specificity and efficacy as a drug, it has been suggested that natural products compare favourably to today's combinatorial chemistry libraries as potential lead molecules.
Screening
Two main approaches exist for the finding of new bioactive chemical entities from natural sources.
The first is sometimes referred to as random collection and screening of material, but the collection is far from random. Biological (often botanical) knowledge is often used to identify families that show promise. This approach is effective because only a small part of the earth's biodiversity has ever been tested for pharmaceutical activity. Also, organisms living in a species-rich environment need to evolve defensive and competitive mechanisms to survive. Those mechanisms might be exploited in the development of beneficial drugs.
A collection of plant, animal and microbial samples from rich ecosystems can potentially give rise to novel biological activities worth exploiting in the drug development process. One example of successful use of this strategy is the screening for antitumor agents by the National Cancer Institute, which started in the 1960s. Paclitaxel was identified from the Pacific yew tree Taxus brevifolia. Paclitaxel showed anti-tumour activity by a previously undescribed mechanism (stabilization of microtubules) and is now approved for clinical use for the treatment of lung, breast, and ovarian cancer, as well as for Kaposi's sarcoma. Early in the 21st century, Cabazitaxel (made by Sanofi, a French firm), another relative of taxol, was shown to be effective against prostate cancer; it too works by preventing the formation of microtubules, which pull the chromosomes apart in dividing cells (such as cancer cells). Other examples are: 1. Camptotheca (Camptothecin · Topotecan · Irinotecan · Rubitecan · Belotecan); 2. Podophyllum (Etoposide · Teniposide); 3a. Anthracyclines (Aclarubicin · Daunorubicin · Doxorubicin · Epirubicin · Idarubicin · Amrubicin · Pirarubicin · Valrubicin · Zorubicin); 3b. Anthracenediones (Mitoxantrone · Pixantrone).
The second main approach involves ethnobotany, the study of the general use of plants in society, and ethnopharmacology, an area inside ethnobotany, which is focused specifically on medicinal uses.
Artemisinin, an antimalarial agent from the sweet wormwood Artemisia annua, used in Chinese medicine since 200 BC, is one drug used as part of combination therapy for multiresistant Plasmodium falciparum.
Additionally, since machine learning has become more advanced, virtual screening is now an option for drug developers. AI algorithms are being used to perform virtual screening of chemical compounds, which involves predicting the activity of a compound against a specific target. By using machine learning algorithms to analyse large amounts of chemical data, researchers can identify potential new drug candidates that are more likely to be effective against a specific disease. Algorithms, such as Nearest-Neighbour classifiers, RF, extreme learning machines, SVMs, and deep neural networks (DNNs), are used for VS based on synthesis feasibility and can also predict in vivo activity and toxicity.
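The sketch below is a deliberately simplified illustration of such a virtual-screening classifier built with scikit-learn; it assumes compounds have already been encoded as fixed-length fingerprint bit vectors, the random arrays stand in for a real training set, and the random-forest model is only one of the algorithm choices mentioned above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 1024))   # 1,000 compounds x 1,024-bit fingerprints
y = rng.integers(0, 2, size=1000)           # 1 = active against the target, 0 = inactive
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
# Rank a (here, random) screening library by predicted probability of activity
library = rng.integers(0, 2, size=(50, 1024))
scores = model.predict_proba(library)[:, 1]
print("top candidates:", scores.argsort()[::-1][:5])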
Structural elucidation
The elucidation of the chemical structure is critical to avoid the re-discovery of a chemical agent that is already known for its structure and chemical activity. Mass spectrometry is a method in which individual compounds are identified based on their mass/charge ratio, after ionization. Chemical compounds exist in nature as mixtures, so the combination of liquid chromatography and mass spectrometry (LC-MS) is often used to separate the individual chemicals. Databases of mass spectra for known compounds are available and can be used to assign a structure to an unknown mass spectrum. Nuclear magnetic resonance spectroscopy is the primary technique for determining chemical structures of natural products. NMR yields information about individual hydrogen and carbon atoms in the structure, allowing detailed reconstruction of the molecule's architecture.
New Drug Application
When a drug is developed with evidence throughout its history of research to show it is safe and effective for the intended use in the United States, the company can file an application – the New Drug Application (NDA) – to have the drug commercialized and available for clinical application. NDA status enables the FDA to examine all submitted data on the drug to reach a decision on whether to approve or not approve the drug candidate based on its safety, specificity of effect, and efficacy of doses.
See also
References
Further reading
External links | Drug discovery | [
"Chemistry",
"Biology"
] | 4,869 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
571,341 | https://en.wikipedia.org/wiki/DOT%20%28graph%20description%20language%29 | DOT is a graph description language, developed as a part of the Graphviz project. DOT graphs are typically stored as files with the .gv or .dot filename extension — .gv is preferred, to avoid confusion with the .dot extension used by versions of Microsoft Word before 2007. dot is also the name of the main program to process DOT files in the Graphviz package.
Various programs can process DOT files. Some, such as dot, neato, twopi, circo, fdp, and sfdp, can read a DOT file and render it in graphical form. Others, such as gvpr, gc, acyclic, ccomps, sccmap, and tred, read DOT files and perform calculations on the represented graph. Finally, others, such as lefty, dotty, and grappa, provide an interactive interface. The GVedit tool combines a text editor and a non-interactive viewer. Most programs are part of the Graphviz package or use it internally.
DOT is historically an acronym for "DAG of tomorrow", as the successor to a DAG format and a dag program which handled only directed acyclic graphs.
Syntax
Graph types
Undirected graphs
At its simplest, DOT can be used to describe an undirected graph. An undirected graph shows simple relations between objects, such as reciprocal friendship between people. The graph keyword is used to begin a new graph, and nodes are described within curly braces. A double-hyphen (--) is used to show relations between the nodes.
// The graph name and the semicolons are optional
graph graphname {
a -- b -- c;
b -- d;
}
Directed graphs
Similar to undirected graphs, DOT can describe directed graphs, such as flowcharts and dependency trees. The syntax is the same as for undirected graphs, except the digraph keyword is used to begin the graph, and an arrow (->) is used to show relationships between nodes.
digraph graphname {
a -> b -> c;
b -> d;
}
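Because the syntax is so simple, DOT files are often generated programmatically. The following Python sketch (no external libraries; the file name and edge list are arbitrary examples) writes the directed graph above to a .gv file that can then be rendered with dot.
def to_dot(name, edges):
    # Serialize a list of (tail, head) pairs as a DOT digraph
    lines = ["digraph %s {" % name]
    lines += ["    %s -> %s;" % (tail, head) for tail, head in edges]
    lines.append("}")
    return "\n".join(lines)
dot_source = to_dot("graphname", [("a", "b"), ("b", "c"), ("b", "d")])
with open("graphname.gv", "w") as f:
    f.write(dot_source)
print(dot_source)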
Attributes
Various attributes can be applied to graphs, nodes and edges in DOT files. These attributes can control aspects such as color, shape, and line styles. For nodes and edges, one or more attribute–value pairs are placed in square brackets [] after a statement and before the semicolon (which is optional). Graph attributes are specified as direct attribute–value pairs under the graph element, where multiple attributes are separated by a comma or using multiple sets of square brackets, while node attributes are placed after a statement containing only the name of the node, but not the relations between the dots.
graph graphname {
// This attribute applies to the graph itself
size="1,1";
// The label attribute can be used to change the label of a node
a [label="Foo"];
// Here, the node shape is changed.
b [shape=box];
// These edges both have different line properties
a -- b -- c [color=blue];
b -- d [style=dotted];
// [style=invis] hides a node.
}
HTML-like labels are supported, although initially Graphviz did not handle them.
Comments
DOT supports C and C++ style single line and multiple line comments. In addition, it ignores lines with a number sign symbol # as their first character, like many interpreted languages.
Layout programs
The DOT language defines a graph, but does not provide facilities for rendering the graph. There are several programs that can be used to render, view, and manipulate graphs in the DOT language:
General
Graphviz – a collection of CLI utilities and libraries to manipulate and render graphs into different formats like SVG, PDF, PNG etc.
dot – CLI tool for conversion between DOT and other formats
JavaScript
Canviz – a JavaScript library for rendering DOT files
d3-graphviz – a JavaScript library based on Viz.js and D3.js that renders DOT graphs and supports animated transitions between graphs and interactive graph manipulation
Vis.js – a JavaScript library that accepts DOT as input for network graphs.
Viz.js – a JavaScript port of Graphviz that provides a simple wrapper for using it in the browser.
hpcc-js/wasm Graphviz – a fast WASM library for Graphviz similar to Viz.js
Java
Gephi – an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs
Grappa – a partial port of Graphviz to Java
graphviz-java – an open source partial port of Graphviz to Java, available from github.com
ZGRViewer – a DOT viewer
Other
Beluging – a Python- & Google Cloud Platform-based viewer of DOT and Beluga extensions
Delineate – a Rust application for Linux that can edit fully-featured DOT graphs with interactive preview, and export as PNG, SVG, or JPEG
dot2tex – a program to convert files from DOT to PGF/TikZ or PSTricks, both of which are rendered in LaTeX
OmniGraffle – a digital illustration application for macOS that can import a subset of DOT, producing an editable document (but the result cannot be exported back to DOT)
Tulip – a software framework in C++ that can import DOT files for analysis
VizierFX – an Apache Flex graph rendering library in ActionScript
Notes
See also
External links
DOT tutorial and specification
Drawing graphs with dot
Node, Edge and Graph Attributes
Node Shapes
Gallery of examples
Graphviz Online: instant conversion and visualization of DOT descriptions
Boost Graph Library
lisp2dot or tree2dot: convert Lisp programming language-like program trees to DOT language (designed for use with genetic programming)
Mathematical software
Graph description languages
Graph drawing | DOT (graph description language) | [
"Mathematics"
] | 1,210 | [
"Mathematical relations",
"Graph description languages",
"Graph theory",
"Mathematical software"
] |
573,343 | https://en.wikipedia.org/wiki/Refractory%20metals | Refractory metals are a class of metals that are extraordinarily resistant to heat and wear. The expression is mostly used in the context of materials science, metallurgy and engineering. The definition of which elements belong to this group differs. The most common definition includes five elements: two of the fifth period (niobium and molybdenum) and three of the sixth period (tantalum, tungsten, and rhenium). They all share some properties, including a melting point above 2000 °C and high hardness at room temperature. They are chemically inert and have a relatively high density. Their high melting points make powder metallurgy the method of choice for fabricating components from these metals. Some of their applications include tools to work metals at high temperatures, wire filaments, casting molds, and chemical reaction vessels in corrosive environments. Partly due to the high melting point, refractory metals are stable against creep deformation to very high temperatures.
Definition
Most definitions of the term 'refractory metals' list the extraordinarily high melting point as a key requirement for inclusion. By one definition, a melting point above a given threshold is necessary to qualify; this definition includes iridium, osmium, niobium, molybdenum, tantalum, tungsten, rhenium, rhodium, ruthenium and hafnium. The five elements niobium, molybdenum, tantalum, tungsten and rhenium are included in all definitions, while the widest definition, which sets a lower melting-point threshold, also includes elements such as titanium, vanadium, zirconium, and chromium. Technetium is not included because of its radioactivity, though it would otherwise have qualified under the widest definition.
Properties
Physical
Refractory metals have high melting points, with tungsten and rhenium having the highest of all elements; the melting points of the others are exceeded only by those of osmium and iridium, and by the sublimation point of carbon. These high melting points define most of their applications. All the metals are body-centered cubic except rhenium, which is hexagonal close-packed. The physical properties of the refractory elements vary significantly because they are members of different groups of the periodic table. The hardness, high melting and boiling points, and high enthalpies of atomization of these metals arise from the partial occupation of the outer d subshell, allowing the d electrons to participate in metallic bonding. This gives stiff, highly stable bonds to neighboring atoms and a body-centered cubic crystal structure that resists deformation. Moving to the right in the periodic table, more d electrons increase this effect, but as the d subshell fills they are pulled by the higher nuclear charge into the atom's inert core, reducing their ability to delocalize to form bonds with neighbors. These opposing effects result in groups 5 through 7 exhibiting the most refractory properties.
Creep resistance is a key property of the refractory metals. In metals, the starting of creep correlates with the melting point of the material; the creep in aluminium alloys starts at 200 °C, while for refractory metals temperatures above 1500 °C are necessary. This resistance against deformation at high temperatures makes the refractory metals suitable against strong forces at high temperature, for example in jet engines, or tools used during forging.
Chemical
The refractory metals show a wide variety of chemical properties because they are members of three distinct groups in the periodic table. They are easily oxidized, but this reaction is slowed down in the bulk metal by the formation of stable oxide layers on the surface (passivation). The oxide of rhenium in particular is more volatile than the metal, so at high temperature the stabilization against attack by oxygen is lost because the oxide layer evaporates. They are all relatively stable against acids.
Applications
Refractory metals, and alloys made from them, are used in lighting, tools, lubricants, nuclear reaction control rods, as catalysts, and for their chemical or electrical properties. Because of their high melting point, refractory metal components are never fabricated by casting. The process of powder metallurgy is used. Powders of the pure metal are compacted, heated using electric current, and further fabricated by cold working with annealing steps. Refractory metals and their alloys can be worked into wire, ingots, rebars, sheets or foil.
Molybdenum alloys
Molybdenum-based alloys are widely used, because they are cheaper than superior tungsten alloys. The most widely used alloy of molybdenum is the Titanium-Zirconium-Molybdenum alloy TZM, composed of 0.5% titanium and 0.08% of zirconium (with molybdenum being the rest). The alloy exhibits a higher creep resistance and strength at high temperatures, making service temperatures of above 1060 °C possible for the material. The high resistivity of Mo-30W, an alloy of 70% molybdenum and 30% tungsten, against the attack of molten zinc makes it the ideal material for casting zinc. It is also used to construct valves for molten zinc.
Molybdenum is used in mercury wetted reed relays, because molybdenum does not form amalgams and is therefore resistant to corrosion by liquid mercury.
Molybdenum is the most commonly used of the refractory metals. Its most important use is as a strengthening alloy of steel. Structural tubing and piping often contains molybdenum, as do many stainless steels. Its strength at high temperatures, resistance to wear and low coefficient of friction are all properties which make it invaluable as an alloying compound. Its excellent anti-friction properties lead to its incorporation in greases and oils where reliability and performance are critical. Automotive constant-velocity joints use grease containing molybdenum. The compound sticks readily to metal and forms a very hard, friction-resistant coating. Most of the world's molybdenum ore can be found in China, the USA, Chile and Canada.
Tungsten and its alloys
Tungsten was discovered in 1781 by the Swedish chemist Carl Wilhelm Scheele. Tungsten has the highest melting point of all metals, at 3,422 °C (6,192 °F).
Up to 22% rhenium is alloyed with tungsten to improve its high-temperature strength and corrosion resistance. Thorium is used as an alloying addition when electric arcs have to be established: ignition is easier and the arc burns more stably than without it. For powder metallurgy applications, binders have to be used for the sintering process. For the production of tungsten heavy alloys, binder mixtures of nickel and iron or nickel and copper are widely used. The tungsten content of the alloy is normally above 90%. The diffusion of the binder elements into the tungsten grains is low even at the sintering temperatures, and therefore the interior of the grains is pure tungsten.
Tungsten and its alloys are often used in applications where high temperatures are present but still a high strength is necessary and the high density is not troublesome. Tungsten wire filaments provide the vast majority of household incandescent lighting, but are also common in industrial lighting as electrodes in arc lamps. Lamps get more efficient in the conversion of electric energy to light with higher temperatures and therefore a high melting point is essential for the application as filament in incandescent light. Gas tungsten arc welding (GTAW, also known as tungsten inert gas (TIG) welding) equipment uses a permanent, non-melting electrode. The high melting point and the wear resistance against the electric arc makes tungsten a suitable material for the electrode.
Tungsten's high density and strength are also key properties for its use in weapon projectiles, for example as an alternative to depleted uranium for tank gun rounds. Its high melting point makes tungsten a good material for applications like rocket nozzles, for example in the UGM-27 Polaris. Some of the applications of tungsten are not related to its refractory properties but simply to its density. For example, it is used in balance weights for planes and helicopters or for heads of golf clubs. In these applications, similarly dense materials such as the more expensive osmium can also be used.
The most common use for tungsten is as the compound tungsten carbide in drill bits, machining and cutting tools. The largest reserves of tungsten are in China, with deposits in Korea, Bolivia, Australia, and other countries.
It also finds itself serving as a lubricant, antioxidant, in nozzles and bushings, as a protective coating and in many other ways. Tungsten can be found in printing inks, x-ray screens, in the processing of petroleum products, and flame proofing of textiles.
Niobium alloys
Niobium is nearly always found together with tantalum, and was named after Niobe, the daughter of the mythical Greek king Tantalus for whom tantalum was named. Niobium has many uses, some of which it shares with other refractory metals. It is unique in that it can be worked through annealing to achieve a wide range of strength and ductility, and is the least dense of the refractory metals. It can also be found in electrolytic capacitors and in the most practical superconducting alloys. Niobium can be found in aircraft gas turbines, vacuum tubes and nuclear reactors.
An alloy used for liquid rocket thruster nozzles, such as in the main engine of the Apollo Lunar Modules, is C103, which consists of 89% niobium, 10% hafnium and 1% titanium. Another niobium alloy was used for the nozzle of the Apollo Service Module. As niobium is oxidized at temperatures above 400 °C, a protective coating is necessary for these applications to prevent the alloy from becoming brittle.
Tantalum and its alloys
Tantalum is one of the most corrosion-resistant substances available.
Many important uses have been found for tantalum owing to this property, particularly in the medical and surgical fields, and also in harsh acidic environments. It is also used to make superior electrolytic capacitors. Tantalum films provide the second most capacitance per volume of any substance after Aerogel, and allow miniaturization of electronic components and circuitry. Many cellular phones and computers contain tantalum capacitors.
Rhenium alloys
Rhenium is the most recently discovered refractory metal. It is found in low concentrations with many other metals, in the ores of other refractory metals, platinum or copper ores. It is useful as an alloy to other refractory metals, where it adds ductility and tensile strength. Rhenium alloys are being used in electronic components, gyroscopes and nuclear reactors. Rhenium finds its most important use as a catalyst. It is used as a catalyst in reactions such as alkylation, dealkylation, hydrogenation and oxidation. However its rarity makes it the most expensive of the refractory metals.
Advantages and shortfalls
The strength and high-temperature stability of refractory metals make them suitable for hot metalworking applications and for vacuum furnace technology. Many special applications exploit these properties: for example, tungsten lamp filaments operate at temperatures up to 3073 K, and molybdenum furnace windings withstand 2273 K.
However, poor low-temperature fabricability and extreme oxidability at high temperatures are shortcomings of most refractory metals. Interactions with the environment can significantly influence their high-temperature creep strength. Application of these metals requires a protective atmosphere or coating.
The refractory metal alloys of molybdenum, niobium, tantalum, and tungsten have been applied to space nuclear power systems. These systems were designed to operate at temperatures from 1350 K to approximately 1900 K. The environment must not interact with the material in question; liquid alkali metals are used as the heat-transfer fluids, as is ultra-high vacuum.
The high-temperature creep strain of alloys must be limited for them to be used. The creep strain should not exceed 1–2%. An additional complication in studying creep behavior of the refractory metals is interactions with environment, which can significantly influence the creep behavior.
See also
Refractory – heat resistance of nonmetallic materials
References
Further reading
Metals
Metallurgy
Metals, Refractory | Refractory metals | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,616 | [
"Metals",
"Metallurgy",
"Refractory materials",
"Materials science",
"Refractory metals",
"Materials",
"Alloys",
"nan",
"Matter"
] |
573,489 | https://en.wikipedia.org/wiki/C3%20carbon%20fixation | {{DISPLAYTITLE: C3 carbon fixation}}
C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction:
CO2 + H2O + RuBP → (2) 3-phosphoglycerate
This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.)
Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The C3 plants, originating during the Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley.
C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth.
C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete C3 plants in these areas.
The isotopic signature of C3 plants shows a higher degree of 13C depletion than that of C4 plants, due to variation in the fractionation of carbon isotopes in oxygenic photosynthesis across plant types. Specifically, C3 plants do not have PEP carboxylase like C4 plants, so they rely only on ribulose-1,5-bisphosphate carboxylase (RuBisCO) to fix CO2 through the Calvin cycle. The enzyme RuBisCO discriminates between carbon isotopes, preferentially binding the lighter 12C isotope over 13C, contributing to the greater 13C depletion seen in C3 plants compared to C4 plants, especially since the C4 pathway uses PEP carboxylase in addition to RuBisCO.
Variations
Not all C3 carbon fixation pathways operate at the same efficiency.
Refixation
Bamboos and the related rice have an improved C3 efficiency. This improvement might be due to its ability to recapture CO2 produced during photorespiration, a behavior termed "carbon refixation". These plants achieve refixation by growing chloroplast extensions called "stromules" around the stroma in mesophyll cells, so that any photorespired CO2 from the mitochondria has to pass through the RuBisCO-filled chloroplast.
Refixation is also performed by a wide variety of other plants. The common approach, growing a larger bundle sheath, leads to C2 photosynthesis.
Synthetic glycolate pathway
C3 carbon fixation is prone to photorespiration (PR) during dehydration, accumulating toxic glycolate products. In the 2000s scientists used computer simulation combined with an optimization algorithm to figure out what parts of the metabolic pathway may be tuned to improve photosynthesis. According to simulation, improving glycolate metabolism would help significantly to reduce photorespiration.
Instead of optimizing specific enzymes on the PR pathway for glycolate degradation, South et al. decided to bypass PR altogether. In 2019, they transferred Chlamydomonas reinhardtii glycolate dehydrogenase and Cucurbita maxima malate synthase into the chloroplast of tobacco (a model organism). These enzymes, plus the chloroplast's own, create a catabolic cycle: acetyl-CoA combines with glyoxylate to form malate, which is then split into pyruvate and CO2; the former in turn splits into acetyl-CoA and CO2. By forgoing all transport among organelles, all the CO2 released will go into increasing the CO2 concentration in the chloroplast, helping with refixation. The end result is 24% more biomass. An alternative using E. coli glycerate pathway produced a smaller improvement of 13%. They are now working on moving this optimization into other crops like wheat.
References
Photosynthesis
Metabolic pathways
Carbon | C3 carbon fixation | [
"Chemistry",
"Biology"
] | 1,056 | [
"Metabolic pathways",
"Biochemistry",
"Metabolism",
"Photosynthesis"
] |
15,931,153 | https://en.wikipedia.org/wiki/Model%20complete%20theory | In model theory, a first-order theory is called model complete if every embedding of its models is an elementary embedding.
Equivalently, every first-order formula is equivalent to a universal formula.
This notion was introduced by Abraham Robinson.
Model companion and model completion
A companion of a theory T is a theory T* such that every model of T can be embedded in a model of T* and vice versa.
A model companion of a theory T is a companion of T that is model complete. Robinson proved that a theory has at most one model companion. Not every theory is model-companionable; for example, the theory of groups is not. However, if T is an ℵ0-categorical theory, then it always has a model companion.
A model completion for a theory T is a model companion T* such that for any model M of T, the theory of T* together with the diagram of M is complete. Roughly speaking, this means every model of T is embeddable in a model of T* in a unique way.
If T* is a model companion of T then the following conditions are equivalent:
T* is a model completion of T
T has the amalgamation property.
If T also has a universal axiomatization, both of the above are also equivalent to:
T* has elimination of quantifiers
Examples
Any theory with elimination of quantifiers is model complete.
The theory of algebraically closed fields is the model completion of the theory of fields. It is model complete but not complete.
The model completion of the theory of equivalence relations is the theory of equivalence relations with infinitely many equivalence classes, each containing an infinite number of elements.
The theory of real closed fields, in the language of ordered rings, is a model completion of the theory of ordered fields (or even ordered domains).
The theory of real closed fields, in the language of rings, is the model companion for the theory of formally real fields, but is not a model completion.
Non-examples
The theory of dense linear orders with a first and last element is complete but not model complete.
The theory of groups (in a language with symbols for the identity, product, and inverses) has the amalgamation property but does not have a model companion.
Sufficient condition for completeness of model-complete theories
If T is a model complete theory and there is a model of T that embeds into any model of T, then T is complete.
Notes
References
Mathematical logic
Model theory | Model complete theory | [
"Mathematics"
] | 500 | [
"Mathematical logic",
"Model theory"
] |
15,932,531 | https://en.wikipedia.org/wiki/Weak%20focusing | Weak focusing occurs in particle accelerators when charged particles are passing through uniform magnetic fields, causing them to move in circular paths due to the Lorentz force. Because of the circular movement, the orbits of two particles with slightly different positions may approximate or even cross each other.
Because a particle beam has a finite emittance, this effect was used in cyclotrons and early synchrotrons to prevent the growth of deviations from the desired particle orbit. Due to its definition, it also occurs in the dipole magnets of modern accelerator facilities and must be considered in beam optics calculations. In modern facilities, most of the beam focusing is usually done by quadrupole magnets, using Strong focusing to enable smaller beam sizes and vacuum chambers, thus reducing the average magnet size.
References
Accelerator physics | Weak focusing | [
"Physics"
] | 163 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
2,977,884 | https://en.wikipedia.org/wiki/Conjugation%20of%20isometries%20in%20Euclidean%20space | In a group, the conjugate by g of h is ghg−1.
Translation
If h is a translation, then its conjugation by an isometry can be described as applying the isometry to the translation:
the conjugation of a translation by a translation is the first translation
the conjugation of a translation by a rotation is a translation by a rotated translation vector
the conjugation of a translation by a reflection is a translation by a reflected translation vector
Thus the conjugacy class within the Euclidean group E(n) of a translation is the set of all translations by the same distance.
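As a quick numerical check of the second rule above (conjugating a translation by a rotation gives a translation by the rotated vector), the following NumPy sketch represents each isometry as x ↦ Ax + b; the particular angle, vector, and test point are arbitrary.
import numpy as np
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation fixing the origin
v = np.array([2.0, 1.0])                          # translation vector
def apply(A, b, x):
    # apply the isometry x -> A x + b
    return A @ x + b
x = np.array([0.3, -1.7])                         # arbitrary test point
# Conjugate of the translation T_v by the rotation: R o T_v o R^(-1)
y = apply(R, np.zeros(2), apply(np.eye(2), v, apply(R.T, np.zeros(2), x)))
print(np.allclose(y, x + R @ v))                  # True: a translation by the rotated vector R v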
The smallest subgroup of the Euclidean group containing all translations by a given distance is the set of all translations. So, this is the conjugate closure of a singleton containing a translation.
Thus E(n) is a semidirect product of the orthogonal group O(n) and the subgroup of translations T, and O(n) is isomorphic with the quotient group of E(n) by T:
O(n) ≅ E(n) / T
Thus the Euclidean group is partitioned into subsets (the cosets of T), each consisting of a single isometry that keeps the origin fixed, combined with all translations.
Each isometry is given by an orthogonal matrix A in O(n) and a vector b:
x ↦ Ax + b
and each subset in the quotient group is given by the matrix A only.
Similarly, for the special orthogonal group SO(n) we have
SO(n) ≅ E+(n) / T
Inversion
The conjugate of the inversion in a point by a translation is the inversion in the translated point, etc.
Thus the conjugacy class within the Euclidean group E(n) of inversion in a point is the set of inversions in all points.
Since a combination of two inversions is a translation, the conjugate closure of a singleton containing inversion in a point is the set of all translations and the inversions in all points. This is the generalized dihedral group dih (Rn).
Similarly { I, −I } is a normal subgroup of O(n), and we have:
E(n) / dih (Rn) ≅ O(n) / { I, −I }
For odd n we also have:
O(n) ≅ SO(n) × { I, −I }
and hence not only
O(n) / SO(n) ≅ { I, −I }
but also:
O(n) / { I, −I } ≅ SO(n)
For even n we have:
E+(n) / dih (Rn) ≅ SO(n) / { I, −I }
Rotation
In 3D, the conjugate by a translation of a rotation about an axis is the corresponding rotation about the translated axis. Such a conjugation produces the screw displacement known to express an arbitrary Euclidean motion according to Chasles' theorem.
The conjugacy class within the Euclidean group E(3) of a rotation about an axis is a rotation by the same angle about any axis.
The conjugate closure of a singleton containing a rotation in 3D is E+(3).
In 2D it is different in the case of a k-fold rotation: the conjugate closure contains k rotations (including the identity) combined with all translations.
E(2) has quotient group O(2) / Ck and E+(2) has quotient group SO(2) / Ck . For k = 2 this was already covered above.
Reflection
The conjugates of a reflection are reflections with a translated, rotated, and reflected mirror plane. The conjugate closure of a singleton containing a reflection is the whole E(n).
Rotoreflection
The left coset, and also the right coset, of a reflection in a plane combined with a rotation by a given angle about a perpendicular axis is the set of all combinations of a reflection in the same or a parallel plane with a rotation by the same angle about the same or a parallel axis, preserving orientation.
Isometry groups
Two isometry groups are said to be equal up to conjugacy with respect to affine transformations if there is an affine transformation such that all elements of one group are obtained by taking the conjugates by that affine transformation of all elements of the other group. This applies for example for the symmetry groups of two patterns which are both of a particular wallpaper group type. If we would just consider conjugacy with respect to isometries, we would not allow for scaling, and in the case of a parallelogrammatic lattice, change of shape of the parallelogram. Note however that the conjugate with respect to an affine transformation of an isometry is in general not an isometry, although volume (in 2D: area) and orientation are preserved.
Cyclic groups
Cyclic groups are Abelian, so the conjugate by every element of every element is the latter.
Zmn / Zm ≅ Zn.
Zmn is the direct product of Zm and Zn if and only if m and n are coprime. Thus e.g. Z12 is the direct product of Z3 and Z4, but not of Z6 and Z2.
Dihedral groups
Consider the 2D isometry point group Dn. The conjugates of a rotation are the same and the inverse rotation. The conjugates of a reflection are the reflections rotated by any multiple of the full rotation unit. For odd n these are all reflections, for even n half of them.
This group, and more generally, abstract group Dihn, has the normal subgroup Zm for all divisors m of n, including n itself.
Additionally, Dih2n has two normal subgroups isomorphic with Dihn. They both contain the same group elements forming the group Zn, but each has additionally one of the two conjugacy classes of Dih2n \ Z2n.
In fact:
Dihmn / Zn ≅ Dihm
Dih2n / Dihn ≅ Z2
Dih4n+2 ≅ Dih2n+1 × Z2
References
Euclidean symmetries
Group theory | Conjugation of isometries in Euclidean space | [
"Physics",
"Mathematics"
] | 1,268 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Mathematical relations",
"Symmetry"
] |
2,978,332 | https://en.wikipedia.org/wiki/Cathodic%20arc%20deposition | Cathodic arc deposition or Arc-PVD is a physical vapor deposition technique in which an electric arc is used to vaporize material from a cathode target. The vaporized material then condenses on a substrate, forming a thin film. The technique can be used to deposit metallic, ceramic, and composite films.
History
Industrial use of modern cathodic arc deposition technology originated in the Soviet Union around 1960–1970.
By the late 1970s, the Soviet government had released the use of this technology to the West.
Among the many designs in the USSR at that time, the design by L. P. Sablev et al. was allowed to be used outside the USSR.
Process
The arc evaporation process begins with the striking of a high current, low voltage arc on the surface of a cathode (known as the target) that gives rise to a small (usually a few micrometres wide), highly energetic emitting area known as a cathode spot. The localised temperature at the cathode spot is extremely high (around 15000 °C), which results in a high velocity (10 km/s) jet of vapourised cathode material, leaving a crater behind on the cathode surface. The cathode spot is only active for a short period of time, then it self-extinguishes and re-ignites in a new area close to the previous crater. This behaviour causes the apparent motion of the arc.
As the arc is basically a current carrying conductor it can be influenced by the application of an electromagnetic field, which in practice is used to rapidly move the arc over the entire surface of the target, so that the total surface is eroded over time.
The arc has an extremely high power density resulting in a high level of ionization (30-100%), multiple charged ions, neutral particles, clusters and macro-particles (droplets). If a reactive gas is introduced during the evaporation process, dissociation, ionization and excitation can occur during interaction with the ion flux and a compound film will be deposited.
One downside of the arc evaporation process is that if the cathode spot stays at an evaporative point for too long it can eject a large amount of macro-particles or droplets. These droplets are detrimental to the performance of the coating as they are poorly adhered and can extend through the coating. Worse still if the cathode target material has a low melting point such as aluminium the cathode spot can evaporate through the target resulting in either the target backing plate material being evaporated or cooling water entering the chamber. Therefore, magnetic fields as mentioned previously are used to control the motion of the arc. If cylindrical cathodes are used the cathodes can also be rotated during deposition. By not allowing the cathode spot to remain in one position too long aluminium targets can be used and the number of droplets is reduced. Some companies also use filtered arcs that use magnetic fields to separate the droplets from the coating flux.
Equipment design
A Sablev type Cathodic arc source, which is the most widely used in the West, consists of a short cylindrically shaped, electrically conductive target at the cathode with one open end. This target has an electrically-floating metal ring surrounding it, working as an arc confinement ring (Strel'nitskij shield). The anode for the system can be either the vacuum chamber wall or a discrete anode. Arc spots are generated by a mechanical trigger (or igniter) striking on the open end of the target making a temporary short circuit between the cathode and anode. After the arc spots are generated they can be steered by a magnetic field, or move randomly in absence of magnetic field.
The plasma beam from a Cathodic Arc source contains some larger clusters of atoms or molecules (so called macro-particles), which prevent it from being useful for some applications without some kind of filtering.
There are many designs for macro-particle filters; the most studied design is based on the work by I. I. Aksenov et al. in the 1970s. It consists of a quarter-torus duct bent at 90 degrees from the arc source, and the plasma is guided out of the duct by the principle of plasma optics.
There are also other interesting designs, such as one which incorporates a straight duct filter built in with a truncated-cone-shaped cathode, as reported by D. A. Karpov in the 1990s. This design remains quite popular among both thin hard-film coaters and researchers in Russia and other former USSR countries.
Cathodic arc sources can be made into a long tubular shape (extended-arc) or a long rectangular shape, but both designs are less popular.
Applications
Cathodic arc deposition is actively used to synthesize extremely hard films to protect the surface of cutting tools and extend their life significantly. A wide variety of thin hard-film, superhard, and nanocomposite coatings can be synthesized by this technology, including TiN, TiAlN, CrN, ZrN, AlCrTiN and TiAlSiN.
This is also used quite extensively particularly for carbon ion deposition to create diamond-like carbon films. Because the ions are blasted from the surface ballistically, it is common for not only single atoms, but larger clusters of atoms to be ejected. Thus, this kind of system requires a filter to remove atom clusters from the beam before deposition.
The DLC film from a filtered-arc contains an extremely high percentage of sp3 diamond which is known as tetrahedral amorphous carbon, or ta-C.
Filtered Cathodic arc can be used as metal ion/plasma source for Ion implantation and Plasma Immersion Ion Implantation and Deposition (PIII&D).
See also
Ion beam deposition
Physical vapor deposition
References
SVC "51st Annual Technical Conference Proceedings" (2008) Society of Vacuum Coaters, ISSN 0737-5921 (previous proceedings available on CD from SVC Publications)
A. Anders, "Cathodic Arcs: From Fractal Spots to Energetic Condensation" (2008) Springer, New York.
R. L. Boxman, D. M. Sanders, and P. J. Martin (editors) "Handbook of Vacuum Arc Science and Technology"(1995) Noyes Publications, Park Ridge, N.J.
Brown, I.G., Annu. Rev. Mat. Sci. 28, 243 (1998).
Sablev et al., US Patent #3,783,231, 01 Jan. 1974
Sablev et al., US Patent #3,793,179, 19 Feb. 1974
D. A. Karpov, "Cathodic arc sources and macroparticle filtering", Surface and Coatings technology 96 (1997) 22-23
S. Surinphong, "Basic Knowledge about PVD Systems and Coatings for Tools Coating" (1998), in Thai language
A. I. Morozov, Reports of the Academy of Sciences of the USSR, 163 (1965) 1363, in Russian language
I. I. Aksenov, V. A. Belous, V. G. Padalka, V. M. Khoroshikh, "Transport of plasma streams in a curvilinear plasma-optics system", Soviet Journal of Plasma Physics, 4 (1978) 425
https://www.researchgate.net/publication/273004395_Arc_source_designs
https://www.researchgate.net/publication/234202890_Transport_of_plasma_streams_in_a_curvilinear_plasma-optics_system
Industrial processes
Physical vapor deposition techniques
Thin film deposition
Coatings | Cathodic arc deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,595 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Planes (geometry)",
"Solid state engineering"
] |
2,979,145 | https://en.wikipedia.org/wiki/Delta%20neutral | In finance, delta neutral describes a portfolio of related financial securities, in which the portfolio value remains unchanged when small changes occur in the value of the underlying security (having zero delta). Such a portfolio typically contains options and their corresponding underlying securities such that positive and negative delta components offset, resulting in the portfolio's value being relatively insensitive to changes in the value of the underlying security.
A related term, delta hedging, is the process of setting or keeping a portfolio as close to delta-neutral as possible. In practice, maintaining a zero delta is very complex because there are risks associated with re-hedging on large movements in the underlying stock's price, and research indicates portfolios tend to have lower cash flows if re-hedged too frequently. Delta hedging may be accomplished by trading the underlying securities of the portfolio.
Mathematical interpretation
Delta measures the sensitivity of the value of an option to changes in the price of the underlying stock assuming all other variables remain unchanged.
Mathematically, delta is represented as the partial derivative
Δ = ∂V/∂S
of the option's fair value V with respect to the spot price S of the underlying security.
Delta is a function of S, strike price, and time to expiry. Therefore, if a position is delta neutral (or, instantaneously delta-hedged) its instantaneous change in value, for an infinitesimal change in the value of the underlying security, will be zero; see Hedge (finance). Since Delta measures the exposure of a derivative to changes in the value of the underlying, a portfolio that is delta neutral is effectively hedged, in the sense that its overall value will not change for small changes in the price of its underlying instrument.
Techniques
Options market makers, or others, may form a delta neutral portfolio using related options instead of the underlying. The portfolio's delta (assuming the same underlier) is then the sum of all the individual options' deltas. This method can also be used when the underlier is difficult to trade, for instance when an underlying stock is hard to borrow and therefore cannot be sold short.
For example, consider the portfolio Π = V − Δ·S, in which an option has the value V and the stock has the value S. If we assume V is approximately linear in S with slope Δ = ∂V/∂S, then ∂Π/∂S = ∂V/∂S − Δ = 0, so the change in the value of Π for a small change in S is approximately 0.
Theory
The existence of a delta neutral portfolio was shown as part of the original proof of the Black–Scholes model, the first comprehensive model to produce correct prices for some classes of options. See Black-Scholes: Derivation.
From the Taylor expansion of the value of an option, we get the change in the value of an option, ΔV, for a change ΔS in the value of the underlier:
ΔV ≈ Δ·ΔS + (1/2)Γ·(ΔS)²
where Δ = ∂V/∂S (delta) and Γ = ∂²V/∂S² (gamma); see Greeks (finance).
For any small change in the underlier, we can ignore the second-order term and use the quantity Δ to determine how much of the underlier to buy or sell to create a hedged portfolio. However, when the change in the value of the underlier is not small, the second-order term, (1/2)Γ·(ΔS)², cannot be ignored: see Convexity (finance).
In practice, maintaining a delta neutral portfolio requires continuous recalculation of the position's Greeks and rebalancing of the underlier's position. Typically, this rebalancing is performed daily or weekly.
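A minimal sketch of one rebalancing step, assuming the Black–Scholes delta of a European call (the function and parameter names are illustrative, not from the text above):

```python
import math
from statistics import NormalDist

def bs_call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return NormalDist().cdf(d1)

# Long 100 calls (each on one share); hedge by shorting delta * 100 shares.
S, K, T, r, sigma = 100.0, 105.0, 0.5, 0.01, 0.2
n_options = 100
delta = bs_call_delta(S, K, T, r, sigma)
hedge_shares = -delta * n_options   # negative: sell this many shares short
print(f"delta per option = {delta:.4f}, shares to short = {-hedge_shares:.1f}")
```

As the stock price, volatility, or time to expiry change, the computed delta changes, and the share position must be adjusted to stay near zero net delta.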
References
External links
Delta Hedging, investopedia.com
Theory & Application for Delta Hedging
Financial markets
Derivatives (finance)
Mathematical finance | Delta neutral | [
"Mathematics"
] | 717 | [
"Applied mathematics",
"Mathematical finance"
] |
2,979,341 | https://en.wikipedia.org/wiki/Diameter%20%28graph%20theory%29 | In graph theory, the diameter of a connected undirected graph is the farthest distance between any two of its vertices. That is, it is the diameter of a set for the set of vertices of the graph, and for the shortest-path distance in the graph. Diameter may be considered either for weighted or for unweighted graphs. Researchers have studied the problem of computing the diameter, both in arbitrary graphs and in special classes of graphs.
The diameter of a disconnected graph may be defined to be infinite, or undefined.
Graphs of low diameter
The degree diameter problem seeks tight relations between the diameter, number of vertices, and degree of a graph. One way of formulating it is to ask for the largest graph with given bounds on its degree and diameter. For any fixed degree, this maximum size is exponential in the diameter, with the base of the exponent depending on the degree.
The girth of a graph, the length of its shortest cycle, can be at most 2d + 1 for a graph of diameter d. The regular graphs for which the girth is exactly 2d + 1 are the Moore graphs. Only finitely many Moore graphs exist, but their exact number is unknown. They provide the solutions to the degree diameter problem for their degree and diameter.
Small-world networks are a class of graphs with low diameter, modeling the real-world phenomenon of six degrees of separation in social networks.
Algorithms
In arbitrary graphs
The diameter of a graph can be computed by using a shortest path algorithm to compute shortest paths between all pairs of vertices, and then taking the maximum of the distances that it computes. For instance, in a graph with positive edge weights, this can be done by repeatedly using Dijkstra's algorithm, once for each possible starting vertex. In a graph with n vertices and m edges, this takes time O(nm + n^2 log n). Computing all-pairs shortest paths is the fastest known method for computing the diameter of a weighted graph exactly.
In an unweighted graph, Dijkstra's algorithm may be replaced by breadth-first search, giving time O(nm). Alternatively, the diameter may be computed using an algorithm based on fast matrix multiplication, in time proportional to the time for multiplying n × n matrices, approximately O(n^2.37) using known matrix multiplication algorithms. For sparse graphs, with few edges, repeated breadth-first search is faster than matrix multiplication. Assuming the exponential time hypothesis, repeated breadth-first search is near-optimal: this hypothesis implies that no algorithm can achieve time O(m^(2−ε)) for any ε > 0.
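A short sketch of the repeated breadth-first-search approach for unweighted graphs described above (adjacency-list input; the function names are ours):

```python
from collections import deque

def eccentricity(adj, source):
    """Largest BFS distance from source; adj maps vertex -> iterable of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(adj):
    """Maximum eccentricity over all vertices (assumes the graph is connected)."""
    return max(eccentricity(adj, s) for s in adj)

# Example: a path on 4 vertices has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert diameter(path) == 3
```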
It is possible to approximate the diameter of a weighted graph to within an approximation ratio of 3/2, in time Õ(m√n), where the Õ notation hides logarithmic factors in the time bound. Under the exponential time hypothesis, no substantially more accurate approximation, substantially faster than all pairs shortest paths, is possible.
In special classes of graphs
The diameter can be computed in linear time for interval graphs, and in near-linear time for graphs of bounded treewidth. In median graphs, the diameter can be found in subquadratic time.
In any class of graphs closed under graph minors, such as the planar graphs, it is possible to compute the diameter in subquadratic time, with an exponent depending on the graph family.
See also
Diameter (group theory), the diameter of a Cayley graph of the group, for generators chosen to make this diameter as large as possible
, connecting pairs of triangulations by local moves
References
Graph distance
Graph invariants
Computational problems in graph theory | Diameter (graph theory) | [
"Mathematics"
] | 698 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Graph invariants",
"Mathematical relations",
"Mathematical problems",
"Graph distance"
] |
2,980,233 | https://en.wikipedia.org/wiki/Focused%20ion%20beam | Focused ion beam, also known as FIB, is a technique used particularly in the semiconductor industry, materials science and increasingly in the biological field for site-specific analysis, deposition, and ablation of materials. A FIB setup is a scientific instrument that resembles a scanning electron microscope (SEM). However, while the SEM uses a focused beam of electrons to image the sample in the chamber, a FIB setup uses a focused beam of ions instead. FIB can also be incorporated in a system with both electron and ion beam columns, allowing the same feature to be investigated using either of the beams. FIB should not be confused with using a beam of focused ions for direct write lithography (such as in proton beam writing). These are generally quite different systems where the material is modified by other mechanisms.
Ion beam source
The most widespread instruments use liquid metal ion sources (LMIS), especially gallium ion sources. Ion sources based on elemental gold and iridium are also available. In a gallium LMIS, gallium metal is placed in contact with a tungsten needle, and heated gallium wets the tungsten and flows to the tip of the needle, where the opposing forces of surface tension and electric field form the gallium into a cusp shaped tip called a Taylor cone. The tip radius of this cone is extremely small (~2 nm). The huge electric field at this small tip (greater than 1 × 10^8 volts per centimeter) causes ionization and field emission of the gallium atoms.
Source ions are then generally accelerated to an energy of 1–50 keV, and focused onto the sample by electrostatic lenses. LMIS produce high current density ion beams with a very small energy spread. A modern FIB can deliver tens of nanoamperes of current to a sample, or can image the sample with a spot size on the order of a few nanometers.
More recently, instruments using plasma beams of noble gas ions, such as xenon, have become available more widely.
Principle
Focused ion beam (FIB) systems have been produced commercially for approximately twenty years, primarily for large semiconductor manufacturers. FIB systems operate in a similar fashion to a scanning electron microscope (SEM) except, rather than a beam of electrons and as the name implies, FIB systems use a finely focused beam of ions (usually gallium) that can be operated at low beam currents for imaging or at high beam currents for site specific sputtering or milling.
As the diagram on the right shows, the gallium (Ga+) primary ion beam hits the sample surface and sputters a small amount of material, which leaves the surface as either secondary ions (i+ or i−) or neutral atoms (n0). The primary beam also produces secondary electrons (e−). As the primary beam rasters on the sample surface, the signal from the sputtered ions or secondary electrons is collected to form an image.
At low primary beam currents, very little material is sputtered and modern FIB systems can easily achieve 5 nm imaging resolution (imaging resolution with Ga ions is limited to ~5 nm by sputtering and detector efficiency). At higher primary currents, a great deal of material can be removed by sputtering, allowing precision milling of the specimen down to a sub micrometer or even a nano scale.
If the sample is non-conductive, a low energy electron flood gun can be used to provide charge neutralization. In this manner, by imaging with positive secondary ions using the positive primary ion beam, even highly insulating samples may be imaged and milled without a conducting surface coating, as would be required in an SEM.
Until recently, the overwhelming usage of FIB has been in the semiconductor industry. Such applications as defect analysis, circuit modification, photomask repair and transmission electron microscope (TEM) sample preparation of site specific locations on integrated circuits have become commonplace procedures. The latest FIB systems have high resolution imaging capability; this capability coupled with in situ sectioning has eliminated the need, in many cases, to examine FIB sectioned specimens in a separate SEM instrument. SEM imaging is still required for the highest resolution imaging and to prevent damage to sensitive samples. However, the combination of SEM and FIB columns onto the same chamber enables the benefits of both to be utilized.
FIB imaging
At lower beam currents, FIB imaging resolution begins to rival the more familiar scanning electron microscope (SEM) in terms of imaging topography, however the FIB's two imaging modes, using secondary electrons and secondary ions, both produced by the primary ion beam, offer many advantages over SEM.
FIB secondary electron images show intense grain orientation contrast. As a result, grain morphology can be readily imaged without resorting to chemical etching. Grain boundary contrast can also be enhanced through careful selection of imaging parameters. FIB secondary ion images also reveal chemical differences, and are especially useful in corrosion studies, as secondary ion yields of metals can increase by three orders of magnitude in the presence of oxygen, clearly revealing the presence of corrosion.
Another advantage of FIB secondary electron imaging is the fact that the ion beam does not alter the signal from fluorescent probes used in the labelling of proteins, thus creating the opportunity to correlate FIB secondary electron images with images obtained by fluorescence microscopes.
Etching
Unlike an electron microscope, FIB is inherently destructive to the specimen. When the high-energy gallium ions strike the sample, they will sputter atoms from the surface. Gallium atoms will also be implanted into the top few nanometers of the surface, and the surface will be made amorphous.
Because of the sputtering capability, the FIB is used as a micro- and nano-machining tool, to modify or machine materials at the micro- and nanoscale. FIB micro machining has become a broad field of its own, but nano machining with FIB is a field that is still developing. Commonly the smallest beam size for imaging is 2.5–6 nm. The smallest milled features are somewhat larger (10–15 nm) as this is dependent on the total beam size and interactions with the sample being milled.
FIB tools are designed to etch or machine surfaces; an ideal FIB might machine away one atomic layer without any disruption of the atoms in the next layer, or any residual disruption above the surface. Yet currently, because of sputtering, the machining typically roughens surfaces at sub-micrometer length scales.
Deposition
A FIB can also be used to deposit material via ion beam induced deposition. FIB-assisted chemical vapor deposition occurs when a gas, such as tungsten hexacarbonyl (W(CO)6), is introduced to the vacuum chamber and allowed to chemisorb onto the sample. By scanning an area with the beam, the precursor gas will be decomposed into volatile and non-volatile components; the non-volatile component, such as tungsten, remains on the surface as a deposition. This is useful, as the deposited metal can be used as a sacrificial layer, to protect the underlying sample from the destructive sputtering of the beam. From nanometers to hundreds of micrometers in length, tungsten metal deposition allows metal lines to be put right where needed. Other materials such as platinum, cobalt, carbon, gold, etc., can also be locally deposited. Gas assisted deposition and FIB etching process are shown below.
FIB is often used in the semiconductor industry to patch or modify an existing semiconductor device. For example, in an integrated circuit, the gallium beam could be used to cut unwanted electrical connections, and/or to deposit conductive material in order to make a connection. The high level of surface interaction is exploited in patterned doping of semiconductors. FIB is also used for maskless implantation.
For TEM preparation
The FIB is also commonly used to prepare samples for the transmission electron microscope. The TEM requires very thin samples, typically ~100 nanometers or less. Other techniques, such as ion milling or electropolishing can be used to prepare such thin samples. However, the nanometer-scale resolution of the FIB allows the exact region of interest to be chosen, such as perhaps a grain boundary or defect in a material. This is vital, for example, in integrated circuit failure analysis. If a particular transistor out of several million on a chip is bad, the only tool capable of preparing an electron microscope sample of that single transistor is the FIB.
The same protocol used for preparing samples to transmission electron microscopy can also be used to select a micro area of a sample, extract it and prepare it for analysis using secondary ion mass spectrometry (SIMS).
The drawbacks to FIB sample preparation are the above-mentioned surface damage and implantation, which produce noticeable effects when using techniques such as high-resolution "lattice imaging" TEM or electron energy loss spectroscopy. This damaged layer can be minimized by FIB milling with lower beam voltages, or by further milling with a low-voltage argon ion beam after completion of the FIB process.
FIB preparation can be used with cryogenically frozen samples in a suitably equipped instrument, allowing cross sectional analysis of samples containing liquids or fats, such as biological samples, pharmaceuticals, foams, inks, and food products.
FIB is also used for secondary ion mass spectrometry (SIMS). The ejected secondary ions are collected and analyzed after the surface of the specimen has been sputtered with a primary focused ion beam.
For transfer of sensitive samples
For a minimal introduction of stress and bending to transmission electron microscopy (TEM) samples (lamellae, thin films, and other mechanically and beam sensitive samples), when transferring inside a focused ion beam (FIB), flexible metallic nanowires can be attached to a typically rigid micromanipulator.
The main advantages of this method include a significant reduction of sample preparation time (quick welding and cutting of nanowire at low beam current), and minimization of stress-induced bending, Pt contamination, and ion beam damage.
This technique is particularly suitable for in situ electron microscopy sample preparation.
For Atom Probe sample preparation
The same successive milling steps applied when making TEM samples can be applied to make conical samples for atom probe tomography. In this case the ion beam is moved in an annular milling pattern, with the inner milling circle being made progressively smaller. The beam current is generally reduced as the inner circle becomes smaller, to avoid damaging or destroying the sample.
FIB tomography
The focused ion beam has become a powerful tool for site-specific 3D imaging of sub-micron features in a sample. In this FIB tomography technique, the sample is sequentially milled using an ion beam perpendicular to the specimen while imaging the newly exposed surface using an electron beam. This so-called slice-and-view approach allows larger scale nano-structures to be characterized across the many imaging modes available to an SEM, including secondary electron, backscattered electron, and energy dispersive x-ray measurement. The process is destructive, since the specimen is being sequentially milled away after each image is collected. The collected series of images is then reconstructed into a 3D volume by registering the image stack and removing artifacts. The predominant artifact that degrades FIB tomography is ion mill curtaining, where mill patterns form large aperiodic stripes in each image. The ion mill curtaining can be removed using destriping algorithms. FIB tomography can be done at both room and cryo temperatures as well as on both materials and biological samples.
History
History of FIB technology
1975: The first FIB systems based on field emission technology were developed by Levi-Setti and by Orloff and Swanson and used gas field ionization sources (GFISs).
1978: The first FIB based on an LMIS was built by Seliger et al.
Physics of LMIS
1600: Gilbert documented that fluid under high tension forms a cone.
1914: Zeleny observed and filmed cones and jets
1959: Feynman suggested the use of ion beams.
1964: Taylor produced exactly conical solution to equations of electro hydrodynamics (EHD)
1975: Krohn and Ringo produced first high brightness ion source: LMIS
Some pioneers of LMIS and FIB
Mahoney (1969)
Sudraud et al. Paris XI Orsay (1974)
Hughes Research Labs, Seliger (1978)
Hughes Research Labs, Kubena (1978–1993)
University of Oxford Mair (1980)
Culham UK, Roy Clampitt Prewett (1980)
Oregon Graduate Center, L. Swanson (1980)
Oregon Graduate Center, J. Orloff (1974)
MIT, J. Melngailis (1980)
Helium ion microscope (HeIM)
Another ion source seen in commercially available instruments is a helium ion source, which is inherently less damaging to the sample than Ga ions although it will still sputter small amounts of material especially at high magnifications and long scan times. As helium ions can be focused into a small probe size and provide a much smaller sample interaction than high energy (>1 kV) electrons in the SEM, the He ion microscope can generate equal or higher resolution images with good material contrast and a higher depth of focus. Commercial instruments are capable of sub 1 nm resolution.
Wien filter in focused ion beam setup
Imaging and milling with Ga ions always result in Ga incorporation near the sample surface. As the sample surface is sputtered away at a rate proportional to the sputtering yield and the ion flux (ions per area per time), the Ga is implanted further into the sample, and a steady-state profile of Ga is reached. This implantation is often a problem in semiconductor applications, where silicon can be amorphised by the gallium. As an alternative to Ga LMI sources, mass-filtered columns have been developed, based on Wien filter technology. Such sources include Au-Si, Au-Ge and Au-Si-Ge sources providing Si, Cr, Fe, Co, Ni, Ge, In, Sn, Au, Pb and other elements.
The principle of a Wien filter is based on the equilibrium of the opposing forces induced by perpendicular electrostatic and magnetic fields acting on accelerated particles. Ions of the proper mass travel on a straight trajectory and pass through the mass selection aperture, while ions of other masses are stopped.
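As a brief worked relation (textbook Wien-filter force balance, stated here for orientation rather than taken from the cited designs): the balance fixes the one velocity that passes undeflected, which, for ions accelerated through a potential U, selects a mass-to-charge ratio.

```latex
% Pass (undeflected) condition: electric and magnetic forces cancel for a single velocity
qE \;=\; qvB \quad\Longrightarrow\quad v \;=\; \frac{E}{B},
% and for ions accelerated through a potential U (so that qU = \tfrac{1}{2} m v^2)
\qquad \frac{m}{q} \;=\; \frac{2U}{v^{2}} \;=\; \frac{2\,U B^{2}}{E^{2}} .
```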
Besides allowing the use of sources other than gallium, these columns can switch between different species simply by adjusting the properties of the Wien filter. Larger ions can be used for rapid milling before refining the contours with smaller ones. Users also benefit from the possibility of doping their samples with elements from suitable alloy sources.
The latter property has found great interest in the investigation of magnetic materials and devices. Khizroev and Litvinov have shown, with the help of magnetic force microscopy (MFM), that there is a critical dose of ions that a magnetic material can be exposed to without experiencing a change in its magnetic properties. Exploiting FIB from such an unconventional perspective is especially favourable today, when the future of so many novel technologies depends on the ability to rapidly fabricate prototype nanoscale magnetic devices.
See also
Confocal microscopy
Ion milling machine
Powder diffraction
Ultrafast x-ray
X-ray crystallography
X-ray scattering techniques
References
Further reading
Electron microscopy
Scientific techniques
Semiconductor device fabrication
Thin film deposition | Focused ion beam | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 3,174 | [
"Electron",
"Electron microscopy",
"Thin film deposition",
"Microtechnology",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Microscopy",
"Planes (geometry)",
"Solid state engineering"
] |
2,980,361 | https://en.wikipedia.org/wiki/Immunoglobulin%20heavy%20chain | The immunoglobulin heavy chain (IgH) is the large polypeptide subunit of an antibody (immunoglobulin). In the human genome, the IgH gene loci are on chromosome 14.
A typical antibody is composed of two immunoglobulin (Ig) heavy chains and two Ig light chains. Several different types of heavy chain exist that define the class or isotype of an antibody. These heavy chain types vary between different animals. All heavy chains contain a series of immunoglobulin domains, usually with one variable domain (VH) that is important for binding antigen and several constant domains (CH1, CH2, etc.). Production of a viable heavy chain is a key step in B cell maturation. If the heavy chain is able to bind to a surrogate light chain and move to the plasma membrane, then the developing B cell can begin producing its light chain.
The heavy chain does not always have to bind to a light chain. Pre-B lymphocytes can synthesize heavy chain in the absence of light chain, which then can allow the heavy chain to bind to a heavy-chain binding protein.
In mammals
Classes
There are five types of mammalian immunoglobulin heavy chain: γ, δ, α, μ and ε. They define classes of immunoglobulins: IgG, IgD, IgA, IgM and IgE, respectively.
Heavy chains α and γ have approximately 450 amino acids.
Heavy chains μ and ε have approximately 550 amino acids.
Regions
Each heavy chain has two regions:
a constant region (which is the same for all immunoglobulins of the same class but differs between classes).
Heavy chains γ, α and δ have a constant region composed of three tandem (in a line next to each other) immunoglobulin domains but also have a hinge region for added flexibility.
Heavy chains μ and ε have a constant region composed of four domains.
a variable region that differs between different B cells, but is the same for all immunoglobulins produced by the same B cell or B cell clone. The variable domain of any heavy chain is composed of a single immunoglobulin domain. These domains are about 110 amino acids long.
Cows
Cows, specifically Bos taurus, show a variation on the general mammalian theme in which the heavy chain CDR H3 region has adapted to produce a divergent repertoire of antibodies which present a "stalk and knob" antigen interaction surface instead of the more familiar bivalent tip surface. The bovine CDR is unusually long and contains unique sequence attributes which support the production of paired cysteine residues during somatic hypermutation. Thus, where in humans the somatic hypermutation step targets the V(D)J recombination process, the target in cows is on the creation of diverse disulfide bonds and the generation of unique sets of loops which interact with antigen. A speculated evolutionary driver for this variation is the presence of a vastly more diverse microbial environment in the digestive system of the cow as a consequence of their being ruminants.
In fish
Jawed fish appear to be the most primitive animals that are able to make antibodies like those described for mammals. However, fish do not have the same repertoire of antibodies that mammals possess. Three distinct Ig heavy chains have so far been identified in bony fish.
The first identified was the μ (or mu) heavy chain that is present in all jawed fish and is the heavy chain for what is thought to be the primordial immunoglobulin. The resulting antibody, IgM, is secreted as a tetramer in teleost fish instead of the typical pentamer found in mammals and sharks.
The heavy chain (δ) for IgD was identified initially from the channel catfish and Atlantic salmon and is now well documented for many teleost fish.
The third teleost Ig heavy chain gene was identified very recently and does not resemble any of the heavy chains so far described for mammals. This heavy chain, identified in both rainbow trout (τ) and zebrafish (ζ), could potentially form a distinct antibody isotype (IgT or IgZ) that may precede IgM in evolutionary terms.
Similar to the situation observed for bony fish, three distinct Ig heavy chain isotypes have been identified in cartilaginous fish. With the exception of μ, these Ig heavy chain isotypes appear to be unique to cartilaginous fish. The resulting antibodies are designated IgW (also called IgX or IgNARC) and IgNAR (immunoglobulin new antigen receptor). The latter type is a heavy-chain antibody, an antibody lacking light chains, and can be used to produce single-domain antibodies, which are essentially the variable domain (VNAR) of an IgNAR. Shark single domain antibodies (VNARs) to tumor or viral antigens can be isolated from a large naïve nurse shark VNAR library using phage display technology.
IgW has now also been found in the group of lobe finned fishes including the coelacanth and lungfish. The IgW1 and IgW2 in coelacanth has a usual (VD)n-Jn-C structure as well as having a large number of constant domains.
In amphibians
Frogs can synthesize IgX and IgY.
See also
Heavy-chain antibody
References
External links
Educational Resource for Heavy Chain Analysis
Immune system | Immunoglobulin heavy chain | [
"Biology"
] | 1,151 | [
"Immune system",
"Organ systems"
] |
2,980,541 | https://en.wikipedia.org/wiki/Clifford%27s%20theorem%20on%20special%20divisors | In mathematics, Clifford's theorem on special divisors is a result of W. K. Clifford on algebraic curves, showing the constraints on special linear systems on a curve C.
Statement
A divisor on a Riemann surface C is a formal sum of points P on C with integer coefficients. One considers a divisor as a set of constraints on meromorphic functions in the function field of C, defining L(D) as the vector space of functions having poles only at points of D with positive coefficient, at most as bad as the coefficient indicates, and having zeros at points of D with negative coefficient, with at least that multiplicity. The dimension of L(D) is finite, and denoted ℓ(D). The linear system of divisors attached to D is the corresponding projective space, of dimension ℓ(D) − 1.
The other significant invariant of D is its degree d, which is the sum of all its coefficients.
A divisor is called special if ℓ(K − D) > 0, where K is the canonical divisor.
Clifford's theorem states that for an effective special divisor D, one has:
ℓ(D) − 1 ≤ d/2,
and that equality holds only if D is zero or a canonical divisor, or if C is a hyperelliptic curve and D is linearly equivalent to an integral multiple of a hyperelliptic divisor.
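As a quick consistency check (using standard Riemann–Roch facts not restated above), the canonical divisor itself attains equality in this bound:

```latex
% For the canonical divisor D = K on a curve of genus g:
\deg K = 2g-2, \qquad \ell(K) = g
\quad\Longrightarrow\quad \ell(K) - 1 \;=\; g-1 \;=\; \tfrac{1}{2}\deg K ,
% so K attains equality, one of the cases allowed by the theorem.
```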
The Clifford index of C is then defined as the minimum of d − 2(ℓ(D) − 1), taken over all special divisors (except canonical and trivial), and Clifford's theorem states this is non-negative. It can be shown that the Clifford index for a generic curve of genus g is equal to ⌊(g − 1)/2⌋.
The Clifford index measures how far the curve is from being hyperelliptic. It may be thought of as a refinement of the gonality: in many cases the Clifford index is equal to the gonality minus 2.
Green's conjecture
A conjecture of Mark Green states that the Clifford index for a curve over the complex numbers that is not hyperelliptic should be determined by the extent to which C as canonical curve has linear syzygies. In detail, one defines the invariant a(C) in terms of the minimal free resolution of the homogeneous coordinate ring of C in its canonical embedding, as the largest index i for which the graded Betti number βi, i + 2 is zero. Green and Robert Lazarsfeld showed that a(C) + 1 is a lower bound for the Clifford index, and Green's conjecture states that equality always holds. There are numerous partial results.
Claire Voisin was awarded the Ruth Lyttle Satter Prize in Mathematics for her solution of the generic case of Green's conjecture in two papers. The case of Green's conjecture for generic curves had attracted a huge amount of effort by algebraic geometers over twenty years before finally being laid to rest by Voisin. The conjecture for arbitrary curves remains open.
Notes
References
External links
Algebraic curves
Theorems in algebraic geometry
Unsolved problems in geometry | Clifford's theorem on special divisors | [
"Mathematics"
] | 599 | [
"Theorems in algebraic geometry",
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Theorems in geometry",
"Mathematical problems"
] |
2,981,465 | https://en.wikipedia.org/wiki/Germane | Germane is the chemical compound with the formula GeH4, and the germanium analogue of methane. It is the simplest germanium hydride and one of the most useful compounds of germanium. Like the related compounds silane and methane, germane is tetrahedral. It burns in air to produce GeO2 and water. Germane is a group 14 hydride.
Occurrence
Germane has been detected in the atmosphere of Jupiter.
Synthesis
Germane is typically prepared by reduction of germanium oxides, notably germanates, with hydride reagents such as sodium borohydride, potassium borohydride, lithium borohydride, lithium aluminium hydride, and sodium aluminium hydride. The reaction with borohydrides is catalyzed by various acids and can be carried out in either aqueous or organic solvent. On a laboratory scale, germane can be prepared by the reaction of Ge(IV) compounds with these hydride reagents. A typical synthesis involved the reaction of sodium germanate with potassium borohydride.
NaHGeO3 + KBH4 + H2O → KGeH3 + KB(OH)4
KGeH3 + HO2CCH3 → GeH4 + KO2CCH3
Other methods for the synthesis of germane include electrochemical reduction and a plasma-based method. The electrochemical reduction method involves applying voltage to a germanium metal cathode immersed in an aqueous electrolyte solution and an anode counter-electrode composed of a metal such as molybdenum or cadmium. In this method, germane and hydrogen gases evolve from the cathode while the anode reacts to form solid molybdenum oxide or cadmium oxides. The plasma synthesis method involves bombarding germanium metal with hydrogen atoms (H) that are generated using a high frequency plasma source to produce germane and digermane.
Reactions
Germane is weakly acidic. In liquid ammonia GeH4 is ionised forming NH4+ and GeH3−. With alkali metals in liquid ammonia GeH4 reacts to give white crystalline MGeH3 compounds. The potassium (potassium germyl or potassium trihydrogen germanide KGeH3) and rubidium compounds (rubidium germyl or rubidium trihydrogen germanide RbGeH3) have the sodium chloride structure implying a free rotation of the trihydrogen germanide anion GeH3−, the caesium compound, caesium germyl or caesium trihydrogen germanide CsGeH3 in contrast has the distorted sodium chloride structure of TlI.
Use in semiconductor industry
The gas decomposes near 600K (327°C; 620°F) to germanium and hydrogen. Because of its thermal lability, germane is used in the semiconductor industry for the epitaxial growth of germanium by MOVPE or chemical beam epitaxy. Organogermanium precursors (e.g. isobutylgermane, alkylgermanium trichlorides, and dimethylaminogermanium trichloride) have been examined as less hazardous liquid alternatives to germane for deposition of Ge-containing films by MOVPE.
Safety
Germane is a highly flammable, potentially pyrophoric, and a highly toxic gas. In 1970, the American Conference of Governmental Industrial Hygienists (ACGIH) published the latest changes and set the occupational exposure threshold limit value at 0.2 ppm for an 8-hour time weighted average.
The LC50 for rats at 1 hour of exposure is 622 ppm. Inhalation or exposure may result in malaise, headache, dizziness, fainting, dyspnea, nausea, vomiting, kidney injury, and hemolytic effects.
The US Department of Transportation hazard class is 2.3 Poisonous Gas.
References
External links
Metaloids (manufacturer) datasheet
Arkonic Specialty Gases China (manufacturer) datasheet
Licensintorg Russia (process technology sale)
Honjo Chemical Japan (manufacturer)
Praxair datasheet
Air liquide gas encyclopedia entry
CDC - NIOSH Pocket Guide to Chemical Hazards
Voltaix (manufacturer) datasheet
Foshan Huate Gas Co., Ltd. (manufacturer)
Horst Technologies, Russia (manufacturer)
Germanium(IV) compounds
Metal hydrides
Industrial gases
Pyrophoric materials | Germane | [
"Chemistry",
"Technology"
] | 925 | [
"Inorganic compounds",
"Reducing agents",
"Metal hydrides",
"Industrial gases",
"Chemical process engineering"
] |
2,983,356 | https://en.wikipedia.org/wiki/Bred%20vector | In applied mathematics, bred vectors are perturbations related to Lyapunov vectors that capture fast-growing dynamical instabilities of the solution of a numerical model. They are used, for example, as initial perturbations for ensemble forecasting in numerical weather prediction. They were introduced by Zoltan Toth and Eugenia Kalnay.
Method
Bred vectors are created by adding initially random perturbations to a nonlinear model. The control (unperturbed) and the perturbed models are integrated in time, and periodically the control solution is subtracted from the perturbed solution. This difference is the bred vector. The vector is scaled to be the same size as the initial perturbation and is then added back to the control to create the new perturbed initial condition. After a short transient period, this "breeding" process creates bred vectors dominated by the naturally fastest-growing instabilities of the evolving control solution.
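A minimal sketch of the breeding cycle described above, using the Lorenz-63 system as a stand-in nonlinear model (the model choice, integrator, step sizes, and parameter values are illustrative assumptions, not from the original method):

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def breed(x0, pert_size=1e-3, cycles=50, steps_per_cycle=8,
          rng=np.random.default_rng(0)):
    """Return the bred vector after repeated grow-and-rescale cycles."""
    control = x0.copy()
    perturbed = x0 + pert_size * rng.standard_normal(3)
    for _ in range(cycles):
        for _ in range(steps_per_cycle):       # integrate both runs forward
            control = lorenz63_step(control)
            perturbed = lorenz63_step(perturbed)
        bv = perturbed - control                # difference = bred vector
        bv *= pert_size / np.linalg.norm(bv)    # rescale to the initial size
        perturbed = control + bv                # re-perturb the control run
    return bv

bred_vector = breed(np.array([1.0, 1.0, 1.0]))
```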
References
Functional analysis
Mathematical physics | Bred vector | [
"Physics",
"Mathematics"
] | 204 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical analysis stubs",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Mathematical relations",
"Mathematical physics"
] |
6,959,617 | https://en.wikipedia.org/wiki/Nominal%20level | Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range. The nominal level is the level that these devices were designed to operate at, for best dynamic range and adequate headroom. When a signal is chained with improper gain staging through many devices, clipping may occur or the system may operate with reduced dynamic range.
In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. The measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor.
Standards
VU meters are designed to represent the perceived loudness of a passage of music, or other audio content, measuring in volume units. Devices are designed so that the best signal quality is obtained when the meter rarely goes above nominal. The markings are often in dB instead of "VU", and the reference level should be defined in the device's manual. In most professional recording and sound reinforcement equipment, the nominal level is +4 dBu. In semi-professional and domestic equipment, the nominal level is usually −10 dBV. This difference is due to the cost required to create larger power supplies and output higher levels.
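A small sketch of the arithmetic behind these two reference levels (assuming the usual references of about 0.775 V RMS for dBu and 1 V RMS for dBV; the variable names are ours):

```python
import math

DBU_REF = 0.775   # volts RMS, the dBu reference (approximately sqrt(0.6))
DBV_REF = 1.0     # volts RMS, the dBV reference

def db_to_volts(level_db, ref):
    """Convert a level in dB (relative to ref volts RMS) to volts RMS."""
    return ref * 10 ** (level_db / 20)

pro = db_to_volts(+4, DBU_REF)        # professional nominal level, +4 dBu
consumer = db_to_volts(-10, DBV_REF)  # consumer nominal level, -10 dBV

print(f"+4 dBu  = {pro:.3f} V RMS")       # about 1.23 V
print(f"-10 dBV = {consumer:.3f} V RMS")  # about 0.316 V
print(f"difference = {20 * math.log10(pro / consumer):.1f} dB")  # about 11.8 dB
```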
In broadcasting equipment, this is termed the Maximum Permitted Level, which is defined by European Broadcasting Union standards. These devices use peak programme meters instead of VU meters, which gives the reading a different meaning.
"Mic level" is sometimes defined as −60 dBV, though levels from microphones vary widely.
In video systems, nominal levels are 1 VP-P for synched systems, such as baseband composite video, and 0.7 VP-P for systems without sync. Note that these levels are measured peak-to-peak, while audio levels are time averages.
See also
Alignment level
Transmission level point
References
External links
Nominal Level — Sweetwater glossary
Level Headed — Nominal Level (explained) plus an SV-3700 modification
Signal processing
Sound | Nominal level | [
"Technology",
"Engineering"
] | 495 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
6,959,754 | https://en.wikipedia.org/wiki/Locally%20connected%20space | In topology and other branches of mathematics, a topological space X is
locally connected if every point admits a neighbourhood basis consisting of open connected sets.
As a stronger notion, the space X is locally path connected if every point admits a neighbourhood basis consisting of open path connected sets.
Background
Throughout the history of topology, connectedness and compactness have been two of the most
widely studied topological properties. Indeed, the study of these properties even among subsets of Euclidean space, and the recognition of their independence from the particular form of the Euclidean metric, played a large role in clarifying the notion of a topological property and thus a topological space. However, whereas the structure of compact subsets of Euclidean space was understood quite early on via the Heine–Borel theorem, connected subsets of Rn (for n > 1) proved to be much more complicated. Indeed, while any compact Hausdorff space is locally compact, a connected space—and even a connected subset of the Euclidean plane—need not be locally connected (see below).
This led to a rich vein of research in the first half of the twentieth century, in which topologists studied the implications between increasingly subtle and complex variations on the notion of a locally connected space. As an example, the notion of local connectedness im kleinen at a point and its relation to local connectedness will be considered later on in the article.
In the latter part of the twentieth century, research trends shifted to more intense study of spaces like manifolds, which are locally well understood (being locally homeomorphic to Euclidean space) but have complicated global behavior. By this it is meant that although the basic point-set topology of manifolds is relatively simple (as manifolds are essentially metrizable according to most definitions of the concept), their algebraic topology is far more complex. From this modern perspective, the stronger property of local path connectedness turns out to be more important: for instance, in order for a space to admit a universal cover it must be connected and locally path connected.
A space is locally connected if and only if for every open set U, the connected components of U (in the subspace topology) are open. It follows, for instance, that a continuous function from a locally connected space to a totally disconnected space must be locally constant. In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general: for instance Cantor space is totally disconnected but not discrete.
Definitions
Let X be a topological space, and let x be a point of X.
A space X is called locally connected at x if every neighborhood of x contains a connected open neighborhood of x, that is, if the point x has a neighborhood base consisting of connected open sets. A locally connected space is a space that is locally connected at each of its points.
Local connectedness does not imply connectedness (consider two disjoint open intervals in R, for example); and connectedness does not imply local connectedness (see the topologist's sine curve).
A space X is called locally path connected at x if every neighborhood of x contains a path connected open neighborhood of x, that is, if the point x has a neighborhood base consisting of path connected open sets. A locally path connected space is a space that is locally path connected at each of its points.
Locally path connected spaces are locally connected. The converse does not hold (see the lexicographic order topology on the unit square).
Connectedness im kleinen
A space is called connected im kleinen at x or weakly locally connected at x if every neighborhood of x contains a connected (not necessarily open) neighborhood of x, that is, if the point x has a neighborhood base consisting of connected sets. A space is called weakly locally connected if it is weakly locally connected at each of its points; as indicated below, this concept is in fact the same as being locally connected.
A space that is locally connected at x is connected im kleinen at x. The converse does not hold, as shown for example by a certain infinite union of decreasing broom spaces that is connected im kleinen at a particular point but not locally connected at that point. However, if a space is connected im kleinen at each of its points, it is locally connected.
A space is said to be path connected im kleinen at x if every neighborhood of x contains a path connected (not necessarily open) neighborhood of x, that is, if the point x has a neighborhood base consisting of path connected sets.
A space that is locally path connected at x is path connected im kleinen at x. The converse does not hold, as shown by the same infinite union of decreasing broom spaces as above. However, if a space is path connected im kleinen at each of its points, it is locally path connected.
First examples
For any positive integer n, the Euclidean space Rn is locally path connected, thus locally connected; it is also connected.
More generally, every locally convex topological vector space is locally connected, since each point has a local base of convex (and hence connected) neighborhoods.
A subspace of the real line consisting of two disjoint closed intervals, such as $[0,1] \cup [2,3]$, is locally path connected but not connected.
The topologist's sine curve is a subspace of the Euclidean plane that is connected, but not locally connected.
The space $\mathbb{Q}$ of rational numbers endowed with the standard Euclidean topology is neither connected nor locally connected.
The comb space is path connected but not locally path connected, and not even locally connected.
A countably infinite set endowed with the cofinite topology is locally connected (indeed, hyperconnected) but not locally path connected.
The lexicographic order topology on the unit square is connected and locally connected, but not path connected, nor locally path connected.
The Kirch space is connected and locally connected, but not path connected, and not path connected im kleinen at any point. It is in fact totally path disconnected.
A first-countable Hausdorff space $(X, \tau)$ is locally path-connected if and only if $\tau$ is equal to the final topology on $X$ induced by the set of all continuous paths $[0,1] \to X$.
Properties
For the non-trivial direction, assume $X$ is weakly locally connected. To show it is locally connected, it is enough to show that the connected components of open sets are open.
Let $U$ be open in $X$ and let $C$ be a connected component of $U$. Let $x$ be an element of $C$. Then $U$ is a neighborhood of $x$, so there is a connected neighborhood $V$ of $x$ contained in $U$. Since $V$ is connected and contains $x$, $V$ must be a subset of $C$ (the connected component containing $x$). Therefore $x$ is an interior point of $C$. Since $x$ was an arbitrary point of $C$, $C$ is open in $X$. Therefore, $X$ is locally connected.
Local connectedness is, by definition, a local property of topological spaces, i.e., a topological property P such that a space X possesses property P if and only if each point x in X admits a neighborhood base of sets that have property P. Accordingly, all the "metaproperties" held by a local property hold for local connectedness. In particular:
A space is locally connected if and only if it admits a base of (open) connected subsets.
The disjoint union $\coprod_i X_i$ of a family $\{X_i\}$ of spaces is locally connected if and only if each $X_i$ is locally connected. In particular, since a single point is certainly locally connected, it follows that any discrete space is locally connected. On the other hand, a discrete space is totally disconnected, so is connected only if it has at most one point.
Conversely, a totally disconnected space is locally connected if and only if it is discrete. This can be used to explain the aforementioned fact that the rational numbers are not locally connected.
A nonempty product space $\prod_i X_i$ is locally connected if and only if each $X_i$ is locally connected and all but finitely many of the $X_i$ are connected.
Every hyperconnected space is locally connected, and connected.
Components and path components
The following result follows almost immediately from the definitions but will be quite useful:
Lemma: Let $X$ be a space, and $\{Y_i\}$ a family of subsets of $X$. Suppose that $\bigcap_i Y_i$ is nonempty. Then, if each $Y_i$ is connected (respectively, path connected), the union $\bigcup_i Y_i$ is connected (respectively, path connected).
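The Lemma is standard; one common argument is sketched below in LaTeX. The names $Y$, $p$, $U$ and $V$ are introduced only for this sketch.

```latex
% Sketch of a standard argument; Y, p, U, V are names introduced only for this sketch.
\noindent\textit{Connectedness.} Write $Y=\bigcup_i Y_i$ and fix $p\in\bigcap_i Y_i$.
Suppose $Y=U\cup V$ with $U,V$ open in $Y$, disjoint and nonempty, say $p\in U$.
For each $i$ the sets $Y_i\cap U$ and $Y_i\cap V$ are open in $Y_i$ and disjoint,
and $Y_i\cap U$ contains $p$; since $Y_i$ is connected this forces $Y_i\cap V=\varnothing$,
hence $Y_i\subseteq U$. Thus $Y\subseteq U$ and $V=\varnothing$, a contradiction, so $Y$ is connected.

\medskip
\noindent\textit{Path connectedness.} Given $a\in Y_i$ and $b\in Y_j$, concatenate a path
from $a$ to $p$ inside $Y_i$ with a path from $p$ to $b$ inside $Y_j$; the concatenation
is a path from $a$ to $b$ in $Y$.
```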
Now consider two relations on a topological space $X$: for $x, y \in X$, write:
$x \equiv_c y$ if there is a connected subset of $X$ containing both $x$ and $y$; and
$x \equiv_{pc} y$ if there is a path connected subset of $X$ containing both $x$ and $y$.
Evidently both relations are reflexive and symmetric. Moreover, if $x$ and $y$ are contained in a connected (respectively, path connected) subset $A$ and $y$ and $z$ are contained in a connected (respectively, path connected) subset $B$, then the Lemma implies that $A \cup B$ is a connected (respectively, path connected) subset containing $x$, $y$ and $z$. Thus each relation is an equivalence relation, and defines a partition of $X$ into equivalence classes. We consider these two partitions in turn.
For $x$ in $X$, the set $C_x$ of all points $y$ such that $y \equiv_c x$ is called the connected component of $x$. The Lemma implies that $C_x$ is the unique maximal connected subset of $X$ containing $x$. Since the closure of $C_x$ is also a connected subset containing $x$, it follows that $C_x$ is closed.
If $X$ has only finitely many connected components, then each component is the complement of a finite union of closed sets and therefore open. In general, the connected components need not be open, since, e.g., there exist totally disconnected spaces (i.e., $C_x = \{x\}$ for all points $x$) that are not discrete, like Cantor space. However, the connected components of a locally connected space are also open, and thus are clopen sets. It follows that a locally connected space $X$ is a topological disjoint union of its distinct connected components. Conversely, if for every open subset $U$ of $X$, the connected components of $U$ are open, then $X$ admits a base of connected sets and is therefore locally connected.
Similarly, for $x$ in $X$, the set $PC_x$ of all points $y$ such that $y \equiv_{pc} x$ is called the path component of $x$. As above, $PC_x$ is also the union of all path connected subsets of $X$ that contain $x$, so by the Lemma $PC_x$ is itself path connected. Because path connected sets are connected, we have $PC_x \subseteq C_x$ for all $x$.
However the closure of a path connected set need not be path connected: for instance, the topologist's sine curve $C$ is the closure of the open subset $U$ consisting of all points $(x, \sin(1/x))$ with $x > 0$, and $U$, being homeomorphic to an interval on the real line, is certainly path connected. Moreover, the path components of the topologist's sine curve $C$ are $U$, which is open but not closed, and $C \setminus U = \{0\} \times [-1,1]$, which is closed but not open.
A space is locally path connected if and only if for all open subsets $U$, the path components of $U$ are open. Therefore the path components of a locally path connected space $X$ give a partition of $X$ into pairwise disjoint open sets. It follows that an open connected subspace of a locally path connected space is necessarily path connected. Moreover, if a space is locally path connected, then it is also locally connected, so for all $x$, $C_x$ is connected and open, hence path connected, that is, $C_x = PC_x$. That is, for a locally path connected space the components and path components coincide.
Examples
The set $I \times I$ (where $I = [0,1]$) in the dictionary order topology has exactly one component (because it is connected) but has uncountably many path components. Indeed, any set of the form $\{a\} \times I$ is a path component for each $a$ belonging to $I$.
Let $f$ be a continuous map from $\mathbb{R}$ to $\mathbb{R}_\ell$ (the real line equipped with the lower limit topology). Since $\mathbb{R}$ is connected, and the image of a connected space under a continuous map must be connected, the image of $\mathbb{R}$ under $f$ must be connected. Therefore, the image of $\mathbb{R}$ under $f$ must be a subset of a component of $\mathbb{R}_\ell$. Since this image is nonempty, the only continuous maps from $\mathbb{R}$ to $\mathbb{R}_\ell$ are the constant maps. In fact, any continuous map from a connected space to a totally disconnected space must be constant.
Quasicomponents
Let $X$ be a topological space. We define a third relation on $X$: $x \equiv_{qc} y$ if there is no separation of $X$ into open sets $A$ and $B$ such that $x$ is an element of $A$ and $y$ is an element of $B$. This is an equivalence relation on $X$ and the equivalence class $QC_x$ containing $x$ is called the quasicomponent of $x$.
$QC_x$ can also be characterized as the intersection of all clopen subsets of $X$ that contain $x$. Accordingly $QC_x$ is closed; in general it need not be open.
Evidently $C_x \subseteq QC_x$ for all $x$. Overall we have the following containments among path components, components and quasicomponents at $x$: $PC_x \subseteq C_x \subseteq QC_x$.
If $X$ is locally connected, then, as above, $C_x$ is a clopen set containing $x$, so $QC_x \subseteq C_x$ and thus $C_x = QC_x$. Since local path connectedness implies local connectedness, it follows that at all points $x$ of a locally path connected space we have $PC_x = C_x = QC_x$.
Another class of spaces for which the quasicomponents agree with the components is the class of compact Hausdorff spaces.
Examples
An example of a space whose quasicomponents are not equal to its components is a sequence with a double limit point. This space is totally disconnected, but both limit points lie in the same quasicomponent, because any clopen set containing one of them must contain a tail of the sequence, and thus the other point too.
There is also a locally compact and Hausdorff space containing two different components that lie in the same quasicomponent.
The Arens–Fort space is not locally connected, but nevertheless the components and the quasicomponents coincide: indeed $C_x = QC_x$ for all points $x$.
See also
It is conjectured that the Mandelbrot set is locally connected
Notes
References
Further reading
For Hausdorff spaces, it is shown that any continuous function from a connected locally connected space into a connected space with a dispersion point is constant.
Articles containing proofs
Properties of topological spaces
General topology | Locally connected space | [
"Mathematics"
] | 2,763 | [
"General topology",
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Articles containing proofs"
] |
6,961,771 | https://en.wikipedia.org/wiki/Principal%20ideal%20ring | In mathematics, a principal right (left) ideal ring is a ring R in which every right (left) ideal is of the form xR (Rx) for some element x of R. (The right and left ideals of this form, generated by one element, are called principal ideals.) When this is satisfied for both left and right ideals, such as the case when R is a commutative ring, R can be called a principal ideal ring, or simply principal ring.
If only the finitely generated right ideals of R are principal, then R is called a right Bézout ring. Left Bézout rings are defined similarly. These conditions are studied in domains as Bézout domains.
A principal ideal ring which is also an integral domain is said to be a principal ideal domain (PID). In this article the focus is on the more general concept of a principal ideal ring which is not necessarily a domain.
General properties
If R is a principal right ideal ring, then it is certainly a right Noetherian ring, since every right ideal is finitely generated. It is also a right Bézout ring since all finitely generated right ideals are principal. Indeed, it is clear that principal right ideal rings are exactly the rings which are both right Bézout and right Noetherian.
Principal right ideal rings are closed under finite direct products. If $R = R_1 \times \cdots \times R_n$, then each right ideal of $R$ is of the form $A = A_1 \times \cdots \times A_n$, where each $A_i$ is a right ideal of $R_i$. If all the $R_i$ are principal right ideal rings, then $A_i = x_i R_i$, and then it can be seen that $A = (x_1, \ldots, x_n)R$. Without much more effort, it can be shown that right Bézout rings are also closed under finite direct products.
Principal right ideal rings and right Bézout rings are also closed under quotients, that is, if I is a proper ideal of principal right ideal ring R, then the quotient ring R/I is also principal right ideal ring. This follows readily from the isomorphism theorems for rings.
All properties above have left analogues as well.
Commutative examples
1. The ring of integers: $\mathbb{Z}$.
2. The integers modulo $n$: $\mathbb{Z}/n\mathbb{Z}$.
3. Let $R_1, \ldots, R_n$ be rings and $R = R_1 \times \cdots \times R_n$. Then $R$ is a principal ring if and only if $R_i$ is a principal ring for all $i$.
4. The localization of a principal ring at any multiplicative subset is again a principal ring. Similarly, any quotient of a principal ring is again a principal ring.
5. Let $R$ be a Dedekind domain and $I$ be a nonzero ideal of $R$. Then the quotient $R/I$ is a principal ring. Indeed, we may factor $I$ as a product of prime powers, $I = \prod_{i=1}^{n} P_i^{a_i}$, and by the Chinese Remainder Theorem $R/I \cong \prod_{i=1}^{n} R/P_i^{a_i}$, so it suffices to see that each $R/P_i^{a_i}$ is a principal ring. But $R/P_i^{a_i}$ is isomorphic to the quotient $R_{P_i}/P_i^{a_i} R_{P_i}$ of the discrete valuation ring $R_{P_i}$ and, being a quotient of a principal ring, is itself a principal ring.
6. Let $k$ be a finite field and put $A = k[x,y]$, $\mathfrak{m} = (x,y)$, and $R = A/\mathfrak{m}^2$. Then $R$ is a finite local ring which is not principal.
7. Let $X$ be a finite set. Then $(\mathcal{P}(X), \Delta, \cap)$ forms a commutative principal ideal ring with unity, where $\Delta$ represents set symmetric difference and $\mathcal{P}(X)$ represents the powerset of $X$. If $X$ has at least two elements, then the ring also has zero divisors. If $I$ is an ideal, then $I = \left(\bigcup_{A \in I} A\right)$, the principal ideal generated by the union of its members. If instead $X$ is infinite, the ring is not principal: take the ideal generated by the finite subsets of $X$, for example. (A small computational illustration of this example is sketched after the list.)
8. Galois rings are commutative local PIRs. They are constructed from the integers modulo $p^n$ in essentially the same way that finite fields are constructed as extensions of the integers modulo $p$, and the maximal ideal is generated by $p$.
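As an informal illustration of example 7 above, the Python sketch below builds the ideal generated by a few subsets of a three-element set under symmetric difference and intersection and checks that it equals the principal ideal generated by the union of the generators. The set $X$, the generator choice, and the helper names are arbitrary choices made only for this sketch.

```python
from itertools import combinations

# Illustrative check of example 7: subsets of a finite set X form a ring under
# symmetric difference (addition) and intersection (multiplication), and every
# ideal is principal, generated by the union of its members.
X = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def ideal_generated_by(gens):
    """Smallest set of subsets containing gens, closed under addition (symmetric
    difference) and under multiplication by arbitrary ring elements (intersection)."""
    ideal = {frozenset()} | set(gens)
    changed = True
    while changed:
        changed = False
        new = set()
        for a in ideal:
            for b in ideal:
                new.add(a ^ b)            # closed under addition
            for r in powerset(X):
                new.add(a & r)            # absorbs multiplication by ring elements
        if not new <= ideal:
            ideal |= new
            changed = True
    return ideal

gens = [frozenset({1}), frozenset({2})]
I = ideal_generated_by(gens)
union = frozenset().union(*gens)
# The ideal coincides with the powerset of the union of its generators,
# i.e. it is the principal ideal generated by that union.
assert I == set(powerset(union))
print(sorted(map(sorted, I)))   # [[], [1], [1, 2], [2]]
```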
Structure theory for commutative PIR's
The principal rings constructed in Example 5 above are always Artinian rings; in particular they are isomorphic to a finite direct product of principal Artinian local rings.
A local Artinian principal ring is called a special principal ring and has an extremely simple ideal structure: there are only finitely many ideals, each of which is a power of the maximal ideal. For this reason, special principal rings are examples of uniserial rings.
The following result gives a complete classification of principal rings in terms of special principal rings and principal ideal domains.
Zariski–Samuel theorem: Let R be a principal ring. Then $R$ can be written as a direct product $R \cong R_1 \times \cdots \times R_n$, where each $R_i$ is either a principal ideal domain or a special principal ring.
The proof applies the Chinese Remainder theorem to a minimal primary decomposition of the zero ideal.
There is also the following result, due to Hungerford:
Theorem (Hungerford): Let R be a principal ring. Then $R$ can be written as a direct product $R \cong R_1 \times \cdots \times R_n$, where each $R_i$ is a quotient of a principal ideal domain.
The proof of Hungerford's theorem employs Cohen's structure theorems for complete local rings.
Arguing as in Example 3. above and using the Zariski-Samuel theorem, it is easy to check that Hungerford's theorem is equivalent to the statement that any special principal ring is the quotient of a discrete valuation ring.
Noncommutative examples
Every semisimple ring R which is not just a product of fields is a noncommutative right and left principal ideal ring (it need not be a domain, as the example of n x n matrices over a field shows). Every right and left ideal is a direct summand of R, and so is of the form eR or Re where e is an idempotent of R. Paralleling this example, von Neumann regular rings are seen to be both right and left Bézout rings.
If $D$ is a division ring and $\sigma : D \to D$ is a ring endomorphism which is not an automorphism, then the skew polynomial ring $D[x, \sigma]$ is known to be a principal left ideal domain which is not right Noetherian, and hence it cannot be a principal right ideal ring. This shows that even for domains, principal left and principal right ideal rings are different.
References
Commutative algebra
Ring theory | Principal ideal ring | [
"Mathematics"
] | 1,245 | [
"Fields of abstract algebra",
"Commutative algebra",
"Ring theory"
] |
6,962,075 | https://en.wikipedia.org/wiki/Ruedi%20Aebersold | Rudolf Aebersold (better known as Ruedi Aebersold; born September 12, 1954) is a Swiss biologist, regarded as a pioneer in the fields of proteomics and systems biology. He has primarily researched techniques for measuring proteins in complex samples, in many cases via mass spectrometry. Ruedi Aebersold is a professor of Systems biology at the Institute of Molecular Systems Biology (IMSB) in ETH Zurich. He was one of the founders of the Institute for Systems Biology in Seattle, Washington, United States where he previously had a research group.
Aebersold is known for the development and application of targeted proteomics techniques in the field of biomedical research, in order to understand the function, interaction and localization of each protein in the cell and its changes in disease states. To this end, Ruedi Aebersold has made significant contributions to the development and application of targeted proteomics methods, including selected reaction monitoring and data-independent acquisition. Ruedi Aebersold is also recognized for his contributions to the development of standard formats and open source software for the analysis and storage of mass spectrometry and proteomics data, and he is one of the inventors of the Isotope-Coded Affinity Tag (ICAT) technique for quantitative proteomics, a technique that measures the relative quantities of proteins between two samples by using tags containing stable isotopes of different masses.
Aebersold is co-founder and scientific advisor of the companies ProteoMediX and Biognosys.
Honors and awards
2005 – HUPO Award
2006 – Buchner Medal
2008 – In recognition of his contribution to the field of protein sciences and proteomics the Association of Biomolecular Resource Facilities (ABRF) selected him for the ABRF 2008 Award.
2010 – Herbert A. Sober Lectureship
2010 – Otto Naegeli Prize
2012 – Thomson Medal Award
In 2014 he became a member of the German Academy of Sciences Leopoldina.
2015 – ranked #1 on the 2015 list of "most influential people in the analytical sciences" (by the Analytical Scientist)
2018 – Bijvoet Medal of the Bijvoet Center for Biomolecular Research of Utrecht University
2020 – Marcel Benoist Prize
References
1954 births
Living people
Members of the European Molecular Biology Organization
Swiss biologists
Systems biologists
Academic staff of ETH Zurich
Mass spectrometrists
Bijvoet Medal recipients
Thomson Medal recipients
Members of the German National Academy of Sciences Leopoldina | Ruedi Aebersold | [
"Physics",
"Chemistry"
] | 502 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
6,962,132 | https://en.wikipedia.org/wiki/Grotrian%20diagram | A Grotrian diagram, or term diagram, shows the allowed electronic transitions between the energy levels of atoms. They can be used for one-electron and multi-electron atoms. They take into account the specific selection rules related to changes in angular momentum of the electron. The diagrams are named after Walter Grotrian, who introduced them in his 1928 book Graphische Darstellung der Spektren von Atomen und Ionen mit ein, zwei und drei Valenzelektronen ("Graphical representation of the spectra of atoms and ions with one, two and three valence electrons").
See also
Jablonski diagram (for molecules)
References
External links
Hyperphysics: Atomic Energy Level Diagrams
Volumes with Grotrian diagrams of most elements
Atomic energy-level and Grotrian diagrams by Stanley Bashkin and John O. Stoner Jr.
Volume I: Hydrogen - Phosphorus
Volume I: Hydrogen - Phosphorus (Addenda)
Volume III
Volume IV
Grotrian diagrams
Electronic structure of atoms
NIST Atomic Spectra Database Lines Form
Grotrian Diagrams
Diagrams
Spectroscopy
Atomic physics
Photochemistry | Grotrian diagram | [
"Physics",
"Chemistry",
"Astronomy"
] | 225 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
"Astronomy stubs",
" molecular",
"Atomic physics",
"nan",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
6,966,559 | https://en.wikipedia.org/wiki/Nanocrystalline%20material | A nanocrystalline (NC) material is a polycrystalline material with a crystallite size of only a few nanometers. These materials fill the gap between amorphous materials without any long range order and conventional coarse-grained materials. Definitions vary, but nanocrystalline material is commonly defined as a crystallite (grain) size below 100 nm. Grain sizes from 100 to 500 nm are typically considered "ultrafine" grains.
The grain size of a NC sample can be estimated using x-ray diffraction. In materials with very small grain sizes, the diffraction peaks will be broadened. This broadening can be related to a crystallite size using the Scherrer equation (applicable up to ~50 nm), a Williamson-Hall plot, or more sophisticated methods such as the Warren-Averbach method or computer modeling of the diffraction pattern. The crystallite size can be measured directly using transmission electron microscopy.
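As an informal illustration of the Scherrer estimate mentioned above, the short Python sketch below converts an assumed, instrument-corrected peak width into a crystallite size. The shape factor, wavelength, and peak values are placeholder assumptions, not data from any particular measurement.

```python
import numpy as np

# Illustrative Scherrer-equation estimate of crystallite size from XRD peak
# broadening; K, the wavelength, and the peak values below are assumptions.
K = 0.9                      # dimensionless shape factor (commonly ~0.9)
wavelength = 1.5406e-10      # Cu K-alpha wavelength in metres
two_theta_deg = 43.3         # peak position (2-theta) in degrees
fwhm_deg = 0.9               # peak full width at half maximum in degrees (instrument-corrected)

theta = np.radians(two_theta_deg / 2.0)   # Bragg angle in radians
beta = np.radians(fwhm_deg)               # FWHM in radians

crystallite_size = K * wavelength / (beta * np.cos(theta))
print(f"Estimated crystallite size: {crystallite_size * 1e9:.1f} nm")   # roughly 10 nm
```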
Synthesis
Nanocrystalline materials can be prepared in several ways. Methods are typically categorized based on the phase of matter the material transitions through before forming the nanocrystalline final product.
Solid-state processing
Solid-state processes do not involve melting or evaporating the material and are typically done at relatively low temperatures. Examples of solid state processes include mechanical alloying using a high-energy ball mill and certain types of severe plastic deformation processes.
Liquid processing
Nanocrystalline metals can be produced by rapid solidification from the liquid using a process such as melt spinning. This often produces an amorphous metal, which can be transformed into an nanocrystalline metal by annealing above the crystallization temperature.
Vapor-phase processing
Thin films of nanocrystalline materials can be produced using vapor deposition processes such as MOCVD.
Solution processing
Some metals, particularly nickel and nickel alloys, can be made into nanocrystalline foils using electrodeposition.
Mechanical properties
Nanocrystalline materials show exceptional mechanical properties relative to their coarse-grained varieties. Because the volume fraction of grain boundaries in nanocrystalline materials can be as large as 30%, the mechanical properties of nanocrystalline materials are significantly influenced by this amorphous grain boundary phase. For example, the elastic modulus has been shown to decrease by 30% for nanocrystalline metals and more than 50% for nanocrystalline ionic materials. This is because the amorphous grain boundary regions are less dense than the crystalline grains, and thus have a larger volume per atom. Assuming the interatomic potential is the same within the grain boundaries as in the bulk grains, the elastic modulus will be smaller in the grain boundary regions than in the bulk grains. Thus, via the rule of mixtures, a nanocrystalline material will have a lower elastic modulus than its bulk crystalline form.
Nanocrystalline metals
The exceptional yield strength of nanocrystalline metals is due to grain boundary strengthening, as grain boundaries are extremely effective at blocking the motion of dislocations. Yielding occurs when the stress due to dislocation pileup at a grain boundary becomes sufficient to activate slip of dislocations in the adjacent grain. This critical stress increases as the grain size decreases, and these physics are empirically captured by the Hall-Petch relationship,
$$\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}$$
where $\sigma_y$ is the yield stress, $\sigma_0$ is a material-specific constant that accounts for the effects of all other strengthening mechanisms, $k_y$ is a material-specific constant that describes the magnitude of the metal's response to grain size strengthening, and $d$ is the average grain size. Additionally, because nanocrystalline grains are too small to contain a significant number of dislocations, nanocrystalline metals undergo negligible amounts of strain-hardening, and nanocrystalline materials can thus be assumed to behave with perfect plasticity.
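The following Python sketch evaluates the Hall-Petch relationship for a few grain sizes; the friction stress and Hall-Petch coefficient are assumed, order-of-magnitude values for a copper-like FCC metal, chosen only to illustrate the trend.

```python
import numpy as np

# Illustrative Hall-Petch estimate of yield strength vs. grain size; sigma_0 and k_y
# below are assumed, order-of-magnitude values for a copper-like FCC metal.
sigma_0 = 25e6        # friction stress, Pa
k_y = 0.11e6          # Hall-Petch coefficient, Pa * m^0.5

grain_sizes = np.array([10e-6, 1e-6, 100e-9, 30e-9])   # grain size d, in metres

sigma_y = sigma_0 + k_y / np.sqrt(grain_sizes)          # Hall-Petch relationship
for d, s in zip(grain_sizes, sigma_y):
    print(f"d = {d*1e9:8.0f} nm  ->  yield strength ~ {s/1e6:6.0f} MPa")
```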
As the grain size continues to decrease, a critical grain size is reached at which intergranular deformation, i.e. grain boundary sliding, becomes more energetically favorable than intragranular dislocation motion. Below this critical grain size, often referred to as the “reverse” or “inverse” Hall-Petch regime, any further decrease in the grain size weakens the material because an increase in grain boundary area results in increased grain boundary sliding. Chandross & Argibay modeled grain boundary sliding as viscous flow and related the yield strength of the material in this regime to material properties,
namely the enthalpy of fusion, the atomic volume in the amorphous phase, the melting temperature, and the volume fraction of material in the grains versus the grain boundaries; the latter is set by the average grain size $d$ and the grain boundary thickness $\delta$, which is typically on the order of 1 nm. The maximum strength of a metal is given by the intersection of this relation with the Hall-Petch relationship, which typically occurs around a grain size of $d$ = 10 nm for BCC and FCC metals.
Due to the large amount of interfacial energy associated with a large volume fraction of grain boundaries, nanocrystalline metals are thermally unstable. In nanocrystalline samples of low-melting point metals (i.e. aluminum, tin, and lead), the grain size of the samples was observed to double from 10 to 20 nm after 24 hours of exposure to ambient temperatures. Although materials with higher melting points are more stable at room temperatures, consolidating nanocrystalline feedstock into a macroscopic component often requires exposing the material to elevated temperatures for extended periods of time, which will result in coarsening of the nanocrystalline microstructure. Thus, thermally stable nanocrystalline alloys are of considerable engineering interest. Experiments have shown that traditional microstructural stabilization techniques such as grain boundary pinning via solute segregation or increasing solute concentrations have proven successful in some alloy systems, such as Pd-Zr and Ni-W.
Nanocrystalline ceramics
While the mechanical behavior of ceramics is often dominated by flaws, i.e. porosity, instead of grain size, grain-size strengthening is also observed in high-density ceramic specimens. Additionally, nanocrystalline ceramics have been shown to sinter more rapidly than bulk ceramics, leading to higher densities and improved mechanical properties, although extended exposure to the high pressures and elevated temperatures required to sinter the part to full density can result in coarsening of the nanostructure.
The large volume fraction of grain boundaries associated with nanocrystalline materials causes interesting behavior in ceramic systems, such as superplasticity in otherwise brittle ceramics. The large volume fraction of grain boundaries allows for a significant diffusional flow of atoms via Coble creep, analogous to the grain boundary sliding deformation mechanism in nanocrystalline metals. Because the diffusional creep rate scales as $d^{-3}$ and linearly with the grain boundary diffusivity, refining the grain size from 10 μm to 10 nm can increase the diffusional creep rate by approximately 11 orders of magnitude. This superplasticity could prove invaluable for the processing of ceramic components, as the material may be converted back into a conventional, coarse-grained material via additional thermal treatment after forming.
Processing
While the synthesis of nanocrystalline feedstocks in the form of foils, powders, and wires is relatively straightforward, the tendency of nanocrystalline feedstocks to coarsen upon extended exposure to elevated temperatures means that low-temperature and rapid densification techniques are necessary to consolidate these feedstocks into bulk components. A variety of techniques show potential in this respect, such as spark plasma sintering or ultrasonic additive manufacturing, although the synthesis of bulk nanocrystalline components on a commercial scale remains untenable.
See also
Nanoparticle
Quantum dot
References
Crystals
Nanomaterials
Metallurgy | Nanocrystalline material | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,622 | [
"Metallurgy",
"Materials science",
"Crystallography",
"Crystals",
"nan",
"Nanotechnology",
"Nanomaterials"
] |
14,416,367 | https://en.wikipedia.org/wiki/Atrasentan | Atrasentan is an experimental drug that is being studied for the treatment of various types of cancer, including non-small cell lung cancer. It is also being investigated as a therapy for diabetic kidney disease.
It is an endothelin receptor antagonist selective for subtype A (ETA). While other drugs of this type (sitaxentan, ambrisentan) exploit the vasoconstrictive properties of endothelin and are mainly used for the treatment of pulmonary arterial hypertension, atrasentan blocks endothelin induced cell proliferation.
Clinical trials
Atrasentan failed a phase 3 trial for prostate cancer in patients unresponsive to hormone therapy. A second trial confirmed this finding.
In April 2014, de Zeeuw et al. showed that 0.75 mg and 1.25 mg of atrasentan reduced urinary albumin by 35 and 38% respectively, with modest side effects. Patients also had decreased home blood pressures (but no change in office readings) and decreased total cholesterol and LDL. Patients in the 1.25 mg dose group had increased weight gain, presumably due to increased edema, and had to withdraw from the study more often than the placebo or 0.75 mg dose groups. Reductions in proteinuria have been associated with beneficial patient outcomes in diabetic kidney disease with other interventions, but proteinuria is not an accepted end-point by the FDA.
In 2013, SONAR trial was initiated to determine if atrasentan reduces kidney failure in diabetic kidney disease.
In 2024, the phase 3 ALIGN trial found atrasentan to be effective in reducing proteinuria in patients with IgA nephropathy.
References
Endothelin receptor antagonists
Experimental cancer drugs
Benzodioxoles
Carboxylic acids
4-Methoxyphenyl compounds | Atrasentan | [
"Chemistry"
] | 373 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
14,416,795 | https://en.wikipedia.org/wiki/Endothelin%20receptor%20type%20B | Endothelin receptor type B, (ET-B) is a protein that in humans is encoded by the EDNRB gene.
Function
Endothelin receptor type B is a G protein-coupled receptor which activates a phosphatidylinositol-calcium second messenger system. Its ligand, endothelin, consists of a family of three potent vasoactive peptides: ET1, ET2, and ET3. A splice variant, named SVR, has been described; the sequence of the ETB-SVR receptor is identical to ETRB except for the intracellular C-terminal domain. While both splice variants bind ET1, they exhibit different responses upon binding which suggests that they may be functionally distinct.
Regulation
In melanocytic cells the EDNRB gene is regulated by the microphthalmia-associated transcription factor. Mutations in either gene are linked to Waardenburg syndrome.
Clinical significance
The multigenic disorder, Hirschsprung disease type 2, is due to mutation in endothelin receptor type B gene.
Animals
In horses, a mutation in the middle of the EDNRB gene, Ile118Lys, when homozygous, causes Lethal White Syndrome. In this mutation, a mismatch in the DNA replication causes lysine to be made instead of isoleucine. The resulting EDNRB protein is unable to fulfill its role in the development of the embryo, limiting the migration of the melanocyte and enteric neuron precursors. A single copy of the EDNRB mutation, the heterozygous state, produces an identifiable and completely benign spotted coat color called frame overo.
Interactions
Endothelin receptor type B has been shown to interact with Caveolin 1.
Ligands
Agonists
IRL-1620
Antagonists
A-192,621
BQ-788
Bosentan (unselective ETA / ETB antagonist)
See also
Endothelin receptor
References
Further reading
External links
G protein-coupled receptors | Endothelin receptor type B | [
"Chemistry"
] | 415 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,422,554 | https://en.wikipedia.org/wiki/Isoenthalpic%E2%80%93isobaric%20ensemble | The isoenthalpic-isobaric ensemble (constant enthalpy and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant enthalpy and constant pressure applied. It is also called the -ensemble, where the number of particles is also kept as a constant. It was developed by physicist H. C. Andersen in 1980. The ensemble adds another degree of freedom, which represents the variable volume of a system to which the coordinates of all particles are relative. The volume becomes a dynamical variable with potential energy and kinetic energy given by . The enthalpy is a conserved quantity.
Using the isoenthalpic-isobaric ensemble of a Lennard-Jones fluid, it was shown that the Joule–Thomson coefficient and inversion curve can be computed directly from a single molecular dynamics simulation. A complete vapor-compression refrigeration cycle and a vapor–liquid coexistence curve, as well as a reasonable estimate of the supercritical point, can also be simulated with this approach.
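As a rough illustration of how the Joule–Thomson coefficient $\mu_{JT} = (\partial T/\partial P)_H$ might be extracted from such simulations, the Python sketch below fits the average temperature against the imposed pressure for several hypothetical NPH runs at the same enthalpy; the numbers are invented placeholders, not simulation output.

```python
import numpy as np

# Minimal sketch: estimate the Joule-Thomson coefficient mu_JT = (dT/dP)_H by a
# least-squares fit of average temperature vs. pressure taken from several NPH runs
# at the same enthalpy. The numbers are placeholder data, not actual simulation results.
pressures_MPa = np.array([1.0, 2.0, 3.0, 4.0])            # imposed pressures
temperatures_K = np.array([119.8, 120.4, 120.9, 121.5])   # average T from each NPH run

slope, intercept = np.polyfit(pressures_MPa, temperatures_K, 1)
mu_jt = slope   # K / MPa at this enthalpy
print(f"Estimated Joule-Thomson coefficient: {mu_jt:.3f} K/MPa")
# mu_jt > 0 means the fluid cools on expansion (below the inversion curve).
```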
NPH simulation can be carried out using GROMACS and LAMMPS.
References
Statistical ensembles | Isoenthalpic–isobaric ensemble | [
"Physics"
] | 222 | [
"Statistical mechanics stubs",
"Statistical ensembles",
"Statistical mechanics"
] |
14,423,824 | https://en.wikipedia.org/wiki/Chemical%20tests%20in%20mushroom%20identification | Chemical tests in mushroom identification are methods that aid in determining the variety of some fungi. The most useful tests are Melzer's reagent and potassium hydroxide.
Ammonia
Household ammonia can be used. A couple of drops are placed on the flesh. For example, Boletus spadiceus gives a fleeting blue to blue-green reaction.
Iron salts
Iron salts are used commonly in Russula and Bolete identification. It is best to dissolve the salts in water (typically a 10% solution) and then apply to the flesh, but it is sometimes possible to apply the dry salts directly to see a color change. For example, the white flesh of Boletus chrysenteron stains lemon-yellow or olive. Three results are expected with the iron salts tests: no change indicates a negative reaction; a color change to olive, green or blackish green; or a color change to reddish-pink.
Meixner test for amatoxins
The Meixner test (also known as the Wieland test) uses concentrated hydrochloric acid and newspaper to test for the deadly amatoxins found in some species of Amanita, Lepiota, and Galerina. The test yields false positives for some compounds, such as psilocin.
Melzer's reagent
Melzer's reagent can be used to test whether spores are amyloid, nonamyloid, or dextrinoid.
Spores that stain bluish-gray to bluish-black are amyloid
Spores that stain brown to reddish-brown are dextrinoid
This test is normally performed on white spored mushrooms. If the spores are not light colored, a change will not be readily apparent. It is easiest to see the color change under a microscope, but it is possible to see it with the naked eye with a good spore print.
Paradimethylaminobenzaldehyde
In the genus Lyophyllum the lamellae usually turn blue with the application of para-Dimethylaminobenzaldehyde (PDAB or pDAB).
Phenol
A 2–3% aqueous solution of phenol gives a color change in some species when applied to the cap or stem.
Potassium hydroxide
A 3–10% solution of potassium hydroxide (KOH) gives a color change in some species of mushrooms:
In Agaricus, some species such as A. xanthodermus turn yellow with KOH, many have no reaction, and A. subrutilescens turns green.
Distinctive change occurs for some species of Cortinarius and Boletes
Schaeffer reaction
Developed by Julius Schäffer to help with the identification of Agaricus species. A positive reaction of Schaeffer's test, which uses the reaction of aniline and nitric acid on the surface of the mushroom, is indicated by an orange to red color; it is characteristic of species in the section Flavescentes. The compounds responsible for the reaction were named schaefferal A and B to honor Schäffer.
Two intersecting lines are drawn on the surface of the cap, the first with aniline or aniline water, the second with an aqueous solution of 65% nitric acid. The test is considered positive when a bright orange color forms where the lines cross.
Agaricus placomyces and Agaricus xanthodermus produce false negative reactions.
Sometimes referred to as "Schaeffer's reaction", "Schaeffer's cross reaction" or "Schaeffer's test".
Aniline + acid(s)
Kerrigan's 2016 Agaricus of North America P45:
(Referring to Schaffer's reaction) "In fact I recommend switching to the following modified test. Frank (1988) developed an alternative formulation in which aniline oil is combined with glacial acetic acid (GAA, essentially distilled vinegar) in a 50:50 solution. GAA is a much safer, less reactive acid. This single combined reagent is relatively stable over time. A single spot or line applied to the pileus (or other surface). In my experience the newer formulation works as well as Schaffer's while being safer and more convenient."
Sulfo-vanillin
Made from sulfuric acid (H2SO4) and vanillin (vanilla). Used in Russula and Panaeolus identification.
References
Arora, David "Mushrooms Demystified" 2nd Edition, Ten Speed Press, Berkeley, 1986
Jordan, Michael "The Encyclopedia of Fungi of Britain and Europe" Frances Lincoln 2004
Kuo, Michael "100 Edible Mushrooms", University of Michigan Press, Ann Arbor 2007
Largent, David L., Baroni, Timothy J. "How to Identify Mushrooms to Genus VI: Modern Genera" Mad River Press 1988
External links
MushroomExpert.com
Mycology
Mushroom identification | Chemical tests in mushroom identification | [
"Chemistry",
"Biology"
] | 1,012 | [
"Mycology",
"Chemical tests"
] |
14,425,296 | https://en.wikipedia.org/wiki/Hume-Rothery%20rules | Hume-Rothery rules, named after William Hume-Rothery, are a set of basic rules that describe the conditions under which an element could dissolve in a metal, forming a solid solution. There are two sets of rules; one refers to substitutional solid solutions, and the other refers to interstitial solid solutions.
Substitutional solid solution rules
For substitutional solid solutions, the Hume-Rothery rules are as follows (a short numerical check of these criteria is sketched after the list):
The atomic radius of the solute and solvent atoms must differ by no more than 15%: $\frac{\left|r_{\text{solute}} - r_{\text{solvent}}\right|}{r_{\text{solvent}}} \times 100\% \le 15\%.$
The crystal structures of solute and solvent must be similar.
Complete solubility occurs when the solvent and solute have the same valency. A metal is more likely to dissolve a metal of higher valency, than vice versa.
The solute and solvent should have similar electronegativity. If the electronegativity difference is too great, the metals tend to form intermetallic compounds instead of solid solutions.
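A minimal Python sketch of checking these substitutional criteria for a solute/solvent pair is given below. The element data (radii, electronegativities, valences, crystal structures) and the electronegativity cutoff are approximate, assumed values used only for illustration.

```python
# Illustrative check of the substitutional Hume-Rothery criteria for a solute/solvent
# pair. The element data below (atomic radii in pm, Pauling electronegativities,
# common valences, crystal structures) are approximate, assumed values.
elements = {
    "Cu": {"radius": 128, "structure": "FCC", "valence": 2, "electronegativity": 1.90},
    "Ni": {"radius": 124, "structure": "FCC", "valence": 2, "electronegativity": 1.91},
    "Pb": {"radius": 175, "structure": "FCC", "valence": 4, "electronegativity": 2.33},
}

def substitutional_check(solvent, solute, max_radius_mismatch=0.15, max_en_diff=0.4):
    a, b = elements[solvent], elements[solute]
    radius_ok = abs(a["radius"] - b["radius"]) / a["radius"] <= max_radius_mismatch
    structure_ok = a["structure"] == b["structure"]
    valence_ok = a["valence"] == b["valence"]
    # The rules only ask for "similar" electronegativity; 0.4 is an assumed cutoff.
    electroneg_ok = abs(a["electronegativity"] - b["electronegativity"]) <= max_en_diff
    return {"radius": radius_ok, "structure": structure_ok,
            "valence": valence_ok, "electronegativity": electroneg_ok}

print("Ni in Cu:", substitutional_check("Cu", "Ni"))   # all True -> extensive solubility expected
print("Pb in Cu:", substitutional_check("Cu", "Pb"))   # radius/valence/EN fail -> limited solubility
```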
Interstitial solid solution rules
For interstitial solid solutions, the Hume-Rothery Rules are:
Solute atoms should have a smaller radius than 59% of the radius of solvent atoms.
The solute and solvent should have similar electronegativity.
Valency factor: two elements should have the same valence. The greater the difference in valence between solute and solvent atoms, the lower the solubility.
Solid solution rules for multicomponent systems
Fundamentally, the Hume-Rothery rules are restricted to binary systems that form either substitutional or interstitial solid solutions. However, this approach limits the assessment of advanced alloys, which are commonly multicomponent systems. Free energy diagrams (or phase diagrams) offer in-depth knowledge of equilibrium constraints in complex systems. In essence the Hume-Rothery rules (and Pauling's rules) are based on geometrical constraints, and extensions of the rules follow the same approach: the rules can be recast as critical contact criteria describable with Voronoi diagrams. This could ease the theoretical generation of phase diagrams for multicomponent systems.
For alloys containing transition metal elements there is a difficulty in interpreting the Hume-Rothery electron concentration rule, as the e/a values (the number of itinerant electrons per atom) for transition metals have been controversial for a long time, and no satisfactory solution has yet emerged.
See also
CALPHAD
Enthalpy of mixing
Gibbs energy
Phase diagram
References
Further reading
Eponymous chemical rules
Materials science
Rules | Hume-Rothery rules | [
"Physics",
"Materials_science",
"Engineering"
] | 505 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
156,700 | https://en.wikipedia.org/wiki/Communication%20channel | A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Communicating an information signal across distance requires some form of pathway or medium. These pathways, called communication channels, use two types of media: Transmission line-based telecommunications cable (e.g. twisted-pair, coaxial, and fiber-optic cable) and broadcast (e.g. microwave, satellite, radio, and infrared).
In information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a communication channel, which can be sent to (written) and received from (reading) and allows communication of an information signal across time.
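As a concrete illustration of channel capacity, the Shannon–Hartley theorem gives the capacity of a band-limited channel with additive white Gaussian noise as $C = B \log_2(1 + S/N)$; the short Python sketch below evaluates it for an assumed voice-grade bandwidth and signal-to-noise ratio.

```python
import math

# Illustrative Shannon-Hartley estimate of the capacity of a band-limited channel
# with additive white Gaussian noise: C = B * log2(1 + S/N). Bandwidth and SNR
# values are assumptions chosen only for the example.
bandwidth_hz = 3.1e3        # roughly a voice-grade telephone channel
snr_db = 30.0               # signal-to-noise ratio in dB

snr_linear = 10 ** (snr_db / 10)
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Channel capacity ~ {capacity_bps/1e3:.1f} kbit/s")   # ~30.9 kbit/s
```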
Examples
Examples of communications channels include:
A connection between initiating and terminating communication endpoints of a telecommunication circuit.
A single path provided by a transmission medium via either
physical separation, such as by multipair cable or
electrical separation, such as by frequency-division or time-division multiplexing.
A path for conveying electrical or electromagnetic signals, usually distinguished from other parallel paths.
A data storage device which can communicate a message over time.
The portion of a storage medium, such as a track or band, that is accessible to a given reading or writing station or head.
A buffer from which messages can be put and got.
In a communications system, the physical or logical link that connects a data source to a data sink.
A specific radio frequency, pair or band of frequencies, usually named with a letter, number, or codeword, and often allocated by international agreement, for example:
Marine VHF radio uses some 88 channels in the VHF band for two-way FM voice communication. Channel 16, for example, is 156.800 MHz. In the US, seven additional channels, WX1 - WX7, are allocated for weather broadcasts.
Television channels such as North American TV Channel 2 at 55.25 MHz, Channel 13 at 211.25 MHz. Each channel is 6 MHz wide. This was based on the bandwidth required by analog television signals. Since 2006, television broadcasting has switched to digital modulation (digital television) which uses image compression to transmit a television signal in a much smaller bandwidth, so each of these physical channels has been divided into multiple virtual channels each carrying a DTV channel.
Original Wi-Fi uses 13 channels in the ISM bands from 2412 MHz to 2484 MHz in 5 MHz steps.
The radio channel between an amateur radio repeater and an amateur radio operator uses two frequencies often 600 kHz (0.6 MHz) apart. For example, a repeater that transmits on 146.94 MHz typically listens for a ham transmitting on 146.34 MHz.
All of these communication channels share the property that they transfer information. The information is carried through the channel by a signal.
Channel models
Mathematical models of the channel can be made to describe how the input (the transmitted signal) is mapped to the output (the received signal). There exist many types and uses of channel models specific to the field of communication. In particular, separate models are formulated to describe each layer of a communication system.
A channel can be modeled physically by trying to calculate the physical processes which modify the transmitted signal. For example, in wireless communications, the channel can be modeled by calculating the reflection from every object in the environment. A sequence of random numbers might also be added to simulate external interference or electronic noise in the receiver.
Statistically, a communication channel is usually modeled as a tuple consisting of an input alphabet, an output alphabet, and for each pair (i, o) of input and output elements, a transition probability p(i, o). Semantically, the transition probability is the probability that the symbol o is received given that i was transmitted over the channel.
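A minimal Python sketch of such a statistical model is given below: a binary symmetric channel is written down as a table of transition probabilities and then sampled by Monte Carlo; the crossover probability is an assumed example value.

```python
import numpy as np

# Minimal sketch of a statistical channel model: a binary symmetric channel described
# by its transition probabilities p(output | input), then sampled by Monte Carlo.
p = 0.1                                     # probability that a bit is flipped (assumed)
transition = {0: {0: 1 - p, 1: p},          # p(o | i = 0)
              1: {0: p, 1: 1 - p}}          # p(o | i = 1)

rng = np.random.default_rng(0)
tx = rng.integers(0, 2, size=100_000)                  # transmitted bits
# Flipping each bit with probability p is equivalent to sampling from `transition`.
rx = np.where(rng.random(tx.size) < p, 1 - tx, tx)

bit_error_rate = np.mean(tx != rx)
print(f"Simulated BER = {bit_error_rate:.3f} (model crossover probability p = {p})")
```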
Statistical and physical modeling can be combined. For example, in wireless communications the channel is often modeled by a random attenuation (known as fading) of the transmitted signal, followed by additive noise. The attenuation term is a simplification of the underlying physical processes and captures the change in signal power over the course of the transmission. The noise in the model captures external interference or electronic noise in the receiver. If the attenuation term is complex it also describes the relative time a signal takes to get through the channel. The statistical properties of the attenuation in the model are determined by previous measurements or physical simulations.
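A minimal Python sketch of this kind of combined model is shown below, with a Rayleigh-distributed attenuation followed by additive complex Gaussian noise acting on BPSK symbols; the symbol alphabet, noise power, and detection scheme are assumptions made only for the illustration.

```python
import numpy as np

# Minimal sketch of the statistical model described above: each transmitted symbol is
# multiplied by a random (Rayleigh) fading attenuation and corrupted by additive
# complex Gaussian noise, y = h * x + n.
rng = np.random.default_rng(1)
n_symbols = 100_000
x = rng.choice([1 + 0j, -1 + 0j], size=n_symbols)            # BPSK symbols, unit power

h = (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols)) / np.sqrt(2)  # Rayleigh fading
noise_power = 0.1
n = np.sqrt(noise_power / 2) * (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols))

y = h * x + n                                                 # received signal
x_hat = np.where((y * np.conj(h)).real >= 0, 1 + 0j, -1 + 0j) # coherent detection (h known)
print(f"Symbol error rate over the fading channel: {np.mean(x_hat != x):.4f}")
```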
Communication channels are also studied in discrete-alphabet modulation schemes. The mathematical model consists of a transition probability that specifies an output distribution for each possible sequence of channel inputs. In information theory, it is common to start with memoryless channels in which the output probability distribution only depends on the current channel input.
A channel model may either be digital or analog.
Digital channel models
In a digital channel model, the transmitted message is modeled as a digital signal at a certain protocol layer. Underlying protocol layers are replaced by a simplified model. The model may reflect channel performance measures such as bit rate, bit errors, delay, delay variation, etc. Examples of digital channel models include:
Binary symmetric channel (BSC), a discrete memoryless channel with a certain bit error probability
Binary asymmetric channel (BAC), similar to BSC but the probability of a flip from 0 to 1 and vice-versa is unequal
Binary bursty bit error channel model, a channel with memory
Binary erasure channel (BEC), a discrete channel with a certain bit error detection (erasure) probability
Packet erasure channel, where packets are lost with a certain packet loss probability or packet error rate
Arbitrarily varying channel (AVC), where the behavior and state of the channel can change randomly
Analog channel models
In an analog channel model, the transmitted message is modeled as an analog signal. The model can be a linear or non-linear, time-continuous or time-discrete (sampled), memoryless or dynamic (resulting in burst errors), time-invariant or time-variant (also resulting in burst errors), baseband, passband (RF signal model), real-valued or complex-valued signal model. The model may reflect the following channel impairments:
Noise model, for example
Additive white Gaussian noise (AWGN) channel, a linear continuous memoryless model
Phase noise model
Interference model, for example crosstalk (co-channel interference) and intersymbol interference (ISI)
Distortion model, for example a non-linear channel model causing intermodulation distortion (IMD)
Frequency response model, including attenuation and phase-shift
Group delay model
Modelling of underlying physical layer transmission techniques, for example a complex-valued equivalent baseband model of modulation and frequency response
Radio frequency propagation model, for example
Log-distance path loss model
Fading model, for example Rayleigh fading, Ricean fading, log-normal shadow fading and frequency selective (dispersive) fading
Doppler shift model, which combined with fading results in a time-variant system
Ray tracing models, which attempt to model the signal propagation and distortions for specified transmitter-receiver geometries, terrain types, and antennas
Propagation graph, models signal dispersion by representing the radio propagation environment by a graph.
Mobility models, which also causes a time-variant system
Types
Digital (discrete) or analog (continuous) channel
Transmission medium, for example a fiber-optic cable
Multiplexed channel
Computer network virtual channel
Simplex communication, duplex communication or half-duplex communication channel
Return channel
Uplink or downlink (upstream or downstream channel)
Broadcast channel, unicast channel or multicast channel
Channel performance measures
These are examples of commonly used channel capacity and performance measures:
Spectral bandwidth in Hertz
Symbol rate in baud, symbols/s
Digital bandwidth in bit/s measures: gross bit rate (signalling rate), net bit rate (information rate), channel capacity, and maximum throughput
Channel utilization
Spectral efficiency
Signal-to-noise ratio in decibel measures: signal-to-interference ratio, Eb/N0
Bit error rate (BER), packet error rate (PER)
Latency in seconds: propagation time, transmission time, round-trip delay, end-to-end delay
Packet delay variation
Eye pattern
Multi-terminal channels, with application to cellular systems
In networks, as opposed to point-to-point communication, the communication media can be shared between multiple communication endpoints (terminals). Depending on the type of communication, different terminals can cooperate or interfere with each other. In general, any complex multi-terminal network can be considered as a combination of simplified multi-terminal channels. The following channels are the principal multi-terminal channels first introduced in the field of information theory:
A point-to-multipoint channel, also known as broadcasting medium (not to be confused with broadcasting channel): In this channel, a single sender transmits multiple messages to different destination nodes. All wireless channels except directional links can be considered as broadcasting media, but may not always provide broadcasting service. The downlink of a cellular system can be considered as a point-to-multipoint channel, if only one cell is considered and inter-cell co-channel interference is neglected. However, the communication service of a phone call is unicasting.
Multiple access channel: In this channel, multiple senders transmit multiple possible different messages over a shared physical medium to one or several destination nodes. This requires a channel access scheme, including a media access control (MAC) protocol combined with a multiplexing scheme. This channel model has applications in the uplink of cellular networks.
Relay channel: In this channel, one or several intermediate nodes (called relay, repeater or gap filler nodes) cooperate with a sender to send the message to an ultimate destination node.
Interference channel: In this channel, two different senders transmit their data to different destination nodes. Hence, the different senders can have a possible crosstalk or co-channel interference on the signal of each other. The inter-cell interference in cellular wireless communications is an example of an interference channel. In spread-spectrum systems like 3G, interference also occurs inside the cell if non-orthogonal codes are used.
A unicast channel is a channel that provides a unicast service, i.e. that sends data addressed to one specific user. An established phone call is an example.
A broadcast channel is a channel that provides a broadcasting service, i.e. that sends data addressed to all users in the network. Cellular network examples are the paging service as well as the Multimedia Broadcast Multicast Service.
A multicast channel is a channel where data is addressed to a group of subscribing users. LTE examples are the physical multicast channel (PMCH) and multicast broadcast single frequency network (MBSFN).
References
C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, (July and October, 1948)
Information theory
Telecommunication theory
Television terminology | Communication channel | [
"Mathematics",
"Technology",
"Engineering"
] | 2,313 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
156,706 | https://en.wikipedia.org/wiki/Effective%20mass%20%28solid-state%20physics%29 | In solid state physics, a particle's effective mass (often denoted ) is the mass that it seems to have when responding to forces, or the mass that it seems to have when interacting with other identical particles in a thermal distribution. One of the results from the band theory of solids is that the movement of particles in a periodic potential, over long distances larger than the lattice spacing, can be very different from their motion in a vacuum. The effective mass is a quantity that is used to simplify band structures by modeling the behavior of a free particle with that mass. For some purposes and some materials, the effective mass can be considered to be a simple constant of a material. In general, however, the value of effective mass depends on the purpose for which it is used, and can vary depending on a number of factors.
For electrons or electron holes in a solid, the effective mass is usually stated as a factor multiplying the rest mass of an electron, me (9.11 × 10−31 kg). This factor is usually in the range 0.01 to 10, but can be lower or higher—for example, reaching 1,000 in exotic heavy fermion materials, or anywhere from zero to infinity (depending on definition) in graphene. As it simplifies the more general band theory, the electronic effective mass can be seen as an important basic parameter that influences measurable properties of a solid, including everything from the efficiency of a solar cell to the speed of an integrated circuit.
Simple case: parabolic, isotropic dispersion relation
At the highest energies of the valence band in many semiconductors (Ge, Si, GaAs, ...), and the lowest energies of the conduction band in some semiconductors (GaAs, ...), the band structure can be locally approximated as
$$E(\mathbf{k}) = E_0 + \frac{\hbar^2 \mathbf{k}^2}{2 m^*}$$
where $E(\mathbf{k})$ is the energy of an electron at wavevector $\mathbf{k}$ in that band, $E_0$ is a constant giving the edge of energy of that band, and $m^*$ is a constant (the effective mass).
It can be shown that the electrons placed in these bands behave as free electrons except with a different mass, as long as their energy stays within the range of validity of the approximation above. As a result, the electron mass in models such as the Drude model must be replaced with the effective mass.
One remarkable property is that the effective mass can become negative, when the band curves downwards away from a maximum. As a result of the negative mass, the electrons respond to electric and magnetic forces by gaining velocity in the opposite direction compared to normal; even though these electrons have negative charge, they move in trajectories as if they had positive charge (and positive mass). This explains the existence of valence-band holes, the positive-charge, positive-mass quasiparticles that can be found in semiconductors.
In any case, if the band structure has the simple parabolic form described above, then the value of effective mass is unambiguous. Unfortunately, this parabolic form is not valid for describing most materials. In such complex materials there is no single definition of "effective mass" but instead multiple definitions, each suited to a particular purpose. The rest of the article describes these effective masses in detail.
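As a numerical illustration of the parabolic approximation, the Python sketch below recovers the effective mass from the curvature of a sampled band via $m^* = \hbar^2 / (\mathrm{d}^2 E/\mathrm{d}k^2)$; the band itself is synthesized test data with an assumed mass of $0.2\,m_e$.

```python
import numpy as np

# Minimal sketch: extract a (scalar) effective mass from a sampled band E(k) by
# fitting the curvature near the band edge, m* = hbar^2 / (d^2 E / d k^2).
# A parabolic band with m* = 0.2 m_e is synthesized here as test data.
hbar = 1.054571817e-34      # J s
m_e = 9.1093837e-31         # kg
m_true = 0.2 * m_e

k = np.linspace(-1e9, 1e9, 201)                 # wavevector in 1/m
E = (hbar ** 2) * k ** 2 / (2 * m_true)         # band energy in J

curvature = np.polyfit(k, E, 2)[0] * 2          # d^2E/dk^2 from a quadratic fit
m_eff = hbar ** 2 / curvature
print(f"Recovered effective mass: {m_eff / m_e:.3f} m_e")   # ~0.200
```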
Intermediate case: parabolic, anisotropic dispersion relation
In some important semiconductors (notably, silicon) the lowest energies of the conduction band are not symmetrical, as the constant-energy surfaces are now ellipsoids, rather than the spheres in the isotropic case. Each conduction band minimum can be approximated only by
$$E(\mathbf{k}) = E_0 + \frac{\hbar^2 (k_x - k_{0,x})^2}{2 m_x^*} + \frac{\hbar^2 (k_y - k_{0,y})^2}{2 m_y^*} + \frac{\hbar^2 (k_z - k_{0,z})^2}{2 m_z^*}$$
where the $x$, $y$, and $z$ axes are aligned to the principal axes of the ellipsoids, and $m_x^*$, $m_y^*$ and $m_z^*$ are the inertial effective masses along these different axes. The offsets $k_{0,x}$, $k_{0,y}$, and $k_{0,z}$ reflect that the conduction band minimum is no longer centered at zero wavevector. (These effective masses correspond to the principal components of the inertial effective mass tensor, described later.)
In this case, the electron motion is no longer directly comparable to a free electron; the speed of an electron will depend on its direction, and it will accelerate to a different degree depending on the direction of the force. Still, in crystals such as silicon the overall properties such as conductivity appear to be isotropic. This is because there are multiple valleys (conduction-band minima), each with effective masses rearranged along different axes. The valleys collectively act together to give an isotropic conductivity. It is possible to average the different axes' effective masses together in some way, to regain the free electron picture. However, the averaging method turns out to depend on the purpose:
General case
In general the dispersion relation cannot be approximated as parabolic, and in such cases the effective mass should be precisely defined if it is to be used at all.
Here a commonly stated definition of effective mass is the inertial effective mass tensor defined below; however, in general it is a matrix-valued function of the wavevector, and even more complex than the band structure.
Other effective masses are more relevant to directly measurable phenomena.
Inertial effective mass tensor
A classical particle under the influence of a force accelerates according to Newton's second law, $\mathbf{a} = m^{-1}\mathbf{F}$, or alternatively, the momentum changes according to $\frac{d\mathbf{p}}{dt} = \mathbf{F}$. This intuitive principle appears identically in semiclassical approximations derived from band structure when interband transitions can be ignored for sufficiently weak external fields.
The force gives a rate of change in crystal momentum ħk:
d(ħk)/dt = F
where ħ is the reduced Planck constant.
Acceleration for a wave-like particle becomes the rate of change in group velocity:
a = d vg/dt = d/dt [(1/ħ) ∇k E(k)] = (1/ħ) [∇k∇k E(k)] · (dk/dt)
where ∇k is the del operator in reciprocal space. The last step follows from using the chain rule for a total derivative for a quantity with indirect dependencies, because the direct result of the force is the change in k given above, which indirectly results in a change in the group velocity vg.
Combining these two equations yields
a = (1/ħ²) [∇k∇k E(k)] F
using the dot product rule with a uniform force (∇k F = 0). Here ∇k∇k E(k) is the Hessian matrix of E(k) in reciprocal space. We see that the equivalent of the Newtonian reciprocal inertial mass for a free particle defined by a = m⁻¹F has become a tensor quantity M⁻¹ whose elements are
[M⁻¹]ij = (1/ħ²) ∂²E/(∂ki ∂kj)
This tensor allows the acceleration and force to be in different directions, and for the magnitude of the acceleration to depend on the direction of the force.
For parabolic bands, the off-diagonal elements of M⁻¹ are zero, and the diagonal elements are constants (the reciprocals of the effective masses along the principal axes, 1/mx*, 1/my* and 1/mz*).
For isotropic bands the diagonal elements must all be equal and the off-diagonal elements must all be equal.
For parabolic isotropic bands, M⁻¹ = (1/m*) I, where m* is a scalar effective mass and I is the identity.
In general, the elements of M⁻¹ are functions of k.
The inverse, M = (M⁻¹)⁻¹, is known as the effective mass tensor. Note that it is not always possible to invert M⁻¹.
For bands with linear dispersion such as with photons or electrons in graphene, the group velocity is fixed, i.e. electrons travelling parallel to the force direction cannot be accelerated and the diagonal elements of M⁻¹ are obviously zero. However, electrons travelling with a component perpendicular to the force can be accelerated in the direction of the force, and the off-diagonal elements of M⁻¹ are non-zero. In fact the off-diagonal elements scale inversely with k, i.e. they diverge (become infinite) for small k. This is why the electrons in graphene are sometimes said to have infinite mass (due to the zeros on the diagonal of M⁻¹) and sometimes said to be massless (due to the divergence on the off-diagonals).
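A minimal numerical sketch of the tensor definition above, in Python: it builds a hypothetical anisotropic parabolic band (silicon-like longitudinal and transverse masses of 0.92 and 0.19 electron masses, chosen purely for illustration) and evaluates the Hessian of E(k) divided by ħ² at the band minimum; the grid sizes and the helper name inverse_mass_tensor are assumptions, not part of the theory itself.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # free electron mass, kg

def inverse_mass_tensor(E, kx, ky, kz, idx):
    """Finite-difference Hessian of E over a regular 3-D k grid, divided by hbar^2.
    Returns the 3x3 inverse inertial effective mass tensor (1/kg) at grid index idx."""
    grads = np.gradient(E, kx, ky, kz)                   # dE/dk_x, dE/dk_y, dE/dk_z
    hess = [np.gradient(g, kx, ky, kz) for g in grads]   # second derivatives
    return np.array([[hess[a][b][idx] for b in range(3)] for a in range(3)]) / HBAR**2

# Hypothetical anisotropic parabolic band (illustrative longitudinal/transverse masses)
mx, my, mz = 0.92 * M_E, 0.19 * M_E, 0.19 * M_E
k = np.linspace(-1e9, 1e9, 41)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
E = HBAR**2 * (KX**2 / (2 * mx) + KY**2 / (2 * my) + KZ**2 / (2 * mz))
Minv = inverse_mass_tensor(E, k, k, k, (20, 20, 20))     # at the band minimum k = 0
print(np.diag(Minv) * M_E)   # reciprocal masses in units of 1/m_e: ~[1.09, 5.26, 5.26]
```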
Cyclotron effective mass
Classically, a charged particle in a magnetic field moves in a helix along the magnetic field axis. The period T of its motion depends on its mass m and charge e,
T = 2πm / (eB)
where B is the magnetic flux density.
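For a sense of scale, the classical relation can be evaluated directly; the sketch below (Python, assuming a free electron in a 1 T field purely for illustration) gives the familiar cyclotron frequency of about 28 GHz, and a lighter effective mass raises this frequency proportionally.

```python
import math

e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # free electron mass, kg
B = 1.0                  # assumed magnetic flux density, T

T = 2 * math.pi * m_e / (e * B)   # classical cyclotron period
print(T, 1 / T)                   # ~3.6e-11 s and ~2.8e10 Hz (about 28 GHz)
```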
For particles in asymmetrical band structures, the particle no longer moves exactly in a helix; however, its motion transverse to the magnetic field still traces a closed loop (not necessarily a circle). Moreover, the time to complete one of these loops still varies inversely with magnetic field, and so it is possible to define a cyclotron effective mass from the measured period, using the above equation.
The semiclassical motion of the particle can be described by a closed loop in k-space. Throughout this loop, the particle maintains a constant energy, as well as a constant momentum along the magnetic field axis. By defining A to be the k-space area enclosed by this loop (this area depends on the energy E, the direction of the magnetic field, and the on-axis wavevector kB), then it can be shown that the cyclotron effective mass depends on the band structure via the derivative of this area in energy:
m* = (ħ² / 2π) ∂A(E, kB) / ∂E
Typically, experiments that measure cyclotron motion (cyclotron resonance, De Haas–Van Alphen effect, etc.) are restricted to only probe motion for energies near the Fermi level.
In two-dimensional electron gases, the cyclotron effective mass is defined only for one magnetic field direction (perpendicular) and the out-of-plane wavevector drops out. The cyclotron effective mass therefore is only a function of energy, and it turns out to be exactly related to the density of states at that energy via the relation g(E) = g_v m*/(πħ²), where g_v is the valley degeneracy. Such a simple relationship does not apply in three-dimensional materials.
Density of states effective masses (lightly doped semiconductors)
In semiconductors with low levels of doping, the electron concentration in the conduction band is in general given by
n_e = N_c exp(−(E_c − E_F) / (k_B T))
where E_F is the Fermi level, E_c is the minimum energy of the conduction band, and N_c is a concentration coefficient that depends on temperature. The above relationship for n_e can be shown to apply for any conduction band shape (including non-parabolic, asymmetric bands), provided the doping is weak (E_c − E_F ≫ k_B T); this is a consequence of Fermi–Dirac statistics limiting towards Maxwell–Boltzmann statistics.
The concept of effective mass is useful to model the temperature dependence of N_c, thereby allowing the above relationship to be used over a range of temperatures. In an idealized three-dimensional material with a parabolic band, the concentration coefficient is given by
N_c = 2 (2π m_e* k_B T / h²)^(3/2)
In semiconductors with non-simple band structures, this relationship is used to define an effective mass, known as the density of states effective mass of electrons. The name "density of states effective mass" is used since the above expression for N_c is derived via the density of states for a parabolic band.
In practice, the effective mass extracted in this way is not quite constant in temperature (N_c does not exactly vary as T^(3/2)). In silicon, for example, this effective mass varies by a few percent between absolute zero and room temperature because the band structure itself slightly changes in shape. These band structure distortions are a result of changes in electron–phonon interaction energies, with the lattice's thermal expansion playing a minor role.
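The expression for the concentration coefficient can be evaluated numerically; the short Python sketch below assumes, purely for illustration, a density of states effective mass of 1.08 electron masses (roughly the textbook figure for electrons in silicon) and reproduces the familiar order of magnitude of a few times 10¹⁹ cm⁻³ at room temperature.

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # free electron mass, kg

def Nc(m_star, T=300.0):
    """Effective density of states 2*(2*pi*m*kB*T/h^2)^(3/2) for a parabolic band, in m^-3."""
    return 2 * (2 * math.pi * m_star * kB * T / h**2) ** 1.5

print(Nc(1.08 * m_e) / 1e6)   # ~2.8e19 cm^-3 at 300 K for the assumed mass
```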
Similarly, the number of holes in the valence band, and the density of states effective mass of holes, are defined by:
p = N_v exp(−(E_F − E_v) / (k_B T)),  with  N_v = 2 (2π m_h* k_B T / h²)^(3/2)
where E_v is the maximum energy of the valence band. Practically, this effective mass tends to vary greatly between absolute zero and room temperature in many materials (e.g., a factor of two in silicon), as there are multiple valence bands with distinct and significantly non-parabolic character, all peaking near the same energy.
Determination
Experimental
Traditionally effective masses were measured using cyclotron resonance, a method in which microwave absorption of a semiconductor immersed in a magnetic field goes through a sharp peak when the microwave frequency equals the cyclotron frequency fc = eB/(2πm*). In recent years effective masses have more commonly been determined through measurement of band structures using techniques such as angle-resolved photoemission spectroscopy (ARPES) or, most directly, the de Haas–van Alphen effect. Effective masses can also be estimated using the coefficient γ of the linear term in the low-temperature electronic specific heat at constant volume cv. The specific heat depends on the effective mass through the density of states at the Fermi level and as such is a measure of degeneracy as well as band curvature. Very large estimates of carrier mass from specific heat measurements have given rise to the concept of heavy fermion materials. Since carrier mobility depends on the ratio of carrier collision lifetime to effective mass, masses can in principle be determined from transport measurements, but this method is not practical since carrier collision probabilities are typically not known a priori. The optical Hall effect is an emerging technique for measuring the free charge carrier density, effective mass and mobility parameters in semiconductors. The optical Hall effect measures the analogue of the quasi-static electric-field-induced electrical Hall effect at optical frequencies in conductive and complex layered materials. The optical Hall effect also permits characterization of the anisotropy (tensor character) of the effective mass and mobility parameters.
Theoretical
A variety of theoretical methods including density functional theory, k·p perturbation theory, and others are used to supplement and support the various experimental measurements described in the previous section, including interpreting, fitting, and extrapolating these measurements. Some of these theoretical methods can also be used for predictions of effective mass in the absence of any experimental data, for example to study materials that have not yet been created in the laboratory.
Significance
The effective mass is used in transport calculations, such as transport of electrons under the influence of fields or carrier gradients, but it also is used to calculate the carrier density and density of states in semiconductors. These masses are related but, as explained in the previous sections, are not the same because the weightings of various directions and wavevectors are different. These differences are important, for example in thermoelectric materials, where high conductivity, generally associated with light mass, is desired at the same time as high Seebeck coefficient, generally associated with heavy mass. Methods for assessing the electronic structures of different materials in this context have been developed.
Certain group III–V compounds such as gallium arsenide (GaAs) and indium antimonide (InSb) have far smaller effective masses than tetrahedral group IV materials like silicon and germanium. In the simplest Drude picture of electronic transport, the maximum obtainable charge carrier velocity is inversely proportional to the effective mass: v = eτE/m*, with e being the electronic charge and τ the Drude relaxation time. The ultimate speed of integrated circuits depends on the carrier velocity, so the low effective mass is the fundamental reason that GaAs and its derivatives are used instead of Si in high-bandwidth applications like cellular telephony.
In April 2017, researchers at Washington State University claimed to have created a fluid with negative effective mass inside a Bose–Einstein condensate, by engineering the dispersion relation.
See also
Models of solids and crystals:
Tight-binding model
Free electron model
Nearly free electron model
Footnotes
References
This book contains an exhaustive but accessible discussion of the topic with extensive comparison between calculations and experiment.
S. Pekar, The method of effective electron mass in crystals, Zh. Eksp. Teor. Fiz. 16, 933 (1946).
External links
NSM archive
Condensed matter physics
Mass | Effective mass (solid-state physics) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,074 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Phases of matter",
"Materials science",
"Size",
"Condensed matter physics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
156,787 | https://en.wikipedia.org/wiki/Desalination | Desalination is a process that removes mineral components from saline water. More generally, desalination is the removal of salts and minerals from a substance. One example is soil desalination. This is important for agriculture. It is possible to desalinate saltwater, especially sea water, to produce water for human consumption or irrigation. The by-product of the desalination process is brine. Many seagoing ships and submarines use desalination. Modern interest in desalination mostly focuses on cost-effective provision of fresh water for human use. Along with recycled wastewater, it is one of the few water resources independent of rainfall.
Due to its energy consumption, desalinating sea water is generally more costly than fresh water from surface water or groundwater, water recycling and water conservation; however, these alternatives are not always available and depletion of reserves is a critical problem worldwide. Desalination processes use either thermal methods (in the case of distillation) or membrane-based methods (e.g. in the case of reverse osmosis).
An estimate in 2018 found that "18,426 desalination plants are in operation in over 150 countries. They produce 87 million cubic meters of clean water each day and supply over 300 million people." The energy intensity has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20–30 kWh/m3 in 1970. Nevertheless, desalination represented about 25% of the energy consumed by the water sector in 2016.
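Taken at face value, the 2018 figures quoted above imply a supply of roughly 290 litres per person per day; the check below is a back-of-envelope calculation from those two numbers only.

```python
daily_output_m3 = 87e6     # 87 million cubic meters per day (2018 estimate quoted above)
people_supplied = 300e6    # "over 300 million people"
print(daily_output_m3 / people_supplied * 1000)   # ~290 litres per person per day
```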
History
Ancient Greek philosopher Aristotle observed in his work Meteorology that "salt water, when it turns into vapour, becomes sweet and the vapour does not form salt water again when it condenses", and that a fine wax vessel would hold potable water after being submerged long enough in seawater, having acted as a membrane to filter the salt.
At the same time the desalination of seawater was recorded in China. Both the Classic of Mountains and Water Seas in the Period of the Warring States and the Theory of the Same Year in the Eastern Han Dynasty mentioned that people found that the bamboo mats used for steaming rice would form a thin outer layer after long use. The resulting thin film had adsorption and ion exchange functions, which could adsorb salt.
Numerous examples of experimentation in desalination appeared throughout Antiquity and the Middle Ages, but desalination became feasible on a large scale only in the modern era. A good example of this experimentation comes from Leonardo da Vinci (Florence, 1452), who realized that distilled water could be made cheaply in large quantities by adapting a still to a cookstove. During the Middle Ages elsewhere in Central Europe, work continued on distillation refinements, although not necessarily directed towards desalination.
The first major land-based desalination plant may have been installed under emergency conditions on an island off the coast of Tunisia in 1560. It is believed that a garrison of 700 Spanish soldiers was besieged by the Turkish army and that, during the siege, the captain in charge fabricated a still capable of producing 40 barrels of fresh water per day, though details of the device have not been reported.
Before the Industrial Revolution, desalination was primarily of concern to oceangoing ships, which otherwise needed to keep on board supplies of fresh water. Sir Richard Hawkins (1562–1622), who made extensive travels in the South Seas, reported that he had been able to supply his men with fresh water by means of shipboard distillation. Additionally, during the early 1600s, several prominent figures of the era such as Francis Bacon and Walter Raleigh published reports on desalination. These reports and others set the climate for the first patent dispute concerning desalination apparatus. The first two patents regarding water desalination were approved in 1675 and 1683 (patents No. 184 and No. 226, published by William Walcot and Robert Fitzgerald (and others), respectively). Nevertheless, neither of the two inventions entered service as a consequence of scale-up difficulties. No significant improvements to the basic seawater distillation process were made during the 150 years from the mid-1600s until 1800.
When the frigate Protector was sold to Denmark in the 1780s (as the ship Hussaren) its still was studied and recorded in great detail. In the United States, Thomas Jefferson catalogued heat-based methods going back to the 1500s, and formulated practical advice that was publicized to all U.S. ships on the reverse side of sailing clearance permits.
Beginning about 1800, things started changing as a consequence of the appearance of the steam engine and the so-called age of steam. Knowledge of the thermodynamics of steam processes and the need for a pure water source for its use in boilers generated a positive effect regarding distilling systems. Additionally, the spread of European colonialism induced a need for freshwater in remote parts of the world, thus creating the appropriate climate for water desalination.
In parallel with the development and improvement of systems using steam (multiple-effect evaporators), these types of devices quickly demonstrated their desalination potential. In 1852, Alphonse René le Mire de Normandy was issued a British patent for a vertical tube seawater distilling unit that, thanks to its simplicity of design and ease of construction, gained popularity for shipboard use. Land-based units did not significantly appear until the latter half of the nineteenth century. In the 1860s, the US Army purchased three Normandy evaporators, each rated at 7000 gallons/day, and installed them on the islands of Key West and Dry Tortugas. Another land-based plant was installed at Suakin during the 1880s that provided freshwater to the British troops there. It consisted of six-effect distillers with a capacity of 350 tons/day.
After World War II, many technologies were developed or improved, such as Multi Effect Flash desalination (MEF) and Multi Stage Flash desalination (MSF). Another notable technology is freeze-thaw desalination. Freeze-thaw desalination (cryo-desalination or FD) excludes dissolved minerals from saline water through crystallization.
The Office of Saline Water was created in the United States Department of the Interior in 1955 in accordance with the Saline Water Conversion Act of 1952. This act was motivated by a water shortage in California and inland western United States. The Department of the Interior allocated resources including research grants, expert personnel, patent data, and land for experiments to further advancements.
The results of these efforts included the construction of over 200 electrodialysis and distillation plants globally, reverse osmosis (RO) research, and international cooperation (for example, the First International Water Desalination Symposium and Exposition in 1965). The Office of Saline Water merged into the Office of Water Resources Research in 1974.
The first industrial desalination plant in the United States opened in Freeport, Texas in 1961 after a decade of regional drought.
By the late 1960s and the early 1970s, RO started to show promising results to replace traditional thermal desalination units. Research took place at state universities in California, at the Dow Chemical Company and DuPont. Many studies focused on ways to optimize desalination systems. The first commercial RO plant, the Coalinga desalination plant, was inaugurated in California in 1965 for brackish water. Dr. Sidney Loeb, in conjunction with staff at UCLA, designed a large pilot plant to gather data on RO, which proved successful enough to provide freshwater to the residents of Coalinga. This was a milestone in desalination technology, as it proved the feasibility of RO and its advantages compared to existing technologies (efficiency, no phase change required, ambient temperature operation, scalability, and ease of standardization). A few years later, in 1975, the first sea water reverse osmosis desalination plant came into operation.
As of 2000, more than 2000 plants were in operation. The largest are in Saudi Arabia, Israel, and the UAE; and the biggest plant, with a volume of 1,401,000 m3/d, is in Saudi Arabia (Ras Al Khair).
As of 2021, 22,000 plants were in operation. In 2024 the Catalan government installed a floating offshore plant near the port of Barcelona and purchased 12 mobile desalination units for the northern region of the Costa Brava to combat the severe drought.
In 2012, cost averaged $0.75 per cubic meter. By 2022, that had declined (before inflation) to $0.41. Desalinated supplies are growing at a 10%+ compound rate, doubling in abundance every seven years.
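The doubling claim follows directly from the stated growth rate; a one-line check (assuming a steady 10% compound annual growth rate) gives a doubling time of about 7.3 years.

```python
import math
print(math.log(2) / math.log(1.10))   # ~7.3 years to double at 10% compound growth
```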
Applications
There are now about 21,000 desalination plants in operation around the globe. The biggest ones are in the United Arab Emirates, Saudi Arabia, and Israel. The world's largest desalination plant is located in Saudi Arabia (Ras Al-Khair Power and Desalination Plant) with a capacity of 1,401,000 cubic meters per day.
Desalination is currently expensive compared to most alternative sources of water, and only a very small fraction of total human use is satisfied by desalination. It is usually only economically practical for high-valued uses (such as household and industrial uses) in arid areas. However, there is growth in desalination for agricultural use and highly populated areas such as Singapore or California. The most extensive use is in the Persian Gulf.
While noting costs are falling, and generally positive about the technology for affluent areas in proximity to oceans, a 2005 study argued, "Desalinated water may be a solution for some water-stress regions, but not for places that are poor, deep in the interior of a continent, or at high elevation. Unfortunately, that includes some of the places with the biggest water problems," and "Indeed, one needs to lift the water by 2000 m, or transport it over more than 1600 km to get transport costs equal to the desalination costs."
Thus, it may be more economical to transport fresh water from somewhere else than to desalinate it. In places far from the sea, like New Delhi, or in high places, like Mexico City, transport costs could match desalination costs. Desalinated water is also expensive in places that are both somewhat far from the sea and somewhat high, such as Riyadh and Harare. By contrast in other locations transport costs are much less, such as Beijing, Bangkok, Zaragoza, Phoenix, and, of course, coastal cities like Tripoli. After desalination at Jubail, Saudi Arabia, water is pumped 320 km inland to Riyadh. For coastal cities, desalination is increasingly viewed as a competitive choice.
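The lifting comparison quoted above can be sanity-checked with the potential-energy formula E = ρgh; the sketch below ignores pump and friction losses, so the real figure would be somewhat higher, and it simply reuses the 2000 m figure from the quoted study.

```python
rho, g, h = 1000.0, 9.81, 2000.0   # water density (kg/m^3), gravity (m/s^2), lift height (m)
energy_joules = rho * g * h        # ideal energy to lift 1 m^3 by 2000 m, ~1.96e7 J
print(energy_joules / 3.6e6)       # ~5.4 kWh per m^3, already above typical seawater RO energy use
```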
In 2023, Israel was using desalination to replenish the Sea of Galilee's water supply.
Not everyone is convinced that desalination is or will be economically viable or environmentally sustainable for the foreseeable future. Debbie Cook wrote in 2011 that desalination plants can be energy intensive and costly. Therefore, water-stressed regions might do better to focus on conservation or other water supply solutions than invest in desalination plants.
Technologies
Desalination is an artificial process by which saline water (generally sea water) is converted to fresh water. The most common desalination processes are distillation and reverse osmosis.
There are several methods. Each has advantages and disadvantages but all are useful. The methods can be divided into membrane-based (e.g., reverse osmosis) and thermal-based (e.g., multistage flash distillation) methods. The traditional process of desalination is distillation (i.e., boiling and re-condensation of seawater to leave salt and impurities behind).
There are currently two technologies with a large majority of the world's desalination capacity: multi-stage flash distillation and reverse osmosis.
Distillation
Solar distillation
Solar distillation mimics the natural water cycle, in which the sun heats sea water enough for evaporation to occur. After evaporation, the water vapor is condensed onto a cool surface. There are two types of solar desalination. The first type uses photovoltaic cells to convert solar energy to electrical energy to power desalination. The second type converts solar energy to heat, and is known as solar thermal powered desalination.
Natural evaporation
Water can evaporate through several other physical effects besides solar irradiation. These effects have been included in a multidisciplinary desalination methodology in the IBTS Greenhouse. The IBTS is an industrial desalination (power) plant on one side and a greenhouse operating with the natural water cycle (scaled down 1:10) on the other side. The various processes of evaporation and condensation are hosted in low-tech utilities, partly underground and in the architectural shape of the building itself. This integrated biotectural system is most suitable for large-scale desert greening, as it has a km2 footprint for the water distillation and the same for landscape transformation in desert greening, that is, the regeneration of natural fresh water cycles.
Vacuum distillation
In vacuum distillation atmospheric pressure is reduced, thus lowering the temperature required to evaporate the water. Liquids boil when the vapor pressure equals the ambient pressure and vapor pressure increases with temperature. Effectively, liquids boil at a lower temperature, when the ambient atmospheric pressure is less than usual atmospheric pressure. Thus, because of the reduced pressure, low-temperature "waste" heat from electrical power generation or industrial processes can be employed.
Multi-stage flash distillation
Water is evaporated and separated from sea water through multi-stage flash distillation, which is a series of flash evaporations. Each subsequent flash process uses energy released from the condensation of the water vapor from the previous step.
Multiple-effect distillation
Multiple-effect distillation (MED) works through a series of steps called "effects". Incoming water is sprayed onto pipes which are then heated to generate steam. The steam is then used to heat the next batch of incoming sea water. To increase efficiency, the steam used to heat the sea water can be taken from nearby power plants. Although this method is the most thermodynamically efficient among methods powered by heat, a few limitations exist, such as a maximum temperature and a maximum number of effects.
Vapor-compression distillation
Vapor-compression evaporation involves using either a mechanical compressor or a jet stream to compress the vapor present above the liquid. The compressed vapor is then used to provide the heat needed for the evaporation of the rest of the sea water. Since this system only requires power, it is more cost effective if kept at a small scale.
Membrane distillation
Membrane distillation uses a temperature difference across a membrane to evaporate vapor from a brine solution and condense pure water on the colder side. The design of the membrane can have a significant effect on efficiency and durability. A study found that a membrane created via co-axial electrospinning of PVDF-HFP and silica aerogel was able to filter 99.99% of salt after continuous 30-day usage.
Osmosis
Reverse osmosis
The leading process for desalination in terms of installed capacity and yearly growth is reverse osmosis (RO). The RO membrane processes use semipermeable membranes and applied pressure (on the membrane feed side) to preferentially induce water permeation through the membrane while rejecting salts. Reverse osmosis plant membrane systems typically use less energy than thermal desalination processes. Energy cost in desalination processes varies considerably depending on water salinity, plant size and process type. At present the cost of seawater desalination, for example, is higher than traditional water sources, but it is expected that costs will continue to decrease with technology improvements that include, but are not limited to, improved efficiency, reduction in plant footprint, improvements to plant operation and optimization, more effective feed pretreatment, and lower cost energy sources.
Reverse osmosis uses a thin-film composite membrane, which comprises an ultra-thin, aromatic polyamide thin-film. This polyamide film gives the membrane its transport properties, whereas the remainder of the thin-film composite membrane provides mechanical support. The polyamide film is a dense, void-free polymer with a high surface area, allowing for its high water permeability. A recent study has found that the water permeability is primarily governed by the internal nanoscale mass distribution of the polyamide active layer.
The reverse osmosis process requires maintenance. Various factors interfere with efficiency: ionic contamination (calcium, magnesium etc.); dissolved organic carbon (DOC); bacteria; viruses; colloids and insoluble particulates; biofouling and scaling. In extreme cases, the RO membranes are destroyed. To mitigate damage, various pretreatment stages are introduced. Anti-scaling inhibitors include acids and other agents such as the organic polymers polyacrylamide and polymaleic acid, phosphonates and polyphosphates. Inhibitors for fouling are biocides (as oxidants against bacteria and viruses), such as chlorine, ozone, sodium or calcium hypochlorite. At regular intervals, depending on the membrane contamination; fluctuating seawater conditions; or when prompted by monitoring processes, the membranes need to be cleaned, known as emergency or shock-flushing. Flushing is done with inhibitors in a fresh water solution and the system must go offline. This procedure is environmentally risky, since contaminated water is diverted into the ocean without treatment. Sensitive marine habitats can be irreversibly damaged.
Off-grid solar-powered desalination units use solar energy to fill a buffer tank on a hill with seawater. The reverse osmosis process receives its pressurized seawater feed in non-sunlight hours by gravity, resulting in sustainable drinking water production without the need for fossil fuels, an electricity grid or batteries. Nano-tubes are also used for the same function (i.e., Reverse Osmosis).
Forward osmosis
Forward osmosis uses a semi-permeable membrane to effect separation of water from dissolved solutes. The driving force for this separation is an osmotic pressure gradient, such as a "draw" solution of high concentration.
Freeze–thaw
Freeze–thaw desalination (or freezing desalination) uses freezing to remove fresh water from salt water. Salt water is sprayed during freezing conditions into a pad where an ice-pile builds up. When seasonal conditions warm, naturally desalinated melt water is recovered. This technique relies on extended periods of natural sub-freezing conditions.
A different freeze–thaw method, not weather dependent and invented by Alexander Zarchin, freezes seawater in a vacuum. Under vacuum conditions the ice, desalinated, is melted and diverted for collection and the salt is collected.
Electrodialysis
Electrodialysis uses electric potential to move the salts through pairs of charged membranes, which trap salt in alternating channels. Several variances of electrodialysis exist such as conventional electrodialysis, electrodialysis reversal.
Electrodialysis can simultaneously remove salt and carbonic acid from seawater. Preliminary estimates suggest that the cost of such carbon removal can be paid for in large part if not entirely from the sale of the desalinated water produced as a byproduct.
Microbial desalination
Microbial desalination cells are biological electrochemical systems that use electro-active bacteria to power desalination of water in situ, drawing on the natural anode and cathode gradient of the electro-active bacteria and thus creating an internal supercapacitor.
Wave-powered desalination
Wave powered desalination systems generally convert mechanical wave motion directly to hydraulic power for reverse osmosis. Such systems aim to maximize efficiency and reduce costs by avoiding conversion to electricity, minimizing excess pressurization above the osmotic pressure, and innovating on hydraulic and wave power components.
One such approach is desalinating using submerged buoys, a wave power approach used by CETO and Oneka. CETO's wave-powered desalination plants began operating on Garden Island in Western Australia in 2013 and in Perth in 2015, and Oneka has installations in Chile, Florida, California, and the Caribbean.
Wind-powered desalination
Wind energy can also be coupled to desalination. Similar to wave power, a direct conversion of mechanical energy to hydraulic power can reduce components and losses in powering reverse osmosis. Wind power has also been considered for coupling with thermal desalination technologies.
Other techniques
In April 2024, researchers from the Australian National University published experimental results of a novel technique for desalination. This technique, thermodiffusive desalination, passes saline water through a channel with a temperature gradient. Species migrate under this temperature gradient in a process known as thermodiffusion. The researchers then separated the water into fractions. After multiple passes through the channel, the researchers were able to achieve a NaCl concentration drop of 25,000 ppm with a recovery rate of 10% of the original water volume.
Design aspects
Energy consumption
The desalination process's energy consumption depends on the water's salinity. Brackish water desalination requires less energy than seawater desalination.
The energy intensity of seawater desalination has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20-30 kWh/m3 in 1970. This is similar to the energy consumption of other freshwater supplies transported over large distances, but much higher than local fresh water supplies that use 0.2 kWh/m3 or less.
A minimum energy consumption for seawater desalination of around 1 kWh/m3 has been determined, excluding prefiltering and intake/outfall pumping. Under 2 kWh/m3 has been achieved with reverse osmosis membrane technology, leaving limited scope for further energy reductions as the reverse osmosis energy consumption in the 1970s was 16 kWh/m3.
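The roughly 1 kWh/m3 floor quoted above is consistent with a simple thermodynamic estimate: at vanishing recovery, the minimum work per cubic meter of fresh water equals the seawater osmotic pressure. The sketch below idealizes seawater as fully dissociated 0.55 M NaCl (a common textbook approximation, not a figure from this article) and applies the van 't Hoff relation.

```python
R, T = 8.314, 298.0             # gas constant J/(mol*K), assumed temperature K
osmolarity = 2 * 0.55e3         # mol/m^3 of dissolved particles (assumed 0.55 M NaCl, fully dissociated)
pi_pascal = osmolarity * R * T  # osmotic pressure, ~2.7e6 Pa (about 27 bar)
print(pi_pascal / 3.6e6)        # ~0.76 kWh per m^3 of permeate at vanishing recovery
```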
Supplying all US domestic water by desalination would increase domestic energy consumption by around 10%, about the amount of energy used by domestic refrigerators. Domestic consumption is a relatively small fraction of the total water usage.
Note: "Electrical equivalent" refers to the amount of electrical energy that could be generated using a given quantity of thermal energy and an appropriate turbine generator. These calculations do not include the energy required to construct or refurbish items consumed.
Given the energy-intensive nature of desalination and the associated economic and environmental costs, desalination is generally considered a last resort after water conservation. But this is changing as prices continue to fall.
Cogeneration
Cogeneration is generating useful heat energy and electricity from a single process. Cogeneration can provide usable heat for desalination in an integrated, or "dual-purpose", facility where a power plant provides the energy for desalination. Alternatively, the facility's energy production may be dedicated to the production of potable water (a stand-alone facility), or excess energy may be produced and incorporated into the energy grid. Cogeneration takes various forms, and theoretically any form of energy production could be used. However, the majority of current and planned cogeneration desalination plants use either fossil fuels or nuclear power as their source of energy. Most plants are located in the Middle East or North Africa, which use their petroleum resources to offset limited water resources. The advantage of dual-purpose facilities is they can be more efficient in energy consumption, thus making desalination more viable.
The current trend in dual-purpose facilities is hybrid configurations, in which the permeate from reverse osmosis desalination is mixed with distillate from thermal desalination. Basically, two or more desalination processes are combined along with power production. Such facilities have been implemented in Saudi Arabia at Jeddah and Yanbu.
A typical supercarrier in the US military is capable of using nuclear power to desalinate of water per day.
Alternatives to desalination
Increased water conservation and efficiency remain the most cost-effective approaches in areas with a large potential to improve the efficiency of water use practices. Wastewater reclamation provides multiple benefits over desalination of saline water, although it typically uses desalination membranes. Urban runoff and storm water capture also provide benefits in treating, restoring and recharging groundwater.
A proposed alternative to desalination in the American Southwest is the commercial importation of bulk water from water-rich areas either by oil tankers converted to water carriers, or pipelines. The idea is politically unpopular in Canada, where governments imposed trade barriers to bulk water exports as a result of a North American Free Trade Agreement (NAFTA) claim.
The California Department of Water Resources and the California State Water Resources Control Board submitted a report to the state legislature recommending that urban water suppliers achieve an indoor water use efficiency standard of per capita per day by 2023, declining to per day by 2025, and by 2030 and beyond.
Costs
Factors that determine the costs for desalination include capacity and type of facility, location, feed water, labor, energy, financing, and concentrate disposal. Costs of desalinating sea water (infrastructure, energy, and maintenance) are generally higher than fresh water from rivers or groundwater, water recycling, and water conservation, but alternatives are only sometimes available. Desalination costs in 2013 ranged from US$0.45 to US$1.00/m3. More than half of the cost comes directly from energy costs, and since energy prices are very volatile, actual costs can vary substantially.
The cost of untreated fresh water in the developing world can reach US$5/cubic metre.
Since 1975, desalination technology has seen significant advancements, decreasing the average cost of producing one cubic meter of freshwater from seawater from $1.10 in 2000 to approximately $0.50 today. Improved desalination efficiency is a primary factor contributing to this reduction. Energy consumption remains a significant cost component, accounting for up to half the total cost of the desalination process.
Desalination can substantially increase energy intensity, particularly for regions with limited energy resources. For instance, in the island nation of Cyprus, desalination accounts for approximately 5% of the country's total power consumption.
The global desalination market was valued at $20 billion in 2023. With growing populations in arid coastal regions, this market is projected to double by 2032. In 2023, global desalination capacity reached 99 million cubic meters per day, a significant increase from 27 million cubic meters per day in 2003.
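The capacity figures quoted above imply a compound annual growth rate of roughly 7%; the one-line check below uses only those two numbers.

```python
print((99 / 27) ** (1 / 20) - 1)   # ~0.067, i.e. about 6.7% per year between 2003 and 2023
```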
Desalination stills control pressure, temperature and brine concentrations to optimize efficiency. Nuclear-powered desalination might be economical on a large scale.
In 2014, the Israeli facilities of Hadera, Palmahim, Ashkelon, and Sorek were desalinizing water for less than US$0.40 per cubic meter. As of 2006, Singapore was desalinating water for US$0.49 per cubic meter.
Environmental concerns
Intake
In the United States, cooling water intake structures are regulated by the Environmental Protection Agency (EPA). These structures can have the same impacts on the environment as desalination facility intakes. According to EPA, water intake structures cause adverse environmental impact by sucking fish and shellfish or their eggs into an industrial system. There, the organisms may be killed or injured by heat, physical stress, or chemicals. Larger organisms may be killed or injured when they become trapped against screens at the front of an intake structure. Alternative intake types that mitigate these impacts include beach wells, but they require more energy and higher costs.
The Kwinana Desalination Plant opened in the Australian city of Perth, in 2007. Water there and at Queensland's Gold Coast Desalination Plant and Sydney's Kurnell Desalination Plant is withdrawn at , which is slow enough to let fish escape. The plant provides nearly of clean water per day.
Outflow
Desalination processes produce large quantities of brine, possibly at above ambient temperature, and contain residues of pretreatment and cleaning chemicals, their reaction byproducts and heavy metals due to corrosion (especially in thermal-based plants). Chemical pretreatment and cleaning are a necessity in most desalination plants, which typically includes prevention of biofouling, scaling, foaming and corrosion in thermal plants, and of biofouling, suspended solids and scale deposits in membrane plants.
To limit the environmental impact of returning the brine to the ocean, it can be diluted with another stream of water entering the ocean, such as the outfall of a wastewater treatment or power plant. With medium to large power plant and desalination plants, the power plant's cooling water flow is likely to be several times larger than that of the desalination plant, reducing the salinity of the combination. Another method to dilute the brine is to mix it via a diffuser in a mixing zone. For example, once a pipeline containing the brine reaches the sea floor, it can split into many branches, each releasing brine gradually through small holes along its length. Mixing can be combined with power plant or wastewater plant dilution. Furthermore, zero liquid discharge systems can be adopted to treat brine before disposal.
Another possibility is making the desalination plant movable, thus avoiding a build-up of brine at a single location (as it keeps being produced by the desalination plant). Some such movable (ship-connected) desalination plants have been constructed.
Brine is denser than seawater and therefore sinks to the ocean bottom and can damage the ecosystem. Brine plumes have been observed to diminish over time to a diluted concentration, to the point where there was little to no effect on the surrounding environment. However, studies have shown the dilution can be misleading due to the depth at which it occurred. If the dilution was observed during the summer season, there is a possibility that a seasonal thermocline prevented the concentrated brine from sinking to the sea floor. In that case the sea floor ecosystem may be spared while the waters above it are disrupted instead. Brine dispersal from desalination plants has been observed to travel several kilometers, meaning that it has the potential to cause harm to ecosystems far away from the plants. Careful reintroduction with appropriate measures and environmental studies can minimize this problem.
Energy use
The energy demand for desalination in the Middle East, driven by severe water scarcity, is expected to double by 2030. Currently, this process primarily uses fossil fuels, comprising over 95% of its energy source. In 2023, desalination consumed nearly half of the residential sector's energy in the region.
Other issues
Due to the nature of the process, there is a need to place the plants on approximately 25 acres of land on or near the shoreline. In the case of a plant built inland, pipes have to be laid into the ground to allow for easy intake and outtake. However, once the pipes are laid into the ground, they may leak into and contaminate nearby aquifers. Aside from environmental risks, certain types of desalination plants can generate considerable noise.
Health aspects
Iodine deficiency
Desalination removes iodine from water and could increase the risk of iodine deficiency disorders. Israeli researchers claimed a possible link between seawater desalination and iodine deficiency, finding iodine deficits among adults exposed to iodine-poor water concurrently with an increasing proportion of their area's drinking water from seawater reverse osmosis (SWRO). They later found probable iodine deficiency disorders in a population reliant on desalinated seawater.
A possible link between heavy desalinated water use and national iodine deficiency was suggested by Israeli researchers. They found a high burden of iodine deficiency in the general population of Israel: 62% of school-age children and 85% of pregnant women fall below the WHO's adequacy range. They also pointed out the national reliance on iodine-depleted desalinated water, the absence of a universal salt iodization program and reports of increased use of thyroid medication in Israel as possible reasons that the population's iodine intake is low. In the year that the survey was conducted, the amount of water produced from the desalination plants constituted about 50% of the quantity of fresh water supplied for all needs and about 80% of the water supplied for domestic and industrial needs in Israel.
Experimental techniques
Other desalination techniques include:
Waste heat
Thermally-driven desalination technologies are frequently suggested for use with low-temperature waste heat sources, as the low temperatures are not useful for process heat needed in many industrial processes, but ideal for the lower temperatures needed for desalination. In fact, such pairing with waste heat can even improve the electrical process:
Diesel generators commonly provide electricity in remote areas. About 40–50% of the energy output is low-grade heat that leaves the engine via the exhaust. Connecting a thermal desalination technology such as membrane distillation system to the diesel engine exhaust repurposes this low-grade heat for desalination. The system actively cools the diesel generator, improving its efficiency and increasing its electricity output. This results in an energy-neutral desalination solution. An example plant was commissioned by Dutch company Aquaver in March 2014 for Gulhi, Maldives.
Low-temperature thermal
Originally stemming from ocean thermal energy conversion research, low-temperature thermal desalination (LTTD) takes advantage of water boiling at low pressure, even at ambient temperature. The system uses pumps to create a low-pressure, low-temperature environment in which water boils at a temperature gradient of between two volumes of water. Cool ocean water is supplied from depths of up to . This water is pumped through coils to condense the water vapor. The resulting condensate is purified water. LTTD may take advantage of the temperature gradient available at power plants, where large quantities of warm wastewater are discharged from the plant, reducing the energy input needed to create a temperature gradient.
Experiments were conducted in the US and Japan to test the approach. In Japan, a spray-flash evaporation system was tested by Saga University. In Hawaii, the National Energy Laboratory tested an open-cycle OTEC plant with fresh water and power production using a temperature difference of between surface water and water at a depth of around . LTTD was studied by India's National Institute of Ocean Technology (NIOT) in 2004. Their first LTTD plant opened in 2005 at Kavaratti in the Lakshadweep islands. The plant's capacity is /day, at a capital cost of INR 50 million (€922,000). The plant uses deep water at a temperature of . In 2007, NIOT opened an experimental, floating LTTD plant off the coast of Chennai, with a capacity of /day. A smaller plant was established in 2009 at the North Chennai Thermal Power Station to prove the LTTD application where power plant cooling water is available.
Thermoionic process
In October 2009, Saltworks Technologies announced a process that uses solar or other thermal heat to drive an ionic current that removes all sodium and chlorine ions from the water using ion-exchange membranes.
Evaporation and condensation for crops
The Seawater greenhouse uses natural evaporation and condensation processes inside a greenhouse powered by solar energy to grow crops in arid coastal land.
Ion concentration polarisation (ICP)
In 2022, using a technique that used multiple stages of ion concentration polarisation followed by a single stage of electrodialysis, researchers from MIT managed to create a filterless portable desalination unit, capable of removing both dissolved salts and suspended solids. Designed for use by non-experts in remote areas or natural disasters, as well as on military operations, the prototype is the size of a suitcase, measuring 42 × 33.5 × 19 cm and weighing 9.25 kg. The process is fully automated, notifying the user when the water is safe to drink, and can be controlled by a single button or smartphone app. As it does not require a high-pressure pump, the process is highly energy efficient, consuming only 20 watt-hours per liter of drinking water produced, making it capable of being powered by common portable solar panels. Using a filterless design at low pressures or replaceable filters significantly reduces maintenance requirements, while the device itself is self-cleaning. However, the device is limited to producing 0.33 liters of drinking water per minute. There are also concerns that fouling will impact the long-term reliability, especially in water with high turbidity. The researchers are working to increase the efficiency and production rate with the intent to commercialise the product in the future; however, a significant limitation is the reliance on expensive materials in the current design.
Other approaches
Adsorption-based desalination (AD) relies on the moisture absorption properties of certain materials such as Silica Gel.
Forward osmosis
One process was commercialized by Modern Water PLC using forward osmosis, with a number of plants reported to be in operation.
Hydrogel based desalination
The idea of the method is that when a hydrogel is put into contact with an aqueous salt solution, it swells, absorbing a solution whose ion composition differs from the original one. This solution can easily be squeezed out of the gel by means of a sieve or microfiltration membrane. Compressing the gel in a closed system leads to a change in salt concentration, whereas compressing it in an open system, while the gel is exchanging ions with the bulk, leads to a change in the number of ions. The sequence of compression and swelling under open- and closed-system conditions mimics the reversed Carnot cycle of a refrigerator. The only difference is that instead of heat this cycle transfers salt ions from a bulk of low salinity to a bulk of high salinity. Similarly to the Carnot cycle, this cycle is fully reversible, so it can in principle work with an ideal thermodynamic efficiency. Because the method does not use osmotic membranes it can compete with the reverse osmosis method. In addition, unlike reverse osmosis, the approach is not sensitive to the quality of the feed water and its seasonal changes, and allows the production of water of any desired concentration.
Small-scale solar
The United States, France and the United Arab Emirates are working to develop practical solar desalination. AquaDania's WaterStillar has been installed at Dahab, Egypt, and in Playa del Carmen, Mexico. In this approach, a solar thermal collector measuring two square metres can distill from 40 to 60 litres per day from any local water source – five times more than conventional stills. It eliminates the need for plastic PET bottles or energy-consuming water transport. In Central California, a startup company WaterFX is developing a solar-powered method of desalination that can enable the use of local water, including runoff water that can be treated and used again. Salty groundwater in the region would be treated to become freshwater, and in areas near the ocean, seawater could be treated.
Passarell
The Passarell process uses reduced atmospheric pressure rather than heat to drive evaporative desalination. The pure water vapor generated by distillation is then compressed and condensed using an advanced compressor. The compression process improves distillation efficiency by creating the reduced pressure in the evaporation chamber. The compressor centrifuges the pure water vapor after it is drawn through a demister (removing residual impurities) causing it to compress against tubes in the collection chamber. The compression of the vapor increases its temperature. The heat is transferred to the input water falling in the tubes, vaporizing the water in the tubes. Water vapor condenses on the outside of the tubes as product water. By combining several physical processes, Passarell enables most of the system's energy to be recycled through its evaporation, demisting, vapor compression, condensation, and water movement processes.
Geothermal
Geothermal energy can drive desalination. In most locations, geothermal desalination beats using scarce groundwater or surface water, environmentally and economically.
Nanotechnology
Nanotube membranes of higher permeability than current generation of membranes may lead to eventual reduction in the footprint of RO desalination plants. It has also been suggested that the use of such membranes will lead to reduction in the energy needed for desalination.
Hermetic, sulphonated nano-composite membranes have been shown to be capable of removing various contaminants to the parts-per-billion level, and have little or no susceptibility to high salt concentration levels.
Biomimesis
Biomimetic membranes are another approach.
Electrochemical
In 2008, Siemens Water Technologies announced technology that applied electric fields to desalinate one cubic meter of water while using only a purported 1.5 kWh of energy. If accurate, this process would consume one-half the energy of other processes. As of 2012 a demonstration plant was operating in Singapore. Researchers at the University of Texas at Austin and the University of Marburg are developing more efficient methods of electrochemically mediated seawater desalination.
Electrokinetic shocks
A process employing electrokinetic shock waves can be used to accomplish membraneless desalination at ambient temperature and pressure. In this process, anions and cations in salt water are exchanged for carbonate anions and calcium cations, respectively using electrokinetic shockwaves. Calcium and carbonate ions react to form calcium carbonate, which precipitates, leaving fresh water. The theoretical energy efficiency of this method is on par with electrodialysis and reverse osmosis.
Temperature swing solvent extraction
Temperature Swing Solvent Extraction (TSSE) uses a solvent instead of a membrane or high temperatures.
Solvent extraction is a common technique in chemical engineering. It can be activated by low-grade heat (less than ), which may not require active heating. In a study, TSSE removed up to 98.4 percent of the salt in brine. A solvent whose solubility varies with temperature is added to saltwater. At room temperature the solvent draws water molecules away from the salt. The water-laden solvent is then heated, causing the solvent to release the now salt-free water.
It can desalinate extremely salty brine up to seven times as salty as the ocean. For comparison, the current methods can only handle brine twice as salty.
Wave energy
A small-scale offshore system uses wave energy to desalinate 30–50 m3/day. The system operates with no external power, and is constructed of recycled plastic bottles.
Plants
Trade Arabia claims Saudi Arabia to be producing 7.9 million cubic meters of desalinated water daily, or 22% of world total as of 2021 yearend.
Perth began operating a reverse osmosis seawater desalination plant in 2006. The Perth desalination plant is powered partially by renewable energy from the Emu Downs Wind Farm.
A desalination plant now operates in Sydney, and the Wonthaggi desalination plant was under construction in Wonthaggi, Victoria. A wind farm at Bungendore in New South Wales was purpose-built to generate enough renewable energy to offset the Sydney plant's energy use, mitigating concerns about harmful greenhouse gas emissions.
A January 17, 2008, article in The Wall Street Journal stated, "In November, Connecticut-based Poseidon Resources Corp. won a key regulatory approval to build the $300 million water-desalination plant in Carlsbad, north of San Diego. The facility would produce 190,000 cubic metres of drinking water per day, enough to supply about 100,000 homes." As of June 2012, the cost for the desalinated water had risen to $2,329 per acre-foot. Each $1,000 per acre-foot works out to $3.06 for 1,000 gallons, or $0.81 per cubic meter.
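The unit conversions behind the quoted cost figures can be verified directly; the sketch below uses the standard definitions of the acre-foot (about 325,851 US gallons, or roughly 1,233.5 m3) and matches the per-1,000-gallon and per-cubic-meter figures above to within rounding.

```python
ACRE_FOOT_GALLONS = 325851.0   # US gallons in one acre-foot
ACRE_FOOT_M3 = 1233.48         # cubic meters in one acre-foot
price_per_acre_foot = 1000.0   # dollars

print(price_per_acre_foot / ACRE_FOOT_GALLONS * 1000)   # ~$3.07 per 1,000 gallons
print(price_per_acre_foot / ACRE_FOOT_M3)               # ~$0.81 per cubic meter
```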
As new technological innovations continue to reduce the capital cost of desalination, more countries are building desalination plants as a small element in addressing their water scarcity problems.
Israel desalinizes water for a cost of 53 cents per cubic meter
Singapore desalinizes water for 49 cents per cubic meter and also treats sewage with reverse osmosis for industrial and potable use (NEWater).
China and India, the world's two most populous countries, are turning to desalination to provide a small part of their water needs
In 2007 Pakistan announced plans to use desalination
All Australian capital cities (except Canberra, Darwin, Northern Territory and Hobart) are either in the process of building desalination plants, or are already using them. In late 2011, Melbourne was expected to begin using Australia's largest desalination plant, the Wonthaggi desalination plant, to raise low reservoir levels.
In 2007 Bermuda signed a contract to purchase a desalination plant
Before 2015, the largest desalination plant in the United States was at Tampa Bay, Florida, which began desalinizing 25 million gallons (95000 m3) of water per day in December 2007. In the United States, the cost of desalination is $3.06 for 1,000 gallons, or 81 cents per cubic meter. In the United States, California, Arizona, Texas, and Florida use desalination for a very small part of their water supply. Since 2015, the Claude "Bud" Lewis Carlsbad Desalination Plant has been producing 50 million gallons of drinking water daily.
After being desalinized at Jubail, Saudi Arabia, water is pumped inland through a pipeline to the capital city of Riyadh.
As of 2008, "World-wide, 13,080 desalination plants produce more than 12 billion gallons of water a day, according to the International Desalination Association." An estimate in 2009 found that the worldwide desalinated water supply will triple between 2008 and 2020.
One of the world's largest desalination hubs is the Jebel Ali Power Generation and Water Production Complex in the United Arab Emirates. It is a site featuring multiple plants using different desalination technologies and is capable of producing 2.2 million cubic meters of water per day.
A typical aircraft carrier in the U.S. military uses nuclear power to desalinize of water per day.
In nature
Evaporation of water over the oceans in the water cycle is a natural desalination process.
The formation of sea ice produces ice with little salt, much lower than in seawater.
Seabirds distill seawater using countercurrent exchange in a gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. As freshwater is not usually available in their environments, some seabirds, such as pelicans, petrels, albatrosses, gulls and terns, possess this gland, which allows them to drink the salty water from their environments while they are far from land.
Mangrove trees grow in seawater; they secrete salt by trapping it in parts of the root, which are then eaten by animals (usually crabs). Additional salt is removed by storing it in leaves that fall off. Some types of mangroves have glands on their leaves, which work in a similar way to the seabird desalination gland. Salt is extracted to the leaf exterior as small crystals, which then fall off the leaf.
Willow trees and reeds absorb salt and other contaminants, effectively desalinating the water. This is used in artificially constructed wetlands for treating sewage.
Society and culture
Despite the issues associated with desalination processes, public support for its development can be very high. In one survey of a Southern California community, 71.9% of respondents supported desalination plant development in their community. In many cases, high freshwater scarcity corresponds to higher public support for desalination development, whereas areas with low water scarcity tend to have less public support for its development.
See also
Metal–organic framework
Atmospheric water generator
Dewvaporation
Flexible barge
Peak water
Pumpable ice technology
Soil desalination model
Soil salinity
Soil salinity and groundwater model
References
External links
International Desalination Association
European Desalination Society
Working principles in desalination systems
Classification of Desalination Technologies (CDT)
SOLAR TOWER Project – Clean Electricity Generation for Desalination.
Desalination bibliography Library of Congress
Encyclopedia of Desalination and Water Resources
Environmental issues with water
Filters
Fresh water
Water supply
Water desalination
Water treatment | Desalination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 10,098 | [
"Hydrology",
"Water desalination",
"Water treatment",
"Chemical equipment",
"Fresh water",
"Filters",
"Water pollution",
"Filtration",
"Environmental engineering",
"Water technology",
"Water supply"
] |
156,940 | https://en.wikipedia.org/wiki/Electrophysiology | Electrophysiology (from Greek ēlektron, "amber" [see the etymology of "electron"]; physis, "nature, origin"; and -logia) is the branch of physiology that studies the electrical properties of biological cells and tissues. It involves measurements of voltage changes or electric current or manipulations on a wide variety of scales from single ion channel proteins to whole organs like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and, in particular, action potential activity. Recordings of large-scale electric signals from the nervous system, such as electroencephalography, may also be referred to as electrophysiological recordings. They are useful for electrodiagnosis and monitoring.
Definition and scope
Classical electrophysiological techniques
Principle and mechanisms
Electrophysiology is the branch of physiology that pertains broadly to the flow of ions (ion current) in biological tissues and, in particular, to the electrical recording techniques that enable the measurement of this flow. Classical electrophysiology techniques involve placing electrodes into various preparations of biological tissue. The principal types of electrodes are:
Simple solid conductors, such as discs and needles (singles or arrays, often insulated except for the tip),
Tracings on printed circuit boards or flexible polymers, also insulated except for the tip, and
Hollow, often elongated or 'pulled', tubes filled with an electrolyte, such as glass pipettes filled with potassium chloride solution or another electrolyte solution.
The principal preparations include:
living organisms (for example, in insects),
excised tissue (acute or cultured),
dissociated cells from excised tissue (acute or cultured),
artificially grown cells or tissues, or
hybrids of the above.
Neuronal electrophysiology is the study of electrical properties of biological cells and tissues within the nervous system. With neuronal electrophysiology, doctors and specialists can investigate how neuronal disorders arise by examining an individual's brain activity, such as which portions of the brain become active in particular situations.
If an electrode is small enough (micrometers) in diameter, then the electrophysiologist may choose to insert the tip into a single cell. Such a configuration allows direct observation and intracellular recording of the intracellular electrical activity of a single cell. However, this invasive setup reduces the life of the cell and causes a leak of substances across the cell membrane.
Intracellular activity may also be observed using a specially formed (hollow) glass pipette containing an electrolyte. In this technique, the microscopic pipette tip is pressed against the cell membrane, to which it tightly adheres by an interaction between glass and lipids of the cell membrane. The electrolyte within the pipette may be brought into fluid continuity with the cytoplasm by delivering a pulse of negative pressure to the pipette in order to rupture the small patch of membrane encircled by the pipette rim (whole-cell recording). Alternatively, ionic continuity may be established by "perforating" the patch by allowing exogenous pore-forming agents within the electrolyte to insert themselves into the membrane patch (perforated patch recording). Finally, the patch may be left intact (patch recording).
The electrophysiologist may choose not to insert the tip into a single cell. Instead, the electrode tip may be left in continuity with the extracellular space. If the tip is small enough, such a configuration may allow indirect observation and recording of action potentials from a single cell, termed single-unit recording. Depending on the preparation and precise placement, an extracellular configuration may pick up the activity of several nearby cells simultaneously, termed multi-unit recording.
As electrode size increases, the resolving power decreases. Larger electrodes are sensitive only to the net activity of many cells, termed local field potentials. Still larger electrodes, such as uninsulated needles and surface electrodes used by clinical and surgical neurophysiologists, are sensitive only to certain types of synchronous activity within populations of cells numbering in the millions.
Other classical electrophysiological techniques include single channel recording and amperometry.
Electrographic modalities by body part
Electrophysiological recording in general is sometimes called electrography (from electro- + -graphy, "electrical recording"), with the record thus produced being an electrogram. However, the word electrography has other senses (including electrophotography), and the specific types of electrophysiological recording are usually called by specific names, constructed on the pattern of electro- + [body part combining form] + -graphy (abbreviation ExG). Relatedly, the word electrogram (not being needed for those other senses) often carries the specific meaning of intracardiac electrogram, which is like an electrocardiogram but with some invasive leads (inside the heart) rather than only noninvasive leads (on the skin). Electrophysiological recording for clinical diagnostic purposes is included within the category of electrodiagnostic testing. The various "ExG" modes are as follows:
Optical electrophysiological techniques
Optical electrophysiological techniques were created by scientists and engineers to overcome one of the main limitations of classical techniques. Classical techniques allow observation of electrical activity at approximately a single point within a volume of tissue. Classical techniques singularize a distributed phenomenon. Interest in the spatial distribution of bioelectric activity prompted development of molecules capable of emitting light in response to their electrical or chemical environment. Examples are voltage sensitive dyes and fluorescing proteins.
After introducing one or more such compounds into tissue via perfusion, injection or gene expression, the 1 or 2-dimensional distribution of electrical activity may be observed and recorded.
Intracellular recording
Intracellular recording involves measuring voltage and/or current across the membrane of a cell. To make an intracellular recording, the tip of a fine (sharp) microelectrode must be inserted inside the cell, so that the membrane potential can be measured. Typically, the resting membrane potential of a healthy cell will be -60 to -80 mV, and during an action potential the membrane potential might reach +40 mV.
In 1963, Alan Lloyd Hodgkin and Andrew Fielding Huxley won the Nobel Prize in Physiology or Medicine for their contribution to understanding the mechanisms underlying the generation of action potentials in neurons. Their experiments involved intracellular recordings from the giant axon of Atlantic squid (Loligo pealei), and were among the first applications of the "voltage clamp" technique.
Today, most microelectrodes used for intracellular recording are glass micropipettes, with a tip diameter of < 1 micrometre, and a resistance of several megohms. The micropipettes are filled with a solution that has a similar ionic composition to the intracellular fluid of the cell. A chlorided silver wire inserted into the pipette connects the electrolyte electrically to the amplifier and signal processing circuit. The voltage measured by the electrode is compared to the voltage of a reference electrode, usually a silver chloride-coated silver wire in contact with the extracellular fluid around the cell. In general, the smaller the electrode tip, the higher its electrical resistance. So an electrode is a compromise between size (small enough to penetrate a single cell with minimum damage to the cell) and resistance (low enough so that small neuronal signals can be discerned from thermal noise in the electrode tip).
Maintaining healthy brain slices is pivotal for successful electrophysiological recordings. The preparation of these slices is commonly achieved with tools such as the Compresstome vibratome, ensuring optimal conditions for accurate and reliable recordings. Nevertheless, even with the highest standards of tissue handling, slice preparation induces rapid and robust phenotype changes of the brain's major immune cells, microglia, which must be taken into consideration when using this model.
Voltage clamp
The voltage clamp technique allows an experimenter to "clamp" the cell potential at a chosen value. This makes it possible to measure how much ionic current crosses a cell's membrane at any given voltage. This is important because many of the ion channels in the membrane of a neuron are voltage-gated ion channels, which open only when the membrane voltage is within a certain range. Voltage clamp measurements of current are made possible by the near-simultaneous digital subtraction of transient capacitive currents that pass as the recording electrode and cell membrane are charged to alter the cell's potential.
Current clamp
The current clamp technique records the membrane potential by injecting current into a cell through the recording electrode. Unlike in the voltage clamp mode, where the membrane potential is held at a level determined by the experimenter, in "current clamp" mode the membrane potential is free to vary, and the amplifier records whatever voltage the cell generates on its own or as a result of stimulation. This technique is used to study how a cell responds when electric current enters a cell; this is important for instance for understanding how neurons respond to neurotransmitters that act by opening membrane ion channels.
Most current-clamp amplifiers provide little or no amplification of the voltage changes recorded from the cell. The "amplifier" is actually an electrometer, sometimes referred to as a "unity gain amplifier"; its main purpose is to reduce the electrical load on the small signals (in the mV range) produced by cells so that they can be accurately recorded by low-impedance electronics. The amplifier increases the current behind the signal while decreasing the resistance over which that current passes. Consider this example based on Ohm's law: A voltage of 10 mV is generated by passing 10 nanoamperes of current across 1 MΩ of resistance. The electrometer changes this "high impedance signal" to a "low impedance signal" by using a voltage follower circuit. A voltage follower reads the voltage on the input (caused by a small current through a big resistor). It then instructs a parallel circuit that has a large current source behind it (the electrical mains) and adjusts the resistance of that parallel circuit to give the same output voltage, but across a lower resistance.
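The arithmetic in that Ohm's law example can be checked directly. A minimal Python sketch (the numbers are the ones used in the paragraph, nothing more):

# Ohm's law check: 10 nA flowing across 1 MOhm produces 10 mV.
current_a = 10e-9        # 10 nanoamperes
resistance_ohm = 1e6     # 1 megaohm
voltage_v = current_a * resistance_ohm
print(voltage_v * 1e3, "mV")  # 0.01 V, i.e. 10 mV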
Patch-clamp recording
This technique was developed by Erwin Neher and Bert Sakmann who received the Nobel Prize in 1991. Conventional intracellular recording involves impaling a cell with a fine electrode; patch-clamp recording takes a different approach. A patch-clamp microelectrode is a micropipette with a relatively large tip diameter. The microelectrode is placed next to a cell, and gentle suction is applied through the microelectrode to draw a piece of the cell membrane (the 'patch') into the microelectrode tip; the glass tip forms a high resistance 'seal' with the cell membrane. This configuration is the "cell-attached" mode, and it can be used for studying the activity of the ion channels that are present in the patch of membrane.
If more suction is now applied, the small patch of membrane in the electrode tip can be displaced, leaving the electrode sealed to the rest of the cell. This "whole-cell" mode allows very stable intracellular recording. A disadvantage (compared to conventional intracellular recording with sharp electrodes) is that the intracellular fluid of the cell mixes with the solution inside the recording electrode, and so some important components of the intracellular fluid can be diluted. A variant of this technique, the "perforated patch" technique, tries to minimize these problems.
Instead of applying suction to displace the membrane patch from the electrode tip, it is also possible to make small holes on the patch with pore-forming agents so that large molecules such as proteins can stay inside the cell and ions can pass through the holes freely. Also the patch of membrane can be pulled away from the rest of the cell. This approach enables the membrane properties of the patch to be analyzed pharmacologically. Patch-clamp may also be combined with RNA sequencing in a technique known as patch-seq by extracting the cellular contents following recording in order to characterize the electrophysiological properties relationship to gene expression and cell-type.
Sharp electrode recording
In situations where one wants to record the potential inside the cell membrane with minimal effect on the ionic constitution of the intracellular fluid, a sharp electrode can be used. These micropipettes (electrodes), like those used for patch clamping, are pulled from glass capillaries, but the pore is much smaller so that there is very little ion exchange between the intracellular fluid and the electrolyte in the pipette. The electrical resistance of the micropipette electrode is reduced by filling it with 2–4 M KCl, rather than a salt concentration which mimics the intracellular ionic concentrations as used in patch clamping. Often the tip of the electrode is filled with various kinds of dyes, such as Lucifer yellow, to fill the cells recorded from, for later confirmation of their morphology under a microscope. The dyes are injected by applying a positive or negative, DC or pulsed, voltage to the electrodes depending on the polarity of the dye.
Extracellular recording
Single-unit recording
An electrode introduced into the brain of a living animal will detect electrical activity that is generated by the neurons adjacent to the electrode tip. If the electrode is a microelectrode, with a tip size of about 1 micrometre, the electrode will usually detect the activity of at most one neuron. Recording in this way is in general called "single-unit" recording. The action potentials recorded are very much like the action potentials that are recorded intracellularly, but the signals are very much smaller (typically about 1 mV). Most recordings of the activity of single neurons in anesthetized and conscious animals are made in this way. Recordings of single neurons in living animals have provided important insights into how the brain processes information. For example, David Hubel and Torsten Wiesel recorded the activity of single neurons in the primary visual cortex of the anesthetized cat, and showed how single neurons in this area respond to very specific features of a visual stimulus. Hubel and Wiesel were awarded the Nobel Prize in Physiology or Medicine in 1981.
To prepare the brain for such electrode insertion, delicate slicing devices such as the Compresstome vibratome, Leica vibratome, or microtome are often employed. These instruments aid in obtaining precise, thin brain sections necessary for electrode placement, enabling neuroscientists to target specific brain regions for recording.
Multi-unit recording
If the electrode tip is slightly larger, then the electrode might record the activity generated by several neurons. This type of recording is often called "multi-unit recording", and is often used in conscious animals to record changes in the activity in a discrete brain area during normal activity. Recordings from one or more such electrodes that are closely spaced can be used to identify the number of cells around it as well as which of the spikes come from which cell. This process is called spike sorting and is suitable in areas where there are identified types of cells with well defined spike characteristics.
If the electrode tip is bigger still, in general the activity of individual neurons cannot be distinguished but the electrode will still be able to record a field potential generated by the activity of many cells.
Field potentials
Extracellular field potentials are local current sinks or sources that are generated by the collective activity of many cells. Usually, a field potential is generated by the simultaneous activation of many neurons by synaptic transmission. In hippocampal synaptic field potentials, for example, one trace shows a negative wave that corresponds to a current sink caused by positive charges entering cells through postsynaptic glutamate receptors, while another trace shows a positive wave that is generated by the current that leaves the cell (at the cell body) to complete the circuit. For more information, see local field potential.
Amperometry
Amperometry uses a carbon electrode to record changes in the chemical composition of the oxidized components of a biological solution. Oxidation and reduction is accomplished by changing the voltage at the active surface of the recording electrode in a process known as "scanning". Because certain brain chemicals lose or gain electrons at characteristic voltages, individual species can be identified. Amperometry has been used for studying exocytosis in the nervous and endocrine systems. Many monoamine neurotransmitters; e.g., norepinephrine (noradrenalin), dopamine, and serotonin (5-HT) are oxidizable. The method can also be used with cells that do not secrete oxidizable neurotransmitters by "loading" them with 5-HT or dopamine.
Planar patch clamp
Planar patch clamp is a novel method developed for high throughput electrophysiology. Instead of positioning a pipette on an adherent cell, cell suspension is pipetted on a chip containing a microstructured aperture.
A single cell is then positioned on the hole by suction and a tight connection (Gigaseal) is formed.
The planar geometry offers a variety of advantages compared to the classical experiment:
It allows for integration of microfluidics, which enables automatic compound application for ion channel screening.
The system is accessible for optical or scanning probe techniques.
Perfusion of the intracellular side can be performed.
Other methods
Solid-supported membrane (SSM)-based
With this electrophysiological approach, proteoliposomes, membrane vesicles, or membrane fragments containing the channel or transporter of interest are adsorbed to a lipid monolayer painted over a functionalized electrode. This electrode consists of a glass support, a chromium layer, a gold layer, and an octadecyl mercaptane monolayer. Because the painted membrane is supported by the electrode, it is called a solid-supported membrane. Mechanical perturbations, which usually destroy a biological lipid membrane, do not influence the life-time of an SSM. The capacitive electrode (composed of the SSM and the absorbed vesicles) is so mechanically stable that solutions may be rapidly exchanged at its surface. This property allows the application of rapid substrate/ligand concentration jumps to investigate the electrogenic activity of the protein of interest, measured via capacitive coupling between the vesicles and the electrode.
Bioelectric recognition assay (BERA)
The bioelectric recognition assay (BERA) is a novel method for determination of various chemical and biological molecules by measuring changes in the membrane potential of cells immobilized in a gel matrix. Apart from the increased stability of the electrode-cell interface, immobilization preserves the viability and physiological functions of the cells. BERA is used primarily in biosensor applications in order to assay analytes that can interact with the immobilized cells by changing the cell membrane potential. In this way, when a positive sample is added to the sensor, a characteristic, "signature-like" change in electrical potential occurs. BERA is the core technology behind the recently launched pan-European FOODSCAN project, about pesticide and food risk assessment in Europe. BERA has been used for the detection of human viruses (hepatitis B and C viruses and herpes viruses), veterinary disease agents (foot and mouth disease virus, prions, and blue tongue virus), and plant viruses (tobacco and cucumber viruses) in a specific, rapid (1–2 minutes), reproducible, and cost-efficient fashion. The method has also been used for the detection of environmental toxins, such as pesticides and mycotoxins in food, and 2,4,6-trichloroanisole in cork and wine, as well as the determination of very low concentrations of the superoxide anion in clinical samples.
A BERA sensor has two parts:
The consumable biorecognition elements
The electronic read-out device with embedded artificial intelligence.
A recent advance is the development of a technique called molecular identification through membrane engineering (MIME). This technique allows for building cells with defined specificity for virtually any molecule of interest, by embedding thousands of artificial receptors into the cell membrane.
Computational electrophysiology
While not strictly constituting an experimental measurement, methods have been developed to examine the conductive properties of proteins and biomembranes in silico. These are mainly molecular dynamics simulations in which a model system like a lipid bilayer is subjected to an externally applied voltage. Studies using these setups have been able to study dynamical phenomena like electroporation of membranes and ion translocation by channels.
The benefit of such methods is the high level of detail of the active conduction mechanism, given by the inherently high resolution and data density that atomistic simulation affords. There are significant drawbacks, given by the uncertainty of the legitimacy of the model and the computational cost of modeling systems that are large enough and over sufficient timescales to be considered reproducing the macroscopic properties of the systems themselves. While atomistic simulations may access timescales close to, or into the microsecond domain, this is still several orders of magnitude lower than even the resolution of experimental methods such as patch-clamping.
Clinical electrophysiology
Clinical electrophysiology is the study of how electrophysiological principles and technologies can be applied to human health. For example, clinical cardiac electrophysiology is the study of the electrical properties which govern heart rhythm and activity. Cardiac electrophysiology can be used to observe and treat disorders such as arrhythmia (irregular heartbeat). For example, a doctor may insert a catheter containing an electrode into the heart to record the heart muscle's electrical activity.
Another example of clinical electrophysiology is clinical neurophysiology. In this medical specialty, doctors measure the electrical properties of the brain, spinal cord, and nerves. Scientists such as Duchenne de Boulogne (1806–1875) and Nathaniel A. Buchwald (1924–2006) are considered to have greatly advanced the field of neurophysiology, enabling its clinical applications.
Clinical reporting guidelines
Minimum Information (MI) standards or reporting guidelines specify the minimum amount of meta data (information) and data required to meet a specific aim or aims in a clinical study. The "Minimum Information about a Neuroscience investigation" (MINI) family of reporting guideline documents aims to provide a consistent set of guidelines in order to report an electrophysiology experiment. In practice a MINI module comprises a checklist of information that should be provided (for example about the protocols employed) when a data set is described for publication.
See also
References
External links
Book chapter on Planar Patch Clamp
Ion channels
Neuroimaging
Neurophysiology
Biophysics | Electrophysiology | [
"Physics",
"Chemistry",
"Biology"
] | 4,734 | [
"Neurochemistry",
"Applied and interdisciplinary physics",
"Biophysics",
"Ion channels"
] |
156,962 | https://en.wikipedia.org/wiki/X-ray%20scattering%20techniques | X-ray scattering techniques are a family of non-destructive analytical techniques which reveal information about the crystal structure, chemical composition, and physical properties of materials and thin films. These techniques are based on observing the scattered intensity of an X-ray beam hitting a sample as a function of incident and scattered angle, polarization, and wavelength or energy.
Note that X-ray diffraction is sometimes considered a sub-set of X-ray scattering, where the scattering is elastic and the scattering object is crystalline, so that the resulting pattern contains sharp spots analyzed by X-ray crystallography (as in the Figure). However, both scattering and diffraction are related general phenomena and the distinction has not always existed. Thus Guinier's classic text from 1963 is titled "X-ray diffraction in Crystals, Imperfect Crystals and Amorphous Bodies" so 'diffraction' was clearly not restricted to crystals at that time.
Scattering techniques
Elastic scattering
X-ray diffraction, sometimes called Wide-angle X-ray diffraction (WAXD)
Small-angle X-ray scattering (SAXS) probes structure in the nanometer to micrometer range by measuring scattering intensity at scattering angles 2θ close to 0°.
X-ray reflectivity is an analytical technique for determining thickness, roughness, and density of single layer and multilayer thin films.
Wide-angle X-ray scattering (WAXS), a technique concentrating on scattering angles 2θ larger than 5°.
Inelastic X-ray scattering (IXS)
In IXS the energy and angle of inelastically scattered X-rays are monitored, giving the dynamic structure factor. From this, many properties of materials can be obtained, the specific property depending on the scale of the energy transfer. Inelastically scattered X-rays have intermediate phases and so in principle are not useful for X-ray crystallography. In practice, X-rays with small energy transfers are included with the diffraction spots due to elastic scattering, and X-rays with large energy transfers contribute to the background noise in the diffraction pattern.
See also
Anomalous scattering
Anomalous X-ray scattering
Backscatter
Materials science
Metallurgy
Mineralogy
Rachinger correction
Structure determination
Ultrafast x-ray
X-rays
X-ray generator
References
External links
Learning Crystallography
International Union of Crystallography
IUCr Crystallography Online
The International Centre for Diffraction Data (ICDD)
The British Crystallographic Association
Introduction to X-ray Diffraction at University of California, Santa Barbara
Laboratory techniques in condensed matter physics
X-ray crystallography
Materials science
X-ray scattering | X-ray scattering techniques | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 555 | [
"Applied and interdisciplinary physics",
"X-ray scattering",
"Materials science",
"Laboratory techniques in condensed matter physics",
"Crystallography",
"Scattering",
"Condensed matter physics",
"nan",
"X-ray crystallography"
] |
157,700 | https://en.wikipedia.org/wiki/Moment%20of%20inertia | The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relative to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis. It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass & distance from the axis.
It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis.
For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other.
In mechanical engineering, simply "inertia" is often used to refer to "inertial mass" or "moment of inertia".
Introduction
When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units and pound-foot-second squared (lbf·ft·s²) in imperial or US units.
The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by I = mr², where r is the distance of the point from the axis and m is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis of rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object.
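A minimal Python sketch of that summation for a rigid assembly of point masses (the masses and positions below are made-up illustration values, not taken from the article):

# Moment of inertia of point masses about the z-axis: I = sum of m_i * r_i**2,
# where r_i is the perpendicular distance of mass i from the axis.
masses = [2.0, 1.0, 0.5]                           # kg (illustrative)
positions = [(0.1, 0.0), (0.0, 0.3), (0.2, 0.2)]   # (x, y) in metres; axis = z through the origin

I = sum(m * (x**2 + y**2) for m, (x, y) in zip(masses, positions))
print(I)  # kg*m^2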
In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law.
The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body.
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor.
The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw.
Definition
The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section.
The moment of inertia is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is I = L / ω.
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster.
If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is τ = I α.
For a simple pendulum, this definition yields a formula for the moment of inertia in terms of the mass m of the pendulum and its distance r from the pivot point as I = m r².
Thus, the moment of inertia of the pendulum depends on both the mass of a body and its geometry, or shape, as defined by the distance to the axis of rotation.
This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses, each multiplied by the square of its perpendicular distance r to the axis. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass.
In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is
I = m k²,
where k is known as the radius of gyration around the axis.
Examples
Simple pendulum
Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass m of the particle with the square of its distance r to the pivot, that is I = m r².
This can be shown as follows:
The force of gravity on the mass of a simple pendulum generates a torque τ = r × F around the axis perpendicular to the plane of the pendulum movement. Here r is the distance vector from the torque axis to the pendulum center of mass, and F is the net force on the mass. Associated with this torque is an angular acceleration, α, of the string and mass around this axis. Since the mass is constrained to a circle, the tangential acceleration of the mass is a = α × r. Since F = ma, the torque equation becomes:
τ = r × F = m r × (α × r) = m r² α = I α,
where the angular acceleration vector is directed along the unit vector k̂ perpendicular to the plane of the pendulum. (The second-to-last step uses the vector triple product expansion with the perpendicularity of α and r.) The quantity I = mr² is the moment of inertia of this single mass around the pivot point.
The quantity I = mr² also appears in the angular momentum of a simple pendulum, which is calculated from the velocity v = ω × r of the pendulum mass around the pivot, where ω is the angular velocity of the mass about the pivot point. This angular momentum is given by
L = r × (m v) = m r² ω = I ω,
using a similar derivation to the previous equation.
Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield
E_K = ½ m v² = ½ (m r²) ω² = ½ I ω².
This shows that the quantity I = mr² is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values mr² for all of the elements of mass in the body.
Compound pendulums
A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency (ω_n) of a compound pendulum depends on its moment of inertia about the pivot, I_P:
ω_n = √(m g r / I_P),
where m is the mass of the object, g is the local acceleration of gravity, and r is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body.
Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point P so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation (t), to obtain
I_P = m g r / ω_n² = m g r t² / (4π²),
where t is the period (duration) of oscillation (usually averaged over multiple periods).
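A minimal Python sketch of that measurement formula (the mass, pivot distance, and period below are made-up illustration values):

import math

# Moment of inertia about the pivot from the measured oscillation period of a compound pendulum.
def inertia_from_period(mass_kg, r_m, period_s, g=9.81):
    # I_P = m * g * r * t**2 / (4 * pi**2), valid for small-angle oscillations
    return mass_kg * g * r_m * period_s**2 / (4.0 * math.pi**2)

print(inertia_from_period(mass_kg=1.2, r_m=0.25, period_s=1.1))  # kg*m^2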
Center of oscillation
A simple pendulum that has the same natural frequency as a compound pendulum defines the length L from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length L is determined from the formula
ω_n = √(g / L) = √(m g r / I_P),
or
L = g / ω_n² = I_P / (m r).
The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of π rad/s (0.5 Hz) for the pendulum. In this case, the distance to the center of oscillation, L, can be computed from L = g/ω_n² to be about 0.99 m.
Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter.
Measuring moment of inertia
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system.
Moment of inertia of area
Moment of inertia of area is also known as the second moment of area and its physical meaning is completely different from the mass moment of inertia.
These calculations are commonly used in civil engineering for structural design of beams and columns. The cross-sectional second moments are calculated about the x-axis (Ix) and the y-axis (Iy).
Height (h) and breadth (b) are the linear measures, except for circles, for which the radius r (half the breadth) is used.
The sectional area moments about the centroid are calculated as follows (a numerical sketch is given after this list):
Square: Ix = Iy = b⁴/12
Rectangular: Ix = bh³/12 and Iy = hb³/12
Triangular: Ix = bh³/36
Circular: Ix = Iy = πr⁴/4
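A minimal Python sketch of those standard centroidal formulas (the dimensions below are made-up illustration values):

import math

# Second moments of area about the horizontal centroidal axis (units: length**4).
def rectangle_Ix(b, h):
    return b * h**3 / 12.0

def triangle_Ix(b, h):
    return b * h**3 / 36.0

def circle_Ix(r):
    return math.pi * r**4 / 4.0

print(rectangle_Ix(b=0.1, h=0.2))  # rectangular cross-section, metres
print(triangle_Ix(b=0.1, h=0.2))
print(circle_Ix(r=0.1))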
Motion in a fixed plane
Point mass
The moment of inertia about an axis of a body is calculated by summing mr² for every particle in the body, where r is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes, provided that it is understood that this does not fully describe the moment of inertia.)
Consider the kinetic energy of an assembly of N masses m_i that lie at the distances r_i from the pivot point P, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,
E_K = Σ ½ m_i v_i² = Σ ½ m_i (ω r_i)² = ½ ω² Σ m_i r_i².
This shows that the moment of inertia of the body is the sum of each of the mr² terms, that is
I_P = Σ m_i r_i².
Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia.
The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed and the sum is replaced by an integral:
I = ∫_Q ρ(r) ‖r‖² dV.
Here, the function ρ gives the mass density at each point r, r is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point in the solid, and the integration is evaluated over the volume V of the body Q. The moment of inertia of a flat surface is similar, with the mass density being replaced by its areal mass density and the integral evaluated over its area.
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the z-axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the x- and y-axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the x-axis or the y-axis depending on the load.
Examples
The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass.
The moment of inertia of a thin rod with constant cross-section and density and with length L about a perpendicular axis through its center of mass is determined by integration. Align the x-axis with the rod and locate the origin at its center of mass at the center of the rod; then
I_rod = (1/12) m L²,
where m is the mass of the rod.
The moment of inertia of a thin disc of constant thickness, radius R, and density about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration. Align the z-axis with the axis of the disc and integrate over thin rings; then
I_disc = (1/2) M R²,
where M is its mass.
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point P as
I_P = (1/3) m L² + ((1/2) M R² + M L²),
where L is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum.
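A minimal Python sketch of that assembly calculation, applying the parallel axis theorem to the rod and the disc (the masses and dimensions below are made-up illustration values):

# Compound pendulum: thin rod pivoted at one end, thin disc mounted at the other end.
def pendulum_inertia(m_rod, L, m_disc, R):
    I_rod = m_rod * L**2 / 12.0 + m_rod * (L / 2.0)**2   # rod about the pivot (parallel axis)
    I_disc = 0.5 * m_disc * R**2 + m_disc * L**2          # disc about the pivot (parallel axis)
    return I_rod + I_disc

print(pendulum_inertia(m_rod=0.5, L=1.0, m_disc=2.0, R=0.1))  # kg*m^2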
A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly.
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation
x² + y² + z² = R²,
then the square of the radius of the disc at the cross-section z along the z-axis is
r(z)² = x² + y² = R² − z².
Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the z-axis,
I = ∫ from −R to R of (1/2) π ρ r(z)⁴ dz = (8/15) π ρ R⁵ = (2/5) m R²,
where m = (4/3) π ρ R³ is the mass of the sphere.
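A minimal Python sketch that carries out the disc summation numerically and compares it with the closed form (2/5) m R² (the density and radius below are made-up illustration values):

import math

# Sum thin-disc contributions dI = (1/2) * pi * rho * r(z)**4 * dz along the z-axis.
rho, R, N = 1000.0, 0.3, 100_000           # density (kg/m^3), sphere radius (m), number of slices
dz = 2.0 * R / N
I_num = 0.0
for i in range(N):
    z = -R + (i + 0.5) * dz
    r2 = R**2 - z**2                        # squared radius of the disc at height z
    I_num += 0.5 * math.pi * rho * r2**2 * dz

m = 4.0 / 3.0 * math.pi * rho * R**3
print(I_num, 0.4 * m * R**2)                # the two values agree closely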
Rigid body
If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles.
If a system of particles, , are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point , and absolute velocities :
where ω is the angular velocity of the system and V_R is the velocity of the reference point R.
For planar movement the angular velocity vector is directed along the unit vector which is perpendicular to the plane of movement. Introduce the unit vectors from the reference point to a point , and the unit vector , so
This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane.
Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement.
Angular momentum
The angular momentum vector for the planar movement of a rigid system of particles is given by
Use the center of mass as the reference point so
and define the moment of inertia relative to the center of mass as
I_C = Σ m_i Δr_i²,
where Δr_i is the distance of particle i from the center of mass; then the equation for angular momentum simplifies to
L = I_C ω k̂.
The moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole).
For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular velocity achieved by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body.
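A minimal numerical illustration of that conservation argument in Python (the two moment-of-inertia values are made-up illustration numbers, not measured data):

# With angular momentum L = I * omega held constant, a smaller I gives a larger omega.
I_arms_out, omega_out = 3.0, 2.0       # kg*m^2, rad/s (illustrative)
L = I_arms_out * omega_out             # conserved angular momentum

I_arms_in = 1.2                        # kg*m^2 after pulling the arms in
omega_in = L / I_arms_in
print(omega_in)                        # 5.0 rad/s, a faster spin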
Kinetic energy
The kinetic energy of a rigid system of particles moving in the plane is given by
Let the reference point be the center of mass C of the system so the second term becomes zero, and introduce the moment of inertia I_C so the kinetic energy is given by
E_K = ½ M V_C² + ½ I_C ω²,
where M is the total mass and V_C is the speed of the center of mass.
The moment of inertia I_C is the polar moment of inertia of the body.
Newton's laws
Newton's laws for a rigid system of particles, , can be written in terms of a resultant force and torque at a reference point , to yield
where denotes the trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference particle as well as the angular velocity vector and angular acceleration vector of the rigid system of particles as,
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors from the reference point to a point and the unit vectors , so
This yields the resultant torque on the system as
where , and is the unit vector perpendicular to the plane for all of the particles .
Use the center of mass C as the reference point and define the moment of inertia relative to the center of mass I_C; then the equation for the resultant torque simplifies to
τ = I_C α k̂.
Motion in space of a rigid body, and the inertia matrix
The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles.
Let the system of n particles, P_i, i = 1, ..., n, be located at the coordinates r_i with velocities v_i relative to a fixed reference frame. For a (possibly moving) reference point R, the relative positions are
Δr_i = r_i − R
and the (absolute) velocities are
v_i = ω × Δr_i + V_R,
where ω is the angular velocity of the system, and V_R is the velocity of R.
Angular momentum
Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, [b], constructed from the components of b = (b_x, b_y, b_z):
[b] =
[  0   −b_z   b_y ]
[  b_z   0   −b_x ]
[ −b_y   b_x   0  ],
so that b × y = [b] y.
The inertia matrix is constructed by considering the angular momentum, with the reference point of the body chosen to be the center of mass :
where the terms containing () sum to zero by the definition of center of mass.
Then, the skew-symmetric matrix [Δr_i] obtained from the relative position vector Δr_i = r_i − C can be used to define
L = I_C ω,
where I_C, defined by
I_C = −Σ m_i [Δr_i]²,
is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass C.
Kinetic energy
The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of particles be located at the coordinates with velocities , then the kinetic energy is
where is the position vector of a particle relative to the center of mass.
This equation expands to yield three terms
Since the center of mass is defined by
, the second term in this equation is zero. Introduce the skew-symmetric matrix so the kinetic energy becomes
Thus, the kinetic energy of the rigid system of particles is given by
E_K = ½ M ‖V_C‖² + ½ ω · (I_C ω),
where I_C is the inertia matrix relative to the center of mass and M is the total mass.
Resultant torque
The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is,
where is the acceleration of the particle . The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference point, as well as the angular velocity vector and angular acceleration vector of the rigid system as,
Use the center of mass as the reference point, and introduce the skew-symmetric matrix to represent the cross product , to obtain
The calculation uses the identity
obtained from the Jacobi identity for the triple cross product as shown in the proof below:
Thus, the resultant torque on the rigid system of particles is given by
where I_C is the inertia matrix relative to the center of mass.
Parallel axis theorem
The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass and the inertia matrix relative to another point . This relationship is called the parallel axis theorem.
Consider the inertia matrix obtained for a rigid system of particles measured relative to a reference point , given by
Let be the center of mass of the rigid system, then
where d is the vector from the center of mass C to the reference point R. Use this equation to compute the inertia matrix,
Distribute over the cross product to obtain
The first term is the inertia matrix relative to the center of mass. The second and third terms are zero by definition of the center of mass . And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix constructed from .
The result is the parallel axis theorem,
I_R = I_C − M [d]²,
where d is the vector from the center of mass C to the reference point R.
Note on the minus sign: By using the skew-symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form −m[r]², which is similar to the mr² that appears in planar movement. However, to make this work out correctly a minus sign is needed. This minus sign can be absorbed into the term m[r]ᵀ[r], if desired, by using the skew-symmetry property of [r].
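A minimal NumPy sketch of that matrix form of the theorem, built from the skew-symmetric matrices [r] (the masses, positions, and reference point below are made-up illustration values):

import numpy as np

def skew(v):
    # Skew-symmetric matrix [v] such that [v] @ w = v x w.
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def inertia_matrix(masses, points, ref):
    # I_R = -sum_i m_i [r_i - ref]^2 for a rigid assembly of point masses.
    I = np.zeros((3, 3))
    for m, p in zip(masses, points):
        S = skew(np.asarray(p, float) - ref)
        I -= m * S @ S
    return I

masses = [1.0, 2.0, 1.5]
points = [np.array([0.1, 0.2, 0.0]), np.array([-0.3, 0.1, 0.2]), np.array([0.0, -0.2, 0.4])]
com = sum(m * p for m, p in zip(masses, points)) / sum(masses)

R = np.array([0.5, -0.1, 0.3])                   # some other reference point
d = R - com                                      # displacement between center of mass and R
I_C = inertia_matrix(masses, points, com)
I_R_direct = inertia_matrix(masses, points, R)
I_R_parallel = I_C - sum(masses) * skew(d) @ skew(d)
print(np.allclose(I_R_direct, I_R_parallel))     # True: the two computations agree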
Scalar moment of inertia in a plane
The scalar moment of inertia, I_L, of a body about a specified axis whose direction is specified by the unit vector k̂ and which passes through the body at a point R is as follows:
I_L = k̂ · (I_R k̂),
where I_R is the moment of inertia matrix of the system relative to the reference point R, and [k̂] is the skew-symmetric matrix obtained from the vector k̂.
This is derived as follows. Let a rigid assembly of particles, , have coordinates . Choose as a reference point and compute the moment of inertia around a line L defined by the unit vector through the reference point , . The perpendicular vector from this line to the particle is obtained from by removing the component that projects onto .
where is the identity matrix, so as to avoid confusion with the inertia matrix, and is the outer product matrix formed from the unit vector along the line .
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix such that , then we have the identity
noting that is a unit vector.
The magnitude squared of the perpendicular vector is
The simplification of this equation uses the triple scalar product identity
where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that and are orthogonal:
Thus, the moment of inertia around the line through R in the direction k̂ is obtained from the calculation
I_L = k̂ · (I_R k̂),
where I_R is the moment of inertia matrix of the system relative to the reference point R.
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body.
Inertia tensor
For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used.
Definition
For a rigid object of N point masses m_k, the moment of inertia tensor is given by a symmetric 3 × 3 matrix I.
Its components are defined as
I_ij = Σ over k of m_k (‖r_k‖² δ_ij − x_i^(k) x_j^(k)),
where
i, j are equal to 1, 2 or 3 for x, y, and z, respectively,
r_k = (x_1^(k), x_2^(k), x_3^(k)) is the vector to the point mass m_k from the point about which the tensor is calculated and
δ_ij is the Kronecker delta.
Note that, by the definition, I is a symmetric tensor.
The diagonal elements are more succinctly written as
I_xx = Σ m_k (y_k² + z_k²), I_yy = Σ m_k (x_k² + z_k²), I_zz = Σ m_k (x_k² + y_k²),
while the off-diagonal elements, also called the products of inertia, are
I_xy = I_yx = −Σ m_k x_k y_k, I_xz = I_zx = −Σ m_k x_k z_k, and I_yz = I_zy = −Σ m_k y_k z_k.
Here I_xx denotes the moment of inertia around the x-axis when the objects are rotated around the x-axis, I_xy denotes the moment of inertia around the y-axis when the objects are rotated around the x-axis, and so on.
These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has
\mathbf{I} = \iiint_V \rho(\mathbf{r}) \left( \|\mathbf{r}\|^2 \mathbf{E}_3 - \mathbf{r} \otimes \mathbf{r} \right) dV ,
where \mathbf{r} \otimes \mathbf{r} is the outer product of the position vector with itself, E3 is the 3×3 identity matrix, and V is a region of space completely containing the object.
Alternatively it can also be written in terms of the angular momentum operator :
The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction ,
where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as is obtained by the computation
and can be interpreted as the moment of inertia around the -axis when the object rotates around the -axis.
The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by,
\mathbf{I} = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{bmatrix} = \begin{bmatrix} \sum_k m_k (y_k^2 + z_k^2) & -\sum_k m_k x_k y_k & -\sum_k m_k x_k z_k \\ -\sum_k m_k x_k y_k & \sum_k m_k (x_k^2 + z_k^2) & -\sum_k m_k y_k z_k \\ -\sum_k m_k x_k z_k & -\sum_k m_k y_k z_k & \sum_k m_k (x_k^2 + y_k^2) \end{bmatrix} .
It is common in rigid body mechanics to use notation that explicitly identifies the x, y, and z-axes, such as I_{xx} and I_{xy}, for the components of the inertia tensor.
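A short Python sketch (not from the source; the point masses are invented for illustration) assembles this matrix from the component definition I_ij = sum_k m_k(|r_k|^2 delta_ij - x_i x_j):

import numpy as np

def inertia_tensor(masses, positions):
    # I_ij = sum_k m_k (|r_k|^2 * delta_ij - r_k,i * r_k,j), about the origin
    I = np.zeros((3, 3))
    for m, r in zip(masses, positions):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# two unit point masses on the x-axis, one at +1 and one at -1 (assumed values)
masses = [1.0, 1.0]
positions = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]

print(inertia_tensor(masses, positions))
# diag(0, 2, 2): no inertia about the x-axis, 2*m*d^2 about the y- and z-axes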
Alternate inertia convention
There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix:
I_{xy} = I_{yx} = \sum_{k} m_k x_k y_k, \quad I_{xz} = I_{zx} = \sum_{k} m_k x_k z_k, \quad I_{yz} = I_{zy} = \sum_{k} m_k y_k z_k ,
\mathbf{I} = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{yx} & I_{yy} & -I_{yz} \\ -I_{zx} & -I_{zy} & I_{zz} \end{bmatrix} .
Determine inertia convention (Principal axes method)
If one has the inertia data without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions:
The standard inertia convention has been used .
The alternate inertia convention has been used .
Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used.
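A sketch of this method in Python (not from the source; the fabricated tensor, the random seed and the helper names principal_axes and axes_parallel are illustrative assumptions):

import numpy as np

def principal_axes(I):
    # columns of the returned matrix are unit eigenvectors of the symmetric matrix I
    return np.linalg.eigh(I)[1]

def axes_parallel(A, B, tol=1e-8):
    # True if every column of A is parallel (up to sign) to some column of B
    cos = np.abs(A.T @ B)
    return np.allclose(np.sort(cos.max(axis=1)), 1.0, atol=tol)

# fabricate test data: a tensor whose principal axes (columns of Q) are known
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
I_true = Q @ np.diag([3.0, 5.0, 8.0]) @ Q.T

# numbers as a tool of unknown convention might report them
Ixx, Iyy, Izz = np.diag(I_true)
Pxy, Pxz, Pyz = I_true[0, 1], I_true[0, 2], I_true[1, 2]   # standard convention here

# candidate matrix under each assumption
cand_standard = np.array([[Ixx,  Pxy,  Pxz],
                          [Pxy,  Iyy,  Pyz],
                          [Pxz,  Pyz,  Izz]])
cand_alternate = np.array([[ Ixx, -Pxy, -Pxz],
                           [-Pxy,  Iyy, -Pyz],
                           [-Pxz, -Pyz,  Izz]])

print(axes_parallel(principal_axes(cand_standard), Q))    # True: standard was used
print(axes_parallel(principal_axes(cand_alternate), Q))   # False, in general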
Derivation of the tensor components
The distance of a particle at from the axis of rotation passing through the origin in the direction is , where is unit vector. The moment of inertia on the axis is
Rewrite the equation using matrix transpose:
where E3 is the 3×3 identity matrix.
This leads to a tensor formula for the moment of inertia
For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct.
Inertia tensor of translation
Let \mathbf{I}_0 be the inertia tensor of a body calculated at its center of mass, and \mathbf{R} be the displacement vector of the body. The inertia tensor of the translated body with respect to its original center of mass is given by:
\mathbf{I} = \mathbf{I}_0 + m \left[ (\mathbf{R} \cdot \mathbf{R}) \mathbf{E}_3 - \mathbf{R} \otimes \mathbf{R} \right]
where m is the body's mass, E3 is the 3 × 3 identity matrix, and \mathbf{R} \otimes \mathbf{R} is the outer product.
Inertia tensor of rotation
Let \mathbf{R} be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by:
\mathbf{I}' = \mathbf{R} \, \mathbf{I} \, \mathbf{R}^\mathsf{T}
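Both operations are one-liners in Python. The following sketch is not from the source; the principal moments, mass, displacement and rotation are assumed example values:

import numpy as np

def translate_inertia(I_cm, m, R):
    # I' = I_cm + m * (|R|^2 * E3 - outer(R, R))
    return I_cm + m * (np.dot(R, R) * np.eye(3) - np.outer(R, R))

def rotate_inertia(I, Rot):
    # I' = Rot @ I @ Rot.T
    return Rot @ I @ Rot.T

I_cm = np.diag([1.0, 2.0, 3.0])               # body with principal moments 1, 2, 3
m = 5.0
d = np.array([0.0, 0.0, 2.0])                 # displacement along z
theta = np.pi / 2                             # 90 degree rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

print(translate_inertia(I_cm, m, d))          # adds m*|d|^2 = 20 to the xx and yy terms
print(rotate_inertia(I_cm, Rz))               # exchanges the xx and yy moments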
Inertia matrix in different reference frames
The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant.
Body frame
Let the body frame inertia matrix relative to the center of mass be denoted \mathbf{I}_C^B, and define the orientation of the body frame relative to the inertial frame by the rotation matrix \mathbf{A}, such that,
\mathbf{x} = \mathbf{A} \mathbf{y} ,
where vectors \mathbf{y} in the body fixed coordinate frame have coordinates \mathbf{x} in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by
\mathbf{I}_C = \mathbf{A} \, \mathbf{I}_C^B \, \mathbf{A}^\mathsf{T} .
Notice that \mathbf{A} changes as the body moves, while \mathbf{I}_C^B remains constant.
Principal axes
Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix \mathbf{Q} and a diagonal matrix \boldsymbol{\Lambda}, given by
\mathbf{I}_C^B = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^\mathsf{T} ,
where
\boldsymbol{\Lambda} = \begin{bmatrix} I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3 \end{bmatrix} .
The columns of the rotation matrix \mathbf{Q} define the directions of the principal axes of the body, and the constants I_1, I_2, and I_3 are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. The principal axis with the highest moment of inertia is sometimes called the figure axis or axis of figure.
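In practice the decomposition is obtained numerically. A minimal Python sketch (not from the source; the body-frame matrix below is an invented example):

import numpy as np

I_B = np.array([[ 6.0, -1.0,  0.5],          # body-frame inertia matrix with
                [-1.0,  5.0, -0.3],          # nonzero products of inertia
                [ 0.5, -0.3,  4.0]])

moments, Q = np.linalg.eigh(I_B)             # eigendecomposition of a real symmetric matrix
print(moments)                               # principal moments of inertia, ascending
print(Q)                                     # columns: directions of the principal axes
print(np.allclose(Q @ np.diag(moments) @ Q.T, I_B))   # True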
A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis.
The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order , meaning it is symmetrical under rotations of about the given axis, that axis is a principal axis. When , the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid.
The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis.
A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble.
Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type.
Ellipsoid
The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let \boldsymbol{\Lambda} be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface
\mathbf{x}^\mathsf{T} \boldsymbol{\Lambda} \mathbf{x} = 1 ,
or
I_1 x^2 + I_2 y^2 + I_3 z^2 = 1 ,
defines an ellipsoid in the body frame. Write this equation in the form,
\left(\frac{x}{1/\sqrt{I_1}}\right)^2 + \left(\frac{y}{1/\sqrt{I_2}}\right)^2 + \left(\frac{z}{1/\sqrt{I_3}}\right)^2 = 1 ,
to see that the semi-principal diameters of this ellipsoid are given by
a = \frac{1}{\sqrt{I_1}}, \quad b = \frac{1}{\sqrt{I_2}}, \quad c = \frac{1}{\sqrt{I_3}} .
Let a point \mathbf{x} on this ellipsoid be defined in terms of its magnitude and direction, \mathbf{x} = \|\mathbf{x}\| \mathbf{n}, where \mathbf{n} is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia I_\mathbf{n} around an axis in the direction \mathbf{n}, yields
\mathbf{x}^\mathsf{T} \boldsymbol{\Lambda} \mathbf{x} = \|\mathbf{x}\|^2 \mathbf{n}^\mathsf{T} \boldsymbol{\Lambda} \mathbf{n} = \|\mathbf{x}\|^2 I_\mathbf{n} = 1 .
Thus, the magnitude of a point \mathbf{x} in the direction \mathbf{n} on the inertia ellipsoid is
\|\mathbf{x}\| = \frac{1}{\sqrt{I_\mathbf{n}}} .
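A small numerical illustration (a sketch, not from the source; the principal moments and the probe direction are assumed values):

import numpy as np

I_principal = np.diag([1.0, 2.0, 4.0])             # principal moments of inertia
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)       # unit vector giving the direction

I_n = n @ I_principal @ n                          # scalar moment of inertia about that axis
print(1.0 / np.sqrt(I_n))                          # distance from the centre to the ellipsoid along n
print(1.0 / np.sqrt(np.diag(I_principal)))         # semi-principal diameters 1/sqrt(I_i)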
See also
Central moment
List of moments of inertia
Planar lamina
Rotational energy
Moment of inertia factor
References
External links
Angular momentum and rigid-body rotation in two and three dimensions
Lecture notes on rigid-body rotation and moments of inertia
The moment of inertia tensor
An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation)
Tutorial on finding moments of inertia, with problems and solutions on various basic shapes
Notes on mechanics of manipulation: the angular inertia tensor
Easy to use and Free Moment of Inertia Calculator online
Mechanical quantities
Rigid bodies
Rotation
Articles containing video clips
Moment (physics) | Moment of inertia | [
"Physics",
"Mathematics"
] | 7,143 | [
"Physical phenomena",
"Mechanical quantities",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mechanics",
"Moment (physics)"
] |
157,706 | https://en.wikipedia.org/wiki/Hemiola | In music, hemiola (also hemiolia) is the ratio 3:2. The equivalent Latin term is sesquialtera. In rhythm, hemiola refers to three beats of equal value in the time normally occupied by two beats. In pitch, hemiola refers to the interval of a perfect fifth.
Etymology
The word hemiola comes from the Greek adjective ἡμιόλιος, hemiolios, meaning "containing one and a half," "half as much again," "in the ratio of one and a half to one (3:2), as in musical sounds." The words "hemiola" and "sesquialtera" both signify the ratio 3:2, and in music were first used to describe relations of pitch. Dividing the string of a monochord in this ratio produces the interval of a perfect fifth. Beginning in the 15th century, both words were also used to describe rhythmic relationships, specifically the substitution (usually through the use of coloration—red notes in place of black ones, or black in place of "white", hollow noteheads) of three imperfect notes (divided into two parts) for two perfect ones (divided into three parts) in tempus perfectum or in prolatio maior.
Rhythm
In rhythm, hemiola refers to three beats of equal value in the time normally occupied by two beats.
Vertical hemiola: sesquialtera
The Oxford Dictionary of Music illustrates hemiola with a superimposition of three notes in the time of two and vice versa.
One textbook states that, although the word "hemiola" is commonly used for both simultaneous and successive durational values, describing a simultaneous combination of three against two is less accurate than for successive values and the "preferred term for a vertical two against three … is sesquialtera." The New Harvard Dictionary of Music states that in some contexts, a sesquialtera is equivalent to a hemiola. Grove's Dictionary, on the other hand, has maintained from the first edition of 1880 down to the most recent edition of 2001 that the Greek and Latin terms are equivalent and interchangeable, both in the realms of pitch and rhythm, although David Hiley, E. Thomas Stanford, and Paul R. Laird hold that, though similar in effect, hemiola properly applies to a momentary occurrence of three duple values in place of two triple ones, whereas sesquialtera represents a proportional metric change between successive sections.
Sub-Saharan African music
A repeating vertical hemiola is known as polyrhythm, or more specifically, cross-rhythm. The most basic rhythmic cell of sub-Saharan Africa is the 3:2 cross-rhythm. Novotney observes: "The 3:2 relationship (and [its] permutations) is the foundation of most typical polyrhythmic textures found in West African musics." Agawu states: "[The] resultant [3:2] rhythm holds the key to understanding ... there is no independence here, because 2 and 3 belong to a single Gestalt."
In the following example, a Ghanaian gyil plays a hemiola as the basis of an ostinato melody. The left hand (lower notes) sounds the two main beats, while the right hand (upper notes) sounds the three cross-beats.
European music
In compound time ( or ), a regular pattern of two beats to a measure is established at the start of a phrase, which then changes to a pattern of three beats at the end of the phrase.
The minuet from J. S. Bach's keyboard Partita No. 5 in G major articulates groups of 2 times 3 quavers that are really in time, despite the metre stated in the initial time-signature. The latter time is restored only at the cadences (bars 4 and 11–12):
Later in the same piece, Bach creates a conflict between the two metres ( against ):
Hemiola is found in many Renaissance pieces in triple rhythm. One composer who exploited this characteristic was the 16th-century French composer Claude Le Jeune, a leading exponent of musique mesurée à l'antique. One of his best-known chansons is "Revoici venir du printemps", where the alternation of compound-duple and simple-triple metres with a common counting unit for the beat subdivisions can be clearly heard:
The hemiola was commonly used in baroque music, particularly in dances, such as the courante and minuet. Other composers who have used the device extensively include Corelli, Handel, Weber and Beethoven. A spectacular example from Beethoven comes in the scherzo from his String Quartet No. 6. As Philip Radcliffe puts it, "The constant cross-rhythms shifting between and , more common at certain earlier and later periods, were far from usual in 1800, and here they are made to sound especially eccentric owing to frequent sforzandi on the last quaver of the bar... it looks ahead to later works and must have sounded very disconcerting to contemporary audiences."
Later in the nineteenth century, Tchaikovsky frequently used hemiolas in his waltzes, as did Richard Strauss in the waltzes from Der Rosenkavalier, and the third movement of Robert Schumann's Piano Concerto is noted for the ambiguity of its rhythm. John Daverio says that the movement's "fanciful hemiolas... serve to legitimize the dance-like material as a vehicle for symphonic elaboration."
Johannes Brahms was particularly famous for exploiting the hemiola's potential for large-scale thematic development. Writing about the rhythm and meter of Brahms's Symphony No. 3, Frisch says "Perhaps in no other first movement by Brahms does the development of these elements play so critical a role. The first movement of the third is cast in meter that is also open, through internal recasting as (a so-called hemiola). Metrical ambiguity arises in the very first appearance of the motto [opening theme]."
At the beginning of the second movement, , of his String Quartet (1903), Ravel "uses the pizzicato as a vehicle for rhythmic interplay between and ."
Horizontal hemiola
Peter Manuel, in the context of an analysis of the flamenco soleá song form, refers to the following figure as a horizontal hemiola or "sesquialtera" (which mistranslates as: "six that alters"). It is "a cliché of various Spanish and Latin American musics ... well established in Spain since the sixteenth century", a twelve-beat scheme with internal accents, consisting of a bar followed by one in , for a 3 + 3 + 2 + 2 + 2 pattern.
This figure is a common African bell pattern, used by the Hausa people of Nigeria, in Haitian Vodou drumming, Cuban palo, and many other drumming systems. The horizontal hemiola suggests metric modulation ( changing to ). This interpretational switch has been exploited, for example, by Leonard Bernstein, in the song "America" from West Side Story, as can be heard in the prominent motif (suggesting a duple beat scheme, followed by a triple beat scheme):
Pitch
The perfect fifth
Hemiola can be used to describe the ratio of the lengths of two strings as three-to-two (3:2), that together sound a perfect fifth. The early Pythagoreans, such as Hippasus and Philolaus, used this term in a music-theoretic context to mean a perfect fifth.
The justly tuned pitch ratio of a perfect fifth means that the upper note makes three vibrations in the same amount of time that the lower note makes two. In the cent system of pitch measurement, the 3:2 ratio corresponds to approximately 702 cents, or 2% of a semitone wider than seven semitones. The just perfect fifth can be heard when a violin is tuned: if adjacent strings are adjusted to the exact ratio of 3:2, the result is a smooth and consonant sound, and the violin sounds in tune. Just perfect fifths are the basis of Pythagorean tuning, and are employed together with other just intervals in just intonation. The 3:2 just perfect fifth arises in the justly tuned C major scale between C and G.
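The cent figure quoted above follows directly from the frequency ratio; a two-line Python check (not part of the original article):

import math

cents = 1200 * math.log2(3 / 2)       # size of the just perfect fifth in cents
print(round(cents, 3))                # 701.955
print(round(cents - 700, 3))          # 1.955 cents wider than seven equal-tempered semitones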
Other intervals
Later Greek authors such as Aristoxenus and Ptolemy use the word to describe smaller intervals as well, such as the hemiolic chromatic pyknon, which is one-and-a-half times the size of the semitone comprising the enharmonic pyknon.
See also
Syncopation
References
Sources
Further reading
Brandel, Rose (1959). The African Hemiola Style, Ethnomusicology, 3(3):106–117, correction, 4(1):iv.
Károlyi, Ottó (1998). Traditional African & Oriental Music, Penguin Books. .
Ratios
Musical techniques
Musical terminology
Rhythm and meter | Hemiola | [
"Physics",
"Mathematics"
] | 1,881 | [
"Physical quantities",
"Time",
"Rhythm and meter",
"Arithmetic",
"Spacetime",
"Ratios"
] |
157,835 | https://en.wikipedia.org/wiki/Water%20table | The water table is the upper surface of the zone of saturation. The zone of saturation is where the pores and fractures of the ground are saturated with groundwater, which may be fresh, saline, or brackish, depending on the locality. It can also be simply explained as the depth below which the ground is saturated.
The water table is the surface where the water pressure head is equal to the atmospheric pressure (where gauge pressure = 0). It may be visualized as the "surface" of the subsurface materials that are saturated with groundwater in a given vicinity.
The groundwater may be from precipitation or from groundwater flowing into the aquifer. In areas with sufficient precipitation, water infiltrates through pore spaces in the soil, passing through the unsaturated zone. At increasing depths, water fills in more of the pore spaces in the soils, until a zone of saturation is reached. Below the water table, in the phreatic zone (zone of saturation), layers of permeable rock that yield groundwater are called aquifers. In less permeable soils, such as tight bedrock formations and historic lakebed deposits, the water table may be more difficult to define.
“Water table” and “water level” are not synonymous. If a deeper aquifer has a lower permeable unit that confines the upward flow, then the water level in this aquifer may rise to a level that is greater or less than the elevation of the actual water table. The elevation of the water in this deeper well is dependent upon the pressure in the deeper aquifer and is referred to as the potentiometric surface, not the water table.
Formation
The water table may vary due to seasonal changes such as precipitation and evapotranspiration. In undeveloped regions with permeable soils that receive sufficient amounts of precipitation, the water table typically slopes toward rivers that act to drain the groundwater away and release the pressure in the aquifer. Springs, rivers, lakes and oases occur when the water table reaches the surface. Groundwater entering rivers and lakes accounts for the base-flow water levels in water bodies.
Surface topography
Within an aquifer, the water table is rarely horizontal, but reflects the surface relief due to the capillary effect (capillary fringe) in soils, sediments and other porous media. In the aquifer, groundwater flows from points of higher pressure to points of lower pressure, and the direction of groundwater flow typically has both a horizontal and a vertical component. The slope of the water table is known as the “hydraulic gradient”, which depends on the rate at which water is added to and removed from the aquifer and the permeability of the material. The water table does not always mimic the topography due to variations in the underlying geological structure (e.g., folded, faulted, fractured bedrock).
Perched water tables
A perched water table (or perched aquifer) is an aquifer that occurs above the regional water table. This occurs when there is an impermeable layer of rock or sediment (aquiclude) or relatively impermeable layer (aquitard) above the main water table/aquifer but below the land surface. If a perched aquifer's flow intersects the surface, at a valley wall, for example, the water is discharged as a spring.
Fluctuations
Tidal
On low-lying oceanic islands with porous soil, freshwater tends to collect in lenticular pools on top of the denser seawater intruding from the sides of the islands. Such an island's freshwater lens, and thus the water table, rises and falls with the tides.
Seasonal
In some regions, for example, Great Britain or California, winter precipitation is often higher than summer precipitation and so the groundwater storage is not fully recharged in summer. Consequently, the water table is lower during the summer. This disparity between the level of the winter and summer water table is known as the "zone of intermittent saturation", wherein the water table will fluctuate in response to climatic conditions.
Long-term
Fossil water is groundwater that has remained in an aquifer for several millennia and occurs mainly in deserts. It is non-renewable by present-day rainfall due to its depth below the surface, and any extraction causes a permanent change in the water table in such regions.
Effects on crop yield
Most crops need the water table to remain below a minimum depth, because at shallower depths the crop suffers a yield decline. For some important food and fiber crops, a classification of these minimum depths has been made, expressed as DWT, the depth to the water table in centimetres.
Effects on construction
A water table close to the surface affects excavation, drainage, foundations, wells and leach fields (in areas without municipal water and sanitation), and more.
When excavation occurs near enough to the water table to reach its capillary action, groundwater must be removed during construction. This is conspicuous in Berlin, which is built on sandy, marshy ground, and the water table is generally 2 meters below the surface. Pink and blue pipes can often be seen carrying groundwater from construction sites into the Spree river (or canals).
See also
References
Aquifers
Hydrology
Hydrogeology
Irrigation
Water supply
Water and the environment
Karst | Water table | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,080 | [
"Hydrology",
"Aquifers",
"Environmental engineering",
"Water supply",
"Hydrogeology"
] |
158,405 | https://en.wikipedia.org/wiki/Iron%28II%29%20sulfate | Iron(II) sulfate (British English: iron(II) sulphate) or ferrous sulfate denotes a range of salts with the formula FeSO4·xH2O. These compounds exist most commonly as the heptahydrate (x = 7) but several values for x are known. The hydrated form is used medically to treat or prevent iron deficiency, and also for industrial applications. Known since ancient times as copperas and as green vitriol (vitriol is an archaic name for hydrated sulfate minerals), the blue-green heptahydrate (hydrate with 7 molecules of water) is the most common form of this material. All the iron(II) sulfates dissolve in water to give the same aquo complex [Fe(H2O)6]2+, which has octahedral molecular geometry and is paramagnetic. The name copperas dates from times when the copper(II) sulfate was known as blue copperas, and perhaps in analogy, iron(II) and zinc sulfate were known respectively as green and white copperas.
It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 107th most commonly prescribed medication in the United States, with more than 6 million prescriptions.
Uses
Industrially, ferrous sulfate is mainly used as a precursor to other iron compounds. It is a reducing agent, and as such is useful for the reduction of chromate in cement to less toxic Cr(III) compounds. Historically ferrous sulfate was used in the textile industry for centuries as a dye fixative. It is used historically to blacken leather and as a constituent of iron gall ink. The preparation of sulfuric acid ('oil of vitriol') by the distillation of green vitriol (iron(II) sulfate) has been known for at least 700 years.
Medical use
Plant growth
Iron(II) sulfate is sold as ferrous sulfate, a soil amendment for lowering the pH of a high alkaline soil so that plants can access the soil's nutrients.
In horticulture it is used for treating iron chlorosis. Although not as rapid-acting as ferric EDTA, its effects are longer-lasting. It can be mixed with compost and dug into the soil to create a store which can last for years. Ferrous sulfate can be used as a lawn conditioner. It can also be used to eliminate silvery thread moss in golf course putting greens.
Pigment and craft
Ferrous sulfate can be used to stain concrete and some limestones and sandstones a yellowish rust color.
Woodworkers use ferrous sulfate solutions to color maple wood a silvery hue.
Green vitriol is also a useful reagent in the identification of mushrooms.
Historical uses
Ferrous sulfate was used in the manufacture of inks, most notably iron gall ink, which was used from the Middle Ages until the end of the 18th century. Chemical tests made on the Lachish letters () showed the possible presence of iron. It is thought that oak galls and copperas may have been used in making the ink on those letters. It also finds use in wool dyeing as a mordant. Harewood, a material used in marquetry and parquetry since the 17th century, is also made using ferrous sulfate.
Two different methods for the direct application of indigo dye were developed in England in the 18th century and remained in use well into the 19th century. One of these, known as china blue, involved iron(II) sulfate. After printing an insoluble form of indigo onto the fabric, the indigo was reduced to leuco-indigo in a sequence of baths of ferrous sulfate (with reoxidation to indigo in air between immersions). The china blue process could make sharp designs, but it could not produce the dark hues of other methods.
In the second half of the 1850s ferrous sulfate was used as a photographic developer for collodion process images.
Hydrates
Iron(II) sulfate can be found in various states of hydration, and several of these forms exist in nature or were created synthetically.
FeSO4·H2O (mineral: szomolnokite, relatively rare, monoclinic)
FeSO4·H2O (synthetic compound stable at pressures exceeding 6.2 GPa, triclinic)
FeSO4·4H2O (mineral: rozenite, white, relatively common, may be dehydration product of melanterite, monoclinic)
FeSO4·5H2O (mineral: siderotil, relatively rare, triclinic)
FeSO4·6H2O (mineral: ferrohexahydrite, very rare, monoclinic)
FeSO4·7H2O (mineral: melanterite, blue-green, relatively common, monoclinic)
The tetrahydrate is stabilized when the temperature of aqueous solutions reaches . At these solutions form both the tetrahydrate and monohydrate.
Mineral forms are found in oxidation zones of iron-bearing ore beds, e.g. pyrite, marcasite, chalcopyrite, etc. They are also found in related environments, like coal fire sites. Many rapidly dehydrate and sometimes oxidize. Numerous other, more complex (either basic, hydrated, and/or containing additional cations) Fe(II)-bearing sulfates exist in such environments, with copiapite being a common example.
Production and reactions
In the finishing of steel prior to plating or coating, the steel sheet or rod is passed through pickling baths of sulfuric acid. This treatment produces large quantities of iron(II) sulfate as a by-product.
Another source of large amounts results from the production of titanium dioxide from ilmenite via the sulfate process.
Ferrous sulfate is also prepared commercially by oxidation of pyrite:
It can be produced by displacement of metals less reactive than iron from solutions of their sulfates:
Reactions
Upon dissolving in water, ferrous sulfates form the metal aquo complex [Fe(H2O)6]2+, which is an almost colorless, paramagnetic ion.
On heating, iron(II) sulfate first loses its water of crystallization and the original green crystals are converted into a white anhydrous solid. When further heated, the anhydrous material decomposes into sulfur dioxide and sulfur trioxide, leaving a reddish-brown iron(III) oxide. Thermolysis of iron(II) sulfate begins at about .
2 FeSO4 → Fe2O3 + SO2 + SO3
Like other iron(II) salts, iron(II) sulfate is a reducing agent. For example, it reduces nitric acid to nitrogen monoxide and chlorine to chloride:
Its mild reducing power is of value in organic synthesis. It is used as the iron catalyst component of Fenton's reagent.
Ferrous sulfate can be detected by the cerimetric method, which is the official method of the Indian Pharmacopoeia. This method includes the use of ferroin solution showing a red to light green colour change during titration.
See also
Iron(III) sulfate (ferric sulfate), the other common simple sulfate of iron.
Copper(II) sulfate
Ammonium iron(II) sulfate, also known as Mohr's salt, the common double salt of ammonium sulfate with iron(II) sulfate.
Chalcanthum
Ephraim Seehl, known as an early manufacturer of iron(II) sulfate, which he called 'green vitriol'.
References
External links
Iron(II) compounds
Sulfates
World Health Organization essential medicines
Deliquescent materials | Iron(II) sulfate | [
"Chemistry"
] | 1,624 | [
"Sulfates",
"Deliquescent materials",
"Salts"
] |
158,682 | https://en.wikipedia.org/wiki/Ls | In computing, ls is a command to list computer files and directories in Unix and Unix-like operating systems. It is specified by POSIX and the Single UNIX Specification.
It is available in the EFI shell, as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities,
or as part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The numerical computing environments MATLAB and GNU Octave include an ls
function with similar functionality.
In other environments, such as DOS, OS/2, and Microsoft Windows, similar functionality is provided by the dir command.
History
An ls utility appeared in the first version of AT&T UNIX, the name inherited from a similar command in Multics also named 'ls', short for the word "list". is part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification.
Behavior
Unix and Unix-like operating systems maintain the idea of a working directory. When invoked without arguments, ls lists the files in the working directory. If a directory is specified as an argument, the files in that directory are listed; if a file is specified, that file is listed. Multiple directories and files may be specified.
In many Unix-like systems, names starting with a dot (.) are hidden. Examples are ., which refers to the working directory, and .., which refers to its parent directory. Hidden names are not shown by default. With -a, all names, including all hidden names, are shown. Using -A shows all names, including hidden names, except for . and ... File names specified explicitly (for example ls .secret) are always listed.
Without options, ls displays names only.
The different implementations have different options, but common options include:
-l Long format, displaying Unix file types, permissions, number of hard links, owner, group, size, last-modified date-time and name. If the modified date is older than 6 months, the time is replaced with the year. Some implementations add additional flags to permissions. The file type can be one of 8 characters: -, regular file; d, directory; l, symbolic (soft) link; n, network files; s, socket; p, named pipe (FIFO); c, character special file; b, block special file.
-h Output sizes in human readable format (e.g., 1K (kilobytes), 234M (megabytes), 2G (gigabytes)). This option is not part of the POSIX standard, although implemented in several systems, e.g., GNU coreutils in 1997, FreeBSD 4.5 in 2002, and Solaris 9 in 2002.
Additional options controlling how items are displayed include:
-R Recursively list items in subdirectories.
-t Sort the list by modification time (default sort is alphabetically).
-u Sort the list by last access time.
-c Sort the list by last attribute (status) change time.
-r Reverse the order, for example most recent time last.
--full-time Show times down to the second and millisecond instead of just the minute.
-1 One entry per line.
-m Stream format; list items across the page, separated by commas.
-g Include group but not owner.
-o Include owner but not group (when combined with -g both group and owner are suppressed).
-d Show information about a directory or symbolic link, rather than the contents of a directory or the link's target.
-F Append a "/" to directory names and a "*" to executable files.
It may be possible to highlight different types of items with different colors. This is an area where implementations differ:
GNU ls uses the --color option; it checks the Unix file type, the file permissions and the file extension and uses its own database to control colors maintained using dircolors.
FreeBSD ls uses the -G option; it checks only the Unix file type and file permissions and uses the termcap database
When the option to use color to indicate item types is selected, the output might look like:
-rw-r--r-- 1 tsmitt nregion 26650 Dec 20 11:16 audio.ogg
brw-r--r-- 1 tsmitt nregion 64 Jan 27 05:52 bd-block-device
crw-r--r-- 1 tsmitt nregion 255 Jan 26 13:57 cd-character-device
-rw-r--r-- 1 tsmitt nregion 290 Jan 26 14:08 image.png
drwxrwxr-x 2 tsmitt nregion 48 Jan 26 11:28 di-directory
-rwxrwxr-x 1 tsmitt nregion 29 Jan 26 14:03 ex-executable
-rw-r--r-- 1 tsmitt nregion 0 Dec 20 09:39 fi-regular-file
lrwxrwxrwx 1 tsmitt nregion 3 Jan 26 11:44 ln-soft-link -> dir
lrwxrwxrwx 1 tsmitt nregion 15 Dec 20 10:57 or-orphan-link -> mi-missing-link
drwxr-xrwx 2 tsmitt nregion 4096 Dec 20 10:58 ow-other-writeable-dir
prw-r--r-- 1 tsmitt nregion 0 Jan 26 11:50 pi-pipe
-rwxr-sr-x 1 tsmitt nregion 0 Dec 20 11:05 sg-setgid
srw-rw-rw- 1 tsmitt nregion 0 Jan 26 12:00 so-socket
drwxr-xr-t 2 tsmitt nregion 4096 Dec 20 10:58 st-sticky-dir
-rwsr-xr-x 1 tsmitt nregion 0 Dec 20 11:09 su-setuid
-rw-r--r-- 1 tsmitt nregion 10240 Dec 20 11:12 compressed.gz
drwxrwxrwt 2 tsmitt nregion 4096 Dec 20 11:10 tw-sticky-other-writeable-dir
Sample usage
The following example demonstrates the output of the command:
$ ls -l
drwxr--r-- 1 fjones editors 4096 Mar 2 12:52 drafts
-rw-r--r-- 3 fjones editors 30405 Mar 2 12:52 edition-32
-r-xr-xr-x 1 fjones bookkeepers 8460 Jan 16 2022 edit.sh
Each line shows the d (directory) or - (file) indicator, Unix file permission notation, number of hard links (1 or 3), the file's owner, the file's group, the file size, the modification date/time, and the file name. In the working directory, the owner fjones has a directory named drafts, a regular file named edition-32, and an executable named edit.sh which is "old", i.e. modified more than 6 months ago as indicated by the display of the year.
┌─────────── file (not a directory)
│┌─────────── read-write (no execution) permissions for the owner
││ ┌───────── read-only permissions for the group
││ │ ┌─────── read-only permissions for others
││ │ │ ┌── number of hard links
││ │ │ │ ┌── owner
││ │ │ │ │ ┌── user group
││ │ │ │ │ │ ┌── file size in bytes
││ │ │ │ │ │ │ ┌── last modified on
││ │ │ │ │ │ │ │ ┌── filename
-rw-r--r-- 3 fjones editors 30405 Mar 2 12:52 edition-32
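For comparison, a rough emulation of this long-format output in Python (a sketch, not from any ls implementation; it uses only the standard os, stat, pwd, grp and time modules, runs on Unix-like systems only, and does not reproduce every ls rule, such as hiding dotfiles or printing the year for files older than six months):

import os, stat, pwd, grp, time

def long_listing(path="."):
    # minimal "ls -l"-style listing: permissions, links, owner, group, size, mtime, name
    for name in sorted(os.listdir(path)):
        st = os.lstat(os.path.join(path, name))
        perms = stat.filemode(st.st_mode)                 # e.g. "-rw-r--r--" or "drwxr-xr-x"
        owner = pwd.getpwuid(st.st_uid).pw_name
        group = grp.getgrgid(st.st_gid).gr_name
        mtime = time.strftime("%b %d %H:%M", time.localtime(st.st_mtime))
        print(f"{perms} {st.st_nlink:>2} {owner} {group} {st.st_size:>8} {mtime} {name}")

long_listing(".")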
See also
stat (Unix)
chown
chgrp
du (Unix)
mdls
User identifier (Unix)
Group identifier (Unix)
List of Unix commands
Unix directory structure
References
External links
GNU ls source code (as part of coreutils)
ls at the LinuxQuestions.org wiki
Multics commands
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands | Ls | [
"Technology"
] | 1,826 | [
"IBM i Qshell commands",
"Standard Unix programs",
"Multics commands",
"Computing commands",
"Plan 9 commands",
"Inferno (operating system) commands"
] |
158,740 | https://en.wikipedia.org/wiki/Exhaust%20gas%20recirculation | In internal combustion engines, exhaust gas recirculation (EGR) is a nitrogen oxide () emissions reduction technique used in petrol/gasoline, diesel engines and some hydrogen engines. EGR works by recirculating a portion of an engine's exhaust gas back to the engine cylinders. The exhaust gas displaces atmospheric air and reduces in the combustion chamber. Reducing the amount of oxygen reduces the amount of fuel that can burn in the cylinder thereby reducing peak in-cylinder temperatures. The actual amount of recirculated exhaust gas varies with the engine operating parameters.
In the combustion cylinder, is produced by high-temperature mixtures of atmospheric nitrogen and oxygen, and this usually occurs at cylinder peak pressure. In a spark-ignition engine, an ancillary benefit of recirculating exhaust gases via an external EGR valve is an increase in efficiency, as charge dilution allows a larger throttle position and reduces associated pumping losses. Mazda's turbocharged SkyActiv gasoline direct injection engine uses recirculated and cooled exhaust gases to reduce combustion chamber temperatures, thereby permitting the engine to run at higher boost levels before the air-fuel mixture must be enriched to prevent engine knocking.
In a gasoline engine, this inert exhaust displaces some amount of combustible charge in the cylinder, effectively reducing the quantity of charge available for combustion without affecting the air-fuel ratio. In a diesel engine, the exhaust gas replaces some of the excess oxygen in the pre-combustion mixture. Because forms primarily when a mixture of nitrogen and oxygen is subjected to high temperature, the lower combustion chamber temperatures caused by EGR reduces the amount of that the combustion process generates. Gases re-introduced from EGR systems will also contain near equilibrium concentrations of and CO; the small fraction initially within the combustion chamber inhibits the total net production of these and other pollutants when sampled on a time average. Chemical properties of different fuels limit how much EGR may be used. For example methanol is more tolerant to EGR than gasoline.
History
The first EGR systems were crude; some were as simple as an orifice jet between the exhaust and intake tracts which admitted exhaust to the intake tract whenever the engine was running. Difficult starting, rough idling, reduced performance and lost fuel economy inevitably resulted. By 1973, an EGR valve controlled by manifold vacuum opened or closed to admit exhaust to the intake tract only under certain conditions. Control systems grew more sophisticated as automakers gained experience; Volkswagen's "Coolant Controlled Exhaust Gas Recirculation" system of 1973 exemplified this evolution: a coolant temperature sensor blocked vacuum to the EGR valve until the engine reached normal operating temperature. This prevented driveability problems due to unnecessary exhaust induction; NOx forms under elevated temperature conditions generally not present with a cold engine. Moreover, the EGR valve was controlled, in part, by vacuum drawn from the carburetor's venturi, which allowed more precise constraint of EGR flow to only those engine load conditions under which NOx is likely to form. Later, backpressure transducers were added to the EGR valve control to further tailor EGR flow to engine load conditions. Most modern engines now need exhaust gas recirculation to meet emissions standards. However, recent innovations have led to the development of engines that do not require it. The 3.6 Chrysler Pentastar engine is one example that does not require EGR.
EGR
The exhaust gas contains water vapor and carbon dioxide which both have lower heat capacity ratio than air. Adding exhaust gas therefore reduces pressure and temperature during the isentropic compression in the cylinder, thereby lowering the adiabatic flame temperature.
In a typical automotive spark-ignited (SI) engine, 5% to 15% of the exhaust gas is routed back to the intake as EGR. The maximum quantity is limited by the need of the mixture to sustain a continuous flame front during the combustion event; excessive EGR in poorly set up applications can cause misfires and partial burns. Although EGR does measurably slow combustion, this can largely be compensated for by advancing spark timing. The impact of EGR on engine efficiency largely depends on the specific engine design, and sometimes leads to a compromise between efficiency and emissions. In certain types of situations, a properly operating EGR can theoretically increase the efficiency of gasoline engines via several mechanisms:
Reduced throttle losses. The addition of inert exhaust gas into the intake system means that for a given power output, the throttle plate must be opened further, resulting in increased inlet manifold pressure and reduced throttling losses.
Reduced heat rejection. Lowered peak combustion temperatures not only reduces formation, it also reduces the loss of thermal energy to combustion chamber surfaces, leaving more available for conversion to mechanical work during the expansion stroke.
Reduced chemical dissociation. The lower peak temperatures result in more of the released energy remaining as sensible energy near Top Dead Center (TDC), rather than being bound up (early in the expansion stroke) in the dissociation of combustion products. This effect is minor compared to the first two.
EGR is typically not employed at high loads because it would reduce peak power output. This is because it reduces the intake charge density. EGR is also omitted at idle (low-speed, zero load) because it would cause unstable combustion, resulting in rough idle.
Since the EGR system recirculates a portion of exhaust gases, over time the valve can become clogged with carbon deposits, which will prevent it from operating properly. Clogged EGR valves can sometimes be cleaned, but replacement is necessary if the valve is faulty.
Diesel engines
Because diesel engines depend on the heat of compression to ignite their fuel, they are fundamentally different from spark-ignited engines. The physical process of diesel-fuel combustion is such that the most complete combustion occurs at the highest temperatures. Unfortunately, the production of nitrogen oxides () increases at high temperatures. The goal of EGR is thus to reduce production by reducing the combustion temperatures.
In modern diesel engines, the EGR gas is usually cooled with a heat exchanger to allow the introduction of a greater mass of recirculated gas. However, uncooled EGR designs do exist; these are often referred to as hot-gas recirculation (HGR). Cooled EGR components are exposed to repeated, rapid changes in temperatures, which can cause coolant leak and catastrophic engine failure.
Unlike spark-ignition engines, diesel engines are not limited by the need for a contiguous flamefront. Furthermore, since diesels always operate with excess air, they benefit (in terms of reduced output) from EGR rates as high as 50%. However, a 50% EGR rate is only suitable when the diesel engine is at idle, since this is when there is otherwise a large excess of air.
Because modern diesel engines often have a throttle, EGR can reduce the need for throttling, thereby eliminating this type of loss in the same way that it does for spark-ignited engines. In a naturally aspirated (i.e. nonturbocharged) engine, such a reduction in throttling also reduces the problem of engine oil being sucked past the piston rings into the cylinder and causing oil-derived carbon deposits there. (This benefit only applies to nonturbocharged engines.)
In diesel engines in particular, EGR systems come with serious drawbacks, one of which is a reduction in engine longevity. For example, because the EGR system routes exhaust gas directly back into the cylinder intake without any form of filtration, this exhaust gas contains carbon particulates. And, because these tiny particles are abrasive, the recirculation of this material back into the cylinder increases engine wear. This is so because these carbon particles will blow by the piston rings (causing piston-cylinder-interface wear in the process) and then end up in the crankcase oil, where they will cause further wear throughout the engine simply because their tiny size passes through typical oil filters. This enables them to be recirculated indefinitely (until the next oil change takes place).
Exhaust gas—which consists largely of nitrogen, carbon dioxide, and water vapor—has a higher specific heat than air, so it still serves to lower peak combustion temperatures. However, adding EGR to a diesel reduces the specific heat ratio of the combustion gases in the power stroke. This reduces the amount of power that can be extracted by the piston, thereby reducing the thermodynamic efficiency.
EGR also tends to reduce the completeness of fuel combustion during the power stroke. This is plainly evident by the increase in particulate emissions that corresponds to an increase in EGR.
Particulate matter (mainly carbon and also known as soot) that is not burned in the power stroke represents wasted energy. Because of stricter regulations on particulate matter (PM), the soot-increasing effect of EGR required the introduction of further emission controls in order to compensate for the resulting PM emission increases. The most common soot-control device is a diesel particulate filter (DPF) installed downstream of the engine in the exhaust system. This captures soot but causes a reduction in fuel efficiency due to the back pressure created.
Diesel particulate filters come with their own set of very specific operational and maintenance requirements. Firstly, as the DPF captures the soot particles (which are made far more numerous due to the use of EGR), the DPF itself progressively becomes loaded with soot. This soot must then be burned off, either actively or passively.
At sufficiently high temperatures, the nitrogen dioxide component of emissions is the primary oxidizer of the soot caught in the DPF at normal operating temperatures. This process is known as passive regeneration, and it is only partially effective at burning off the captured soot. And, especially at high EGR rates, the effectiveness of passive regeneration is further reduced. This, in turn, necessitates periodic active regeneration of the DPF by burning diesel fuel directly in the oxidation catalyst in order to significantly increase exhaust-gas temperatures through the DPF to the point where PM is incinerated by the residual oxygen in the exhaust.
Because diesel fuel and engine oil both contain nonburnable (i.e. metallic and mineral) impurities, the incineration of soot (PM) in the DPF leaves behind a residue known as ash. For this reason, after repeated regeneration events, eventually the DPF must either be physically removed and cleaned in a special external process, or it must be replaced.
As noted earlier, the feeding of the low-oxygen exhaust gas into the diesel engine's air intake engenders lower combustion temperatures, thereby reducing emissions of NOx. By replacing some of the fresh air intake with inert gases, EGR also allows the engine to reduce the amount of injected fuel without compromising the ideal air-fuel mixture ratio, therefore reducing fuel consumption in low engine load situations (for example, while the vehicle is coasting or cruising). Power is not reduced by EGR at any time, as EGR is not employed in high-load engine situations. This allows engines to still deliver maximum power when needed, but with lower fuel consumption despite a large cylinder volume when partial load is sufficient to meet the power needs of the car and the driver.
EGR has nothing to do with oil vapor re-routing from a positive crankcase ventilation system (PCV) system, as the latter is only there to reduce oil vapor emissions, and can be present on engines with or without any EGR system. However, the tripartite mixture resulting from employing both EGR and PCV in an engine (i.e. exhaust gas, fresh air, and oil vapour) can cause the buildup of sticky tar in the intake manifold and valves. This mixture can also cause problems with components such as swirl flaps, where fitted. (These problems, which effectively take the form of an undesirable positive-feedback loop, will worsen as the engine ages. For example, as the piston rings progressively wear out, more crankcase oil will get into the exhaust stream. Simultaneously, more fuel and soot and combustion byproducts will gain access to the engine oil.)
The end result of this recirculation of both exhaust gas and crankcase oil vapour is again an increase in soot production. This is, however, effectively countered by the DPF, which collects the unburnt particles and burns them off during regeneration, converting them into CO2 and water vapour emissions that, unlike NOx gases, have no negative health effects.
Modern cooled EGR systems help reduce engine wear by using the waste heat recouped from the recirculated gases to help warm the coolant and hence the engine block faster to operating temperature. This also helps lower fuel consumption through reducing the time after cold starts during which the engine controller has to inject somewhat larger amounts of fuel into the cylinders to counter the effects of fuel vapor condensation on cylinder walls and lowered combustion effectiveness because of the engine block still being below ideal operating temperature. Lowering combustion temperatures also helps reducing the oxidization of engine oil, as the most significant factor affecting that is exposure of the oil to high temperatures.
Although engine manufacturers have refused to release details of the effect of EGR on fuel economy, the EPA regulations of 2002 that led to the introduction of cooled EGR were associated with a 3% drop in engine efficiency, thus bucking the trend of a 0.5% annual increase.
See also
Diesel exhaust
Secondary air injection
Sources
Heywood, John B., "Internal Combustion Engine Fundamentals," McGraw Hill, 1988.
van Basshuysen, Richard, and Schäfer, Fred, "Internal Combustion Engine Handbook," SAE International, 2004.
"Bosch Automotive Handbook," 3rd Edition, Robert Bosch GmbH, 1993.
References
External links
Lecture notes on improving fuel efficiency that discusses the effects of specific heat ratio, University of Washington
Diesel cycle calculator that can be used to show the effect of specific heat ratio, Georgia State University HyperPhysics
A Chrysler Imperial fan club describes different EGR control mechanisms
Don’t Block or Remove the EGR Valve, It’s Saving You Money
What are the symptoms of bad EGR valve
Engine technology
Chemical process engineering
Air pollution control systems
NOx control | Exhaust gas recirculation | [
"Chemistry",
"Technology",
"Engineering"
] | 2,923 | [
"Chemical process engineering",
"Chemical engineering",
"Engine technology",
"Engines"
] |
158,741 | https://en.wikipedia.org/wiki/Power-to-weight%20ratio | Power-to-weight ratio (PWR, also called specific power, or power-to-mass ratio) is a calculation commonly applied to engines and mobile power sources to enable the comparison of one unit or design to another. Power-to-weight ratio is a measurement of actual performance of any engine or power source. It is also used as a measurement of performance of a vehicle as a whole, with the engine's power output being divided by the weight (or mass) of the vehicle, to give a metric that is independent of the vehicle's size. Power-to-weight is often quoted by manufacturers at the peak value, but the actual value may vary in use and variations will affect performance.
The inverse of power-to-weight, weight-to-power ratio (power loading) is a calculation commonly applied to aircraft, cars, and vehicles in general, to enable the comparison of one vehicle's performance to another. Power-to-weight ratio is equal to thrust per unit mass multiplied by the velocity of any vehicle.
Power-to-weight (specific power)
The power-to-weight ratio (specific power) is defined as the power generated by the engine(s) divided by the mass. In this context, the term "weight" can be considered a misnomer, as it colloquially refers to mass. In a zero-gravity (weightless) environment, the power-to-weight ratio would not be considered infinite.
A typical turbocharged V8 diesel engine might have an engine power of and a mass of , giving it a power-to-weight ratio of 0.65 kW/kg (0.40 hp/lb).
Examples of high power-to-weight ratios can often be found in turbines. This is because of their ability to operate at very high speeds. For example, the Space Shuttle's main engines used turbopumps (machines consisting of a pump driven by a turbine engine) to feed the propellants (liquid oxygen and liquid hydrogen) into the engine's combustion chamber. The original liquid hydrogen turbopump is similar in size to an automobile engine (weighing approximately ) and produces for a power-to-weight ratio of 153 kW/kg (93 hp/lb).
Physical interpretation
In classical mechanics, instantaneous power is the limiting value of the average work done per unit time as the time interval Δt approaches zero (i.e. the derivative with respect to time of the work done).
The typically used metric unit of the power-to-weight ratio is W/kg, which equals m²/s³. This fact allows one to express the power-to-weight ratio purely by SI base units. A vehicle's power-to-weight ratio equals its acceleration times its velocity; so at twice the velocity, it experiences half the acceleration, all else being equal.
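A short numeric illustration of that statement (a sketch, not from the source; the mass and power figures are assumed, and drag and drivetrain losses are ignored so that all delivered power accelerates the mass):

mass = 1200.0                           # kg
power = 120_000.0                       # W of propulsive power actually delivered

specific_power = power / mass           # W/kg, i.e. m²/s³

for v in (10.0, 20.0, 40.0):            # speeds in m/s
    a = specific_power / v              # P = m*a*v  =>  a = (P/m)/v
    print(f"at {v:4.1f} m/s: a = {a:.2f} m/s^2")
# doubling the speed halves the available acceleration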
Propulsive power
If the work to be done is rectilinear motion of a body with constant mass m, whose center of mass is to be accelerated along a (possibly non-straight) line to a speed |v| and angle with respect to the centre and radial of a gravitational field by an onboard powerplant, then the associated kinetic energy is
E_K = \tfrac{1}{2} m |\mathbf{v}|^2
where:
m is mass of the body
|\mathbf{v}| is speed of the center of mass of the body, changing with time.
The work–energy principle states that the work done to the object over a period of time is equal to the difference in its total energy over that period of time, so the rate at which work is done is equal to the rate of change of the kinetic energy (in the absence of potential energy changes).
The work done from time t to time t + Δt along the path C is defined as the line integral , so the fundamental theorem of calculus has that power is given by .
where:
is acceleration of the center of mass of the body, changing with time.
is linear force – or thrust – applied upon the center of mass of the body, changing with time.
is velocity of the center of mass of the body, changing with time.
is torque applied upon the center of mass of the body, changing with time.
is angular velocity of the center of mass of the body, changing with time.
In propulsion, power is only delivered if the powerplant is in motion, and is transmitted to cause the body to be in motion. It is typically assumed here that mechanical transmission allows the powerplant to operate at peak output power. This assumption allows engine tuning to trade power band width and engine mass for transmission complexity and mass. Electric motors do not suffer from this tradeoff, instead trading their high torque for traction at low speed. The power advantage or power-to-weight ratio is then
where:
is linear speed of the center of mass of the body.
Engine power
The useful power of an engine with shaft power output can be calculated using a dynamometer to measure torque and rotational speed, with maximum power reached when torque multiplied by rotational speed is a maximum. For jet engines the useful power is equal to the flight speed of the aircraft multiplied by the force, known as net thrust, required to make it go at that speed. It is used when calculating propulsive efficiency.
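The two cases reduce to a pair of one-line formulas; a minimal Python sketch (not from the source; the torque, speed and thrust figures are invented):

import math

def shaft_power(torque_nm, rpm):
    # P = torque * angular speed, with omega = 2*pi*rpm/60 in rad/s
    return torque_nm * 2.0 * math.pi * rpm / 60.0

def jet_useful_power(net_thrust_n, flight_speed_ms):
    # useful power of a jet engine = net thrust * flight speed
    return net_thrust_n * flight_speed_ms

print(shaft_power(400.0, 6000.0))         # about 251 kW from 400 N·m at 6000 rpm
print(jet_useful_power(50_000.0, 250.0))  # 12.5 MW from 50 kN of net thrust at 250 m/s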
Examples
Engines
Heat engines and heat pumps
Thermal energy is made up from molecular kinetic energy and latent phase energy. Heat engines are able to convert thermal energy in the form of a temperature gradient between a hot source and a cold sink into other desirable mechanical work. Heat pumps take mechanical work to regenerate thermal energy in a temperature gradient. Standard definitions should be used when interpreting how the propulsive power of a jet or rocket engine is transferred to its vehicle.
Electric motors and electromotive generators
An electric motor uses electrical energy to provide mechanical work, usually through the interaction of a magnetic field and current-carrying conductors. By the interaction of mechanical work on an electrical conductor in a magnetic field, electrical energy can be generated.
Fluid engines and fluid pumps
Fluids (liquid and gas) can be used to transmit and/or store energy using pressure and other fluid properties. Hydraulic (liquid) and pneumatic (gas) engines convert fluid pressure into other desirable mechanical or electrical work. Fluid pumps convert mechanical or electrical work into movement or pressure changes of a fluid, or storage in a pressure vessel.
Thermoelectric generators and electrothermal actuators
A variety of effects can be harnessed to produce thermoelectricity, thermionic emission, pyroelectricity and piezoelectricity. Electrical resistance and ferromagnetism of materials can be harnessed to generate thermoacoustic energy from an electric current.
Electrochemical (galvanic) and electrostatic cell systems
(Closed cell) batteries
All electrochemical cell batteries deliver a changing voltage as their chemistry changes from "charged" to "discharged". A nominal output voltage and a cutoff voltage are typically specified for a battery by its manufacturer. The output voltage falls to the cutoff voltage when the battery becomes "discharged". The nominal output voltage is always less than the open-circuit voltage produced when the battery is "charged". The temperature of a battery can affect the power it can deliver, where lower temperatures reduce power. Total energy delivered from a single charge cycle is affected by both the battery temperature and the power it delivers. If the temperature lowers or the power demand increases, the total energy delivered at the point of "discharge" is also reduced.
Battery discharge profiles are often described in terms of a factor of battery capacity. For example, a battery with a nominal capacity quoted in ampere-hours (Ah) at a C/10 rated discharge current (derived in amperes) may safely provide a higher discharge current – and therefore higher power-to-weight ratio – but only with a lower energy capacity. Power-to-weight ratio for batteries is therefore less meaningful without reference to corresponding energy-to-weight ratio and cell temperature. This relationship is known as Peukert's law.
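Peukert's law can be written t = H·(C/(I·H))^k, where C is the rated capacity at the H-hour discharge rate, I is the actual current and k is the Peukert constant. The sketch below uses illustrative assumed values (a 100 Ah, C/10-rated battery with k = 1.2) to show delivered capacity shrinking as the discharge current rises.

```python
def peukert_runtime_hours(capacity_ah, rated_hours, current_a, k):
    """Peukert's law: t = H * (C / (I * H))**k."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# Illustrative battery: 100 Ah at the 10-hour (C/10) rate, Peukert constant 1.2
for current in (10.0, 25.0, 50.0):
    t = peukert_runtime_hours(100.0, 10.0, current, 1.2)
    print(f"{current:>4.0f} A -> {t:5.2f} h, delivered {current * t:6.1f} Ah")
```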
Electrostatic, electrolytic and electrochemical capacitors
Capacitors store electric charge on two electrodes separated by a semi-insulating (dielectric) medium across which an electric field forms. Electrostatic capacitors feature planar electrodes onto which electric charge accumulates. Electrolytic capacitors use a liquid electrolyte as one of the electrodes and the electric double layer effect upon the surface of the dielectric-electrolyte boundary to increase the amount of charge stored per unit volume. Electric double-layer capacitors extend both electrodes with a nanoporous material such as activated carbon to significantly increase the surface area upon which electric charge can accumulate, reducing the dielectric medium to nanopores and a very thin high permittivity separator.
While capacitors tend not to be as temperature sensitive as batteries, they are significantly capacity constrained and, lacking the strength of chemical bonds, suffer from self-discharge. The power-to-weight ratio of capacitors is usually higher than that of batteries because the charge transport units within the cell are smaller (electrons rather than ions); however, the energy-to-weight ratio is usually lower.
Fuel cell stacks and flow cell batteries
Fuel cells and flow cells, although perhaps using similar chemistry to batteries, do not contain the energy storage medium or fuel. With a continuous flow of fuel and oxidant, available fuel cells and flow cells continue to convert the energy storage medium into electric energy and waste products. Fuel cells distinctly contain a fixed electrolyte whereas flow cells also require a continuous flow of electrolyte. Flow cells typically have the fuel dissolved in the electrolyte.
Photovoltaics
Vehicles
Power-to-weight ratios for vehicles are usually calculated using curb weight (for cars) or wet weight (for motorcycles), that is, excluding the weight of the driver and any cargo. This could be slightly misleading, especially with regard to motorcycles, where the driver might weigh 1/3 to 1/2 as much as the vehicle itself. In the sport of competitive cycling, athletes' performance is increasingly expressed in VAM and thus as a power-to-weight ratio in W/kg. This can be measured with a bicycle powermeter or calculated from the incline of a road climb and the rider's time to ascend it.
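A rough way to estimate a rider's W/kg from a timed climb is to count only the work done against gravity, P/m ≈ g·Δh/t, neglecting rolling resistance and aerodynamic drag; the figures below are illustrative assumptions, not measured data.

```python
g = 9.81  # m/s^2

def climb_power_to_weight(total_mass_kg, elevation_gain_m, time_s):
    """Approximate W/kg from a timed climb, ignoring drag and rolling resistance."""
    power_w = total_mass_kg * g * elevation_gain_m / time_s
    return power_w / total_mass_kg        # simplifies to g * (vertical speed)

# Illustrative: 75 kg of rider plus bike climbing 800 m in 40 minutes
print(round(climb_power_to_weight(75.0, 800.0, 40 * 60), 2), "W/kg")
```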
Locomotives
A locomotive generally must be heavy in order to develop enough adhesion on the rails to start a train. As the coefficient of friction between steel wheels and rails seldom exceeds 0.25 in most cases, improving a locomotive's power-to-weight ratio is often counterproductive. However, the choice of power transmission system, such as variable-frequency drive versus direct-current drive, may support a higher power-to-weight ratio by better managing propulsion power.
Utility and practical vehicles
Most vehicles are designed to meet passenger comfort and cargo carrying requirements. Vehicle designs trade off power-to-weight ratio to increase comfort, cargo space, fuel economy, emissions control, energy security and endurance. Reduced drag and lower rolling resistance in a vehicle design can facilitate increased cargo space without increase in the (zero cargo) power-to-weight ratio. This increases the role flexibility of the vehicle. Energy security considerations can trade off power (typically decreased) and weight (typically increased), and therefore power-to-weight ratio, for fuel flexibility or drive-train hybridisation. Some utility and practical vehicle variants such as hot hatches and sports-utility vehicles reconfigure power (typically increased) and weight to provide the perception of sports car like performance or for other psychological benefit.
Notable low ratio
Common power
Performance luxury, roadsters and mild sports
Increased engine performance is a consideration, but so are other features associated with luxury vehicles. Longitudinal engines are common. Body styles range from hot hatches and sedans (saloons) to coupés, convertibles and roadsters. Mid-range dual-sport and cruiser motorcycles tend to have similar power-to-weight ratios.
Sports vehicles
Power-to-weight ratio is an important vehicle characteristic that affects the acceleration of sports vehicles.
Early vehicles
Aircraft
Propeller aircraft depend on high power-to-weight ratios to generate sufficient thrust to achieve sustained flight, and then for speed.
Thrust-to-weight ratio
Jet aircraft produce thrust directly.
Human
Power-to-weight ratio is important in cycling, since it determines acceleration and the speed during hill climbs. Since a cyclist's power-to-weight output decreases with fatigue, it is normally discussed with relation to the length of time that he or she maintains that power. A professional cyclist can produce over 20 W/kg (0.012 hp/lb) as a five-second maximum.
See also
References
Mechanics
Power (physics)
Engineering ratios | Power-to-weight ratio | [
"Physics",
"Mathematics",
"Engineering"
] | 2,563 | [
"Force",
"Physical quantities",
"Metrics",
"Engineering ratios",
"Quantity",
"Power (physics)",
"Energy (physics)",
"Mechanics",
"Mechanical engineering",
"Wikipedia categories named after physical quantities"
] |
11,750,741 | https://en.wikipedia.org/wiki/List%20of%20materials-testing%20resources | Materials testing is used to assess product quality, functionality, safety, reliability and toxicity of both materials and electronic devices. Some applications of materials testing include defect detection, failure analysis, material development, basic materials science research, and the verification of material properties for application trials. This is a list of organizations and companies that publish materials testing standards or offer materials testing laboratory services.
International organizations
These organizations create materials testing standards or conduct active research in the fields of materials analysis and reliability testing.
American Association of Textile Chemists and Colorists (AATCC)
American National Standards Institute (ANSI)
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
American Society of Mechanical Engineers (ASME)
ASTM International
Federal Institute for Materials Research and Testing (German: Bundesanstalt für Materialforschung und -prüfung (BAM))
Instron
International Organization for Standardization (ISO)
MTS Systems Corporation
Nadcap
National Physical Laboratory (United Kingdom)
Society of Automotive Engineers (SAE)
Zwick Roell Group
Global research laboratories
These organizations provide materials testing laboratory services.
FEI Company
Lucideon
SEMATECH
See also
Characterization (materials science)
List of materials analysis methods
References
Tests
Materials testing
Engineering-related lists | List of materials-testing resources | [
"Materials_science",
"Engineering"
] | 252 | [
"Materials testing",
"Materials science"
] |
11,755,283 | https://en.wikipedia.org/wiki/Airflow%20Sciences%20Corporation | Airflow Sciences Corporation (ASC) is an engineering consulting company based in Livonia, Michigan, USA that specializes in the design and optimization of equipment and processes involving flow, heat transfer, combustion, and mass transfer. Engineering techniques include Computational Fluid Dynamics (CFD) modeling, experimental laboratory testing, and field measurements at client sites. ASC works for a wide range of industries world-wide, including power generation, manufacturing, aerospace, HVAC, food processing, biomedical, pollution control, oil and gas, rail, legal, and automotive.
In addition to engineering consulting, ASC has a test equipment division that manufactures flow measurement equipment such as data loggers, pressure/flow/temperature instrumentation, wind tunnels, and online flow systems.
ASC is the parent company of Azore Software, LLC, which develops and sells the commercial simulation software AzoreCFD. This polyhedral-based CFD software is widely used for flow and heat transfer analysis and design.
History
The company was founded in 1975 by Robert Gielow and James Paul, two Professional Engineers with backgrounds in the aerospace industry. They quickly realized that the analysis techniques they applied to projects such as the Apollo program Moon rockets and commercial aircraft design could be used to advance a wide variety of other industries. Early years of the company were focused on aerodynamic optimization of vehicles such as cars, tractor trailers, and rail cars to minimize drag and fuel consumption. This work involved both wind tunnel testing and numerical simulation.
In the 1970s and 1980s, ASC developed a range of simulation software for potential flow and viscous flow analysis, writing its own CFD solver (VISCOUS). The capabilities of this Cartesian-based CFD software eventually included advanced physics such as combustion, particulate transport and drying simulations, time-dependence, and convection/conduction/radiation heat transfer. In the 1990s and early 2000s, ASC personnel developed a new CFD software package, AzoreCFD, featuring a modern, polyhedral-based solver in order to analyze highly complex geometries and physics. Azore development continues, with new features and simulation capabilities incorporated to allow more unique flow problems to be analyzed.
In addition to simulation of flow and heat transfer, ASC has advanced its field testing capabilities on a regular basis over the years. Many of the tests ASC is requested to perform are new or unique, requiring development and fabrication of custom flow measurement equipment. Today's testing capabilities include velocity, temperature, pressure, particulate sampling, gas species and emissions, and more.
Problems solved by ASC
ASC works with its customers to solve problems involving the flow of fluids (gases or liquids) in or around a wide variety of equipment or goods. Some problems involve simply the flow itself. These include such things as reducing pressure drop, eliminating flow-induced vibrations, or ensuring uniform flow through equipment that processes elements of the flow. Since fluid flow carries energy, heat transfer problems such as improving thermal mixing or heating/cooling characteristics are often solved. Similarly, ASC undertakes many problems involving the pneumatic transport of particulate and droplets. The chemical reactions in a flowing gas or liquid are also the subject of ASC studies. Methods used to solve these problems include: numerical simulation (including Computational fluid dynamics), wind tunnel testing, laboratory modeling (including scale models), and flow measurement/field testing.
Customer base
ASC customers include manufacturing and processing firms from a wide variety of industries, including:
Electricity / power generation
Renewable energy
Food processing
Manufacturing
Metals processing / heat treating
Heating, Ventilation, and Air Conditioning (HVAC) of buildings
Cement and mineral production and mining
Pulp and paper
Sporting equipment aerodynamics
Steel producers
Automotive industry and auto racing companies
Class I railroads and railcar manufacturers
Biotechnology firms
ASC has performed over 4000 engineering studies worldwide since 1975.
External links
Airflow Sciences Corporation web site
Airflow Sciences Equipment Division web site
Azore CFD Software web site
ASC YouTube page
Airflow Sciences Equipment YouTube page
Azore YouTube page
ASC LinkedIn
Fluid dynamics
Companies based in Wayne County, Michigan
Engineering companies of the United States
Technology companies established in 1975 | Airflow Sciences Corporation | [
"Chemistry",
"Engineering"
] | 833 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
11,756,462 | https://en.wikipedia.org/wiki/Hybrid%20cryptosystem | In cryptography, a hybrid cryptosystem is one which combines the convenience of a public-key cryptosystem with the efficiency of a symmetric-key cryptosystem. Public-key cryptosystems are convenient in that they do not require the sender and receiver to share a common secret in order to communicate securely. However, they often rely on complicated mathematical computations and are thus generally much more inefficient than comparable symmetric-key cryptosystems. In many applications, the high cost of encrypting long messages in a public-key cryptosystem can be prohibitive. This is addressed by hybrid systems by using a combination of both.
A hybrid cryptosystem can be constructed using any two separate cryptosystems:
a key encapsulation mechanism, which is a public-key cryptosystem
a data encapsulation scheme, which is a symmetric-key cryptosystem
The hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as in the key encapsulation scheme.
Note that for very long messages the bulk of the work in encryption/decryption is done by the more efficient symmetric-key scheme, while the inefficient public-key scheme is used only to encrypt/decrypt a short key value.
All practical implementations of public key cryptography today employ the use of a hybrid system. Examples include the TLS protocol and the SSH protocol, which use a public-key mechanism for key exchange (such as Diffie-Hellman) and a symmetric-key mechanism for data encapsulation (such as AES). The OpenPGP file format and the PKCS#7 file format are other examples.
Hybrid Public Key Encryption (HPKE, published as RFC 9180) is a modern standard for generic hybrid encryption. HPKE is used within multiple IETF protocols, including MLS and TLS Encrypted Client Hello.
Envelope encryption is an example of a usage of hybrid cryptosystems in cloud computing. In a cloud context, hybrid cryptosystems also enable centralized key management.
Example
To encrypt a message addressed to Alice in a hybrid cryptosystem, Bob does the following:
Obtains Alice's public key.
Generates a fresh symmetric key for the data encapsulation scheme.
Encrypts the message under the data encapsulation scheme, using the symmetric key just generated.
Encrypts the symmetric key under the key encapsulation scheme, using Alice's public key.
Sends both of these ciphertexts to Alice.
To decrypt this hybrid ciphertext, Alice does the following:
Uses her private key to decrypt the symmetric key contained in the key encapsulation segment.
Uses this symmetric key to decrypt the message contained in the data encapsulation segment.
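The exchange above can be sketched in code. The sketch below assumes the third-party Python cryptography package (a dependency chosen for the example, not something the article specifies), with RSA-OAEP standing in for the key encapsulation scheme and AES-GCM for the data encapsulation scheme; a production design would follow a vetted standard such as HPKE rather than a hand-rolled combination.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice's key pair (key encapsulation scheme)
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

# --- Bob encrypts ---
message = b"meet at noon"
session_key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key
nonce = os.urandom(12)
data_ct = AESGCM(session_key).encrypt(nonce, message, None)  # data encapsulation
key_ct = alice_public.encrypt(session_key, oaep)             # key encapsulation
# Bob sends (key_ct, nonce, data_ct) to Alice.

# --- Alice decrypts ---
recovered_key = alice_private.decrypt(key_ct, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, data_ct, None)
assert plaintext == message
```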
Security
If both the key encapsulation and data encapsulation schemes in a hybrid cryptosystem are secure against adaptive chosen ciphertext attacks, then the hybrid scheme inherits that property as well. However, it is possible to construct a hybrid scheme secure against adaptive chosen ciphertext attacks even if the key encapsulation has a slightly weakened security definition (though the security of the data encapsulation must be slightly stronger).
Envelope encryption
Envelope encryption is the term used for encryption with a hybrid cryptosystem as used by all major cloud service providers, often as part of a centralized key management system in cloud computing.
Envelope encryption gives names to the keys used in hybrid encryption: Data Encryption Keys (abbreviated DEK, and used to encrypt data) and Key Encryption Keys (abbreviated KEK, and used to encrypt the DEKs). In a cloud environment, encryption with envelope encryption involves generating a DEK locally, encrypting one's data using the DEK, and then issuing a request to wrap (encrypt) the DEK with a KEK stored in a potentially more secure service. Then, this wrapped DEK and encrypted message constitute a ciphertext for the scheme. To decrypt a ciphertext, the wrapped DEK is unwrapped (decrypted) via a call to a service, and then the unwrapped DEK is used to decrypt the encrypted message. In addition to the normal advantages of a hybrid cryptosystem, using asymmetric encryption for the KEK in a cloud context provides easier key management and separation of roles, but can be slower.
In cloud systems, such as Google Cloud Platform and Amazon Web Services, a key management system (KMS) can be available as a service. In some cases, the key management system will store keys in hardware security modules, which are hardware systems that protect keys with hardware features like intrusion resistance. This means that KEKs can also be more secure because they are stored on secure specialized hardware. Envelope encryption makes centralized key management easier because a centralized key management system only needs to store KEKs, which occupy less space, and requests to the KMS only involve sending wrapped and unwrapped DEKs, which use less bandwidth than transmitting entire messages. Since one KEK can be used to encrypt many DEKs, this also allows for less storage space to be used in the KMS. This also allows for centralized auditing and access control at one point of access.
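A minimal local sketch of the wrap/unwrap round trip, again assuming the Python cryptography package; in a real deployment the KEK would live inside the KMS or HSM and the wrap and unwrap steps would be API calls to that service rather than local operations.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)   # in practice held inside the KMS/HSM
dek = AESGCM.generate_key(bit_length=256)   # generated locally, one per object

# Encrypt the payload with the DEK, then wrap the DEK with the KEK
nonce = os.urandom(12)
blob = AESGCM(dek).encrypt(nonce, b"customer record", None)
wrapped_dek = aes_key_wrap(kek, dek)        # in the cloud, this is the KMS "wrap" call

# Later: unwrap the DEK and decrypt the payload
dek_again = aes_key_unwrap(kek, wrapped_dek)
assert AESGCM(dek_again).decrypt(nonce, blob, None) == b"customer record"
```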
See also
Transport Layer Security
Secure Shell
Key Encapsulation Mechanism
References
Cryptography | Hybrid cryptosystem | [
"Mathematics",
"Engineering"
] | 1,154 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
11,757,250 | https://en.wikipedia.org/wiki/CH2N2 |
CH2N2 may refer to:
Cyanamide, an organic compound
Diazirine, class of organic molecules with a cyclopropene-like ring, 3H-diazirene
Diazomethane, chemical compound discovered in 1894
Isodiazomethane, parent compound of a class of derivatives of general formula R2N–NC
Nitrilimine, class of organic compounds sharing a common functional group with the general structure R-CN-NR | CH2N2 | [
"Chemistry"
] | 108 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
9,269,065 | https://en.wikipedia.org/wiki/Precision%20tests%20of%20QED | Quantum electrodynamics (QED), a relativistic quantum field theory of electrodynamics, is among the most stringently tested theories in physics. The most precise and specific tests of QED consist of measurements of the electromagnetic fine-structure constant, α, in various physical systems. Checking the consistency of such measurements tests the theory.
Tests of a theory are normally carried out by comparing experimental results to theoretical predictions. In QED, there is some subtlety in this comparison, because theoretical predictions require as input an extremely precise value of α, which can only be obtained from another precision QED experiment. Because of this, the comparisons between theory and experiment are usually quoted as independent determinations of α. QED is then confirmed to the extent that these measurements of α from different physical sources agree with each other.
The agreement found this way is to within ten parts in a billion (10−8), based on the comparison of the electron anomalous magnetic dipole moment and the Rydberg constant from atom recoil measurements as described below. This makes QED one of the most accurate physical theories constructed thus far.
Besides these independent measurements of the fine-structure constant, many other predictions of QED have been tested as well.
Measurements of the fine-structure constant using different systems
Precision tests of QED have been performed in low-energy atomic physics experiments, high-energy collider experiments, and condensed matter systems. The value of α is obtained in each of these experiments by fitting an experimental measurement to a theoretical expression (including higher-order radiative corrections) that includes α as a parameter. The uncertainty in the extracted value of α includes both experimental and theoretical uncertainties. This program thus requires both high-precision measurements and high-precision theoretical calculations. Unless noted otherwise, all results below are taken from the same source.
Low-energy measurements
Anomalous magnetic dipole moments
The most precise measurement of α comes from the anomalous magnetic dipole moment, or g−2 (pronounced "g minus 2"), of the electron. To make this measurement, two ingredients are needed:
A precise measurement of the anomalous magnetic dipole moment, and
A precise theoretical calculation of the anomalous magnetic dipole moment in terms of α.
As of February 2007, the best measurement of the anomalous magnetic dipole moment of the electron was made by the group of Gerald Gabrielse at Harvard University, using a single electron caught in a Penning trap. The difference between the electron's cyclotron frequency and its spin precession frequency in a magnetic field is proportional to g−2. An extremely high precision measurement of the quantized energies of the cyclotron orbits, or Landau levels, of the electron, compared to the quantized energies of the electron's two possible spin orientations, gives a value for the electron's spin g-factor:
g/2 = ,
a precision of better than one part in a trillion. (The digits in parentheses indicate the standard uncertainty in the last listed digits of the measurement.)
The current state-of-the-art theoretical calculation of the anomalous magnetic dipole moment of the electron includes QED diagrams with up to four loops. Combining this with the experimental measurement of g yields the most precise value of α:
α−1 = ,
a precision of better than a part in a billion. This uncertainty is ten times smaller than the nearest rival method involving atom-recoil measurements.
A value of α can also be extracted from the anomalous magnetic dipole moment of the muon. The g-factor of the muon is extracted using the same physical principle as for the electron above – namely, that the difference between the cyclotron frequency and the spin precession frequency in a magnetic field is proportional to g−2. The most precise measurement comes from Brookhaven National Laboratory's muon g−2 experiment, in which polarized muons are stored in a cyclotron and their spin orientation is measured by the direction of their decay electrons. As of February 2007, the current world average muon g-factor measurement is,
g/2 = ,
a precision of better than one part in a billion. The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED.
See muon g–2 for current efforts to refine the measurement.
Atom-recoil measurements
This is an indirect method of measuring α, based on measurements of the masses of the electron, certain atoms, and the Rydberg constant. The Rydberg constant is known to seven parts in a trillion. The mass of the electron relative to that of caesium and rubidium atoms is also known with extremely high precision. If the mass of the electron can be measured with sufficiently high precision, then α can be found from the Rydberg constant according to α² = 2R∞h/(m_e c), where h is the Planck constant and c is the speed of light.
To get the mass of the electron, this method actually measures the mass of an 87Rb atom by measuring the recoil speed of the atom after it emits a photon of known wavelength in an atomic transition. Combining this with the ratio of electron to 87Rb atom, the result for α is,
α−1 = .
Because this measurement is the next-most-precise after the measurement of α from the electron's anomalous magnetic dipole moment described above, their comparison provides the most stringent test of QED: the value of α obtained here is within one standard deviation of that found from the electron's anomalous magnetic dipole moment, an agreement to within ten parts in a billion.
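As a numerical check of the relation above, the same combination of constants can be evaluated directly. The sketch below assumes the third-party scipy package for CODATA constant values; it is an illustration of the algebra, not one of the precision determinations discussed in this article.

```python
import math
from scipy import constants as k

# alpha**2 = 2 * Rydberg * h / (m_e * c)
alpha = math.sqrt(2 * k.Rydberg * k.h / (k.m_e * k.c))
print(1 / alpha)               # ~137.036
print(1 / k.fine_structure)    # CODATA value, for comparison
```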
Neutron Compton wavelength
This method of measuring α is very similar in principle to the atom-recoil method. In this case, the accurately known mass ratio of the electron to the neutron is used. The neutron mass is measured with high precision through a very precise measurement of its Compton wavelength. This is then combined with the value of the Rydberg constant to extract α. The result is,
α−1 = .
Hyperfine splitting
Hyperfine splitting is a splitting in the energy levels of an atom caused by the interaction between the magnetic moment of the nucleus and the combined spin and orbital magnetic moment of the electron. The hyperfine splitting in hydrogen, measured using Ramsey's hydrogen maser, is known with great precision. Unfortunately, the influence of the proton's internal structure limits how precisely the splitting can be predicted theoretically. This leads to the extracted value of α being dominated by theoretical uncertainty:
α−1 = .
The hyperfine splitting in muonium, an "atom" consisting of an electron and an antimuon, provides a more precise measurement of α because the muon has no internal structure:
α−1 = .
Lamb shift
The Lamb shift is a small difference in the energies of the 2 S1/2 and 2 P1/2 energy levels of hydrogen, which arises from a one-loop effect in quantum electrodynamics. The Lamb shift is proportional to α5 and its measurement yields the extracted value:
α−1 = .
Positronium
Positronium is an "atom" consisting of an electron and a positron. Whereas the calculation of the energy levels of ordinary hydrogen is contaminated by theoretical uncertainties from the proton's internal structure, the particles that make up positronium have no internal structure so precise theoretical calculations can be performed. The measurement of the splitting between the 2 3S1 and the 1 3S1 energy levels of positronium yields
α−1 = .
Measurements of α can also be extracted from the positronium decay rate. Positronium decays through the annihilation of the electron and the positron into two or more gamma-ray photons. The decay rate of the singlet ("para-positronium") 1S0 state yields
α−1 = ,
and the decay rate of the triplet ("ortho-positronium") 3S1 state yields
α−1 = .
This last result is the only serious discrepancy among the numbers given here, but there is some evidence that uncalculated higher-order quantum corrections give a large correction to the value quoted here.
High-energy QED processes
The cross sections of higher-order QED reactions at high-energy electron-positron colliders provide a determination of α. In order to compare the extracted value of α with the low-energy results, higher-order QED effects including the running of α due to vacuum polarization must be taken into account. These experiments typically achieve only percent-level accuracy, but their results are consistent with the precise measurements available at lower energies.
The cross section for e+e− → e+e−e+e− yields
α−1 = ,
and the cross section for e+e− → e+e−μ+μ− yields
α−1 = .
Condensed matter systems
The quantum Hall effect and the AC Josephson effect are exotic quantum interference phenomena in condensed matter systems. These two effects provide a standard electrical resistance and a standard frequency, respectively, which measure the charge of the electron with corrections that are strictly zero for macroscopic systems.
The quantum Hall effect yields
α−1 = ,
and the AC Josephson effect yields
α−1 = .
Other tests
QED predicts that the photon is a massless particle. A variety of highly sensitive tests have proven that the photon mass is either zero, or else extraordinarily small. One type of these tests, for example, works by checking Coulomb's law at high accuracy, as the photon's mass would be nonzero if Coulomb's law were modified.
QED predicts that when electrons get very close to each other, they behave as if they had a higher electric charge, due to vacuum polarization. This prediction was experimentally verified in 1997 using the TRISTAN particle accelerator in Japan.
QED effects like vacuum polarization and self-energy influence the electrons bound to a nucleus in a heavy atom due to extreme electromagnetic fields. A recent experiment on the ground state hyperfine splitting in 209Bi80+ and 209Bi82+ ions revealed a deviation from the theory by more than 7 standard uncertainties. Indications show that this deviation may originate from a wrong value of the nuclear magnetic moment of 209Bi.
See also
QED vacuum
Eötvös experiment, another very high accuracy test, of gravitation
References
External links
Particle Data Group (PDG)
PDG Review of the Muon Anomalous Magnetic Moment as of July 2007
PDG 2007 Listing of particle properties for electron
PDG 2007 Listing of particle properties for muon
Quantum electrodynamics
Electrodynamics | Precision tests of QED | [
"Mathematics"
] | 2,276 | [
"Electrodynamics",
"Dynamical systems"
] |
9,269,357 | https://en.wikipedia.org/wiki/Oxidative%20deamination | Oxidative deamination is a form of deamination that generates α-keto acids and other oxidized products from amine-containing compounds, and occurs primarily in the liver. Oxidative deamination is stereospecific, meaning it contains different stereoisomers as reactants and products; this process is either catalyzed by L or D- amino acid oxidase and L-amino acid oxidase is present only in the liver and kidney. Oxidative deamination is an important step in the catabolism of amino acids, generating a more metabolizable form of the amino acid, and also generating ammonia as a toxic byproduct. The ammonia generated in this process can then be neutralized into urea via the urea cycle.
Much of the oxidative deamination occurring in cells involves the amino acid glutamate, which can be oxidatively deaminated by the enzyme glutamate dehydrogenase (GDH), using NAD or NADP as a coenzyme. This reaction generates α-ketoglutarate (α-KG) and ammonia. Glutamate can then be regenerated from α-KG via the action of transaminases or aminotransferase, which catalyze the transfer of an amino group from an amino acid to an α-keto acid. In this manner, an amino acid can transfer its amine group to glutamate, after which GDH can then liberate ammonia via oxidative deamination. This is a common pathway during amino acid catabolism.
Another enzyme responsible for oxidative deamination is monoamine oxidase, which catalyzes the deamination of monoamines via addition of oxygen. This generates the corresponding ketone- or aldehyde-containing form of the molecule, and generates ammonia. Monoamine oxidases MAO-A and MAO-B play vital roles in the degradation and inactivation of monoamine neurotransmitters such as serotonin and epinephrine. Monoamine oxidases are important drug targets, targeted by MAO inhibitors (MAOIs) such as selegiline. Glutamate dehydrogenase also plays an important role in oxidative deamination.
References
External links
Diagram from Elmhurst
Metabolism
Protein catabolism | Oxidative deamination | [
"Chemistry",
"Biology"
] | 496 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
9,270,946 | https://en.wikipedia.org/wiki/Calcium%20aluminates | Calcium aluminates are a range of materials obtained by heating calcium oxide and aluminium oxide together at high temperatures. They are encountered in the manufacture of refractories and cements.
The stable phases shown in the phase diagram (formed at atmospheric pressure under an atmosphere of normal humidity) are:
Tricalcium aluminate, 3CaO·Al2O3 (C3A)
Dodecacalcium hepta-aluminate, 12CaO·7Al2O3 (C12A7) (once known as mayenite)
Monocalcium aluminate, CaO·Al2O3 (CA) (occurring in nature as krotite and dmitryivanovite – two polymorphs)
Monocalcium dialuminate, CaO·2Al2O3 (CA2) (occurring in nature as grossite)
Monocalcium hexa-aluminate, CaO·6Al2O3 (CA6) (occurring in nature as hibonite, a representative of magnetoplumbite group)
In addition, other phases include:
Dicalcium aluminate, 2CaO·Al2O3 (C2A), which exists only at pressures above 2500 MPa. The crystal is orthorhombic, with density 3480 kg·m−3. The natural dicalcium aluminate, brownmillerite, may form at normal pressure but elevated temperature in pyrometamorphic zones, e.g., in burning coal-mining heaps.
Pentacalcium trialuminate, 5CaO·3Al2O3 (C5A3), forms only under an anhydrous and oxygen free atmosphere. The crystal is orthorhombic, with a density of 3067 kg·m−3. It reacts rapidly with water.
Tetracalcium trialuminate, 4CaO·3Al2O3 (C4A3), is a metastable phase formed by dehydrating 4CaO·3Al2O3·3H2O (C4A3H3).
Hydration reaction
In contrast to Portland cements, calcium aluminates do not release calcium hydroxide (Ca(OH)2, portlandite) or lime during their hydration.
See also
Calcium aluminate cements
Cement
Cement chemist notation (CCN), in which the following abbreviations for calcium and aluminium oxides are defined as:
C = CaO
A = Al2O3
Hydrocalumite
Mayenite
Ye'elimite, a rare natural anhydrous calcium sulfoaluminate
References
Further reading
Aluminates
Calcium compounds
Cement
Ceramic materials
Refractory materials | Calcium aluminates | [
"Physics",
"Engineering"
] | 526 | [
"Refractory materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
9,272,422 | https://en.wikipedia.org/wiki/Specific%20ultraviolet%20absorbance | Specific ultraviolet absorbance (SUVA) is the absorbance of ultraviolet light in a water sample at a specified wavelength that is normalized for dissolved organic carbon (DOC) concentration. Specific UV absorbance (SUVA) wavelengths have analytical uses to measure the aromatic character of dissolved organic matter by detecting density of electron conjugation which is associated with aromatic bonds.
Derivation
To derive SUVA, absorbance of UVC light (a subtype of the UV spectrum) at 254 nm or 280 nm is first measured in units of absorbance per meter of path length; often the sample must be diluted with ultrapure water because absorbance can be high. Because increasing dissolved organic carbon concentration increases absorbance in the UV range, the absorbance has to be normalized to the concentration of dissolved organic carbon in mg per L to ascertain differences in the aromatic quality of the water.
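A minimal sketch of the normalisation: divide the absorbance per metre of path length by the DOC concentration. The cuvette length, absorbance and DOC figures below are assumptions for the example, not measured data.

```python
def suva254(absorbance_254nm, path_length_m, doc_mg_per_l):
    """SUVA254 in L/(mg*m): UV254 absorbance normalised to path length and DOC."""
    return (absorbance_254nm / path_length_m) / doc_mg_per_l

# Illustrative: absorbance 0.12 in a 1 cm cuvette, DOC of 4 mg/L
print(round(suva254(0.12, 0.01, 4.0), 2), "L/(mg*m)")   # -> 3.0
```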
Aromatic character is used in the study of dissolved organic matter, from mineral soils or organic soils, as an assay of whether the dissolved organic carbon in the water is labile (a ready source of energy) or is from a relatively old, recalcitrant source of carbon. However, although SUVA is a good indicator of aromaticity, caution must be used when inferring reactivity from it.
Measures of water purity often rely on measuring turbidity, not aromaticity.
References
Further reading
Water chemistry
Spectroscopy | Specific ultraviolet absorbance | [
"Physics",
"Chemistry"
] | 282 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"nan",
"Spectroscopy"
] |
9,272,721 | https://en.wikipedia.org/wiki/Kinematic%20diagram | In mechanical engineering, a kinematic diagram or kinematic scheme (also called a joint map or skeleton diagram) illustrates the connectivity of links and joints of a mechanism or machine rather than the dimensions or shape of the parts. Often links are presented as geometric objects, such as lines, triangles or squares, that support schematic versions of the joints of the mechanism or machine.
For example, the figures show the kinematic diagrams (i) of the slider-crank that forms a piston and crank-shaft in an engine, and (ii) of the first three joints for a PUMA manipulator.
Linkage graph
A kinematic diagram can be formulated as a graph by representing the joints of the mechanism as vertices and the links as edges of the graph. This version of the kinematic diagram has proven effective in enumerating kinematic structures in the process of machine design.
An important consideration in this design process is the degree of freedom of the system of links and joints, which is determined using the Chebychev–Grübler–Kutzbach criterion.
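A minimal sketch of the planar Chebychev–Grübler–Kutzbach criterion, M = 3(N − 1) − 2j1 − j2, where N counts links including the ground link, j1 counts one-degree-of-freedom (full) joints and j2 counts two-degree-of-freedom (half) joints; the example mechanisms are standard textbook cases assumed for illustration, not taken from the article's figures.

```python
def planar_mobility(n_links, full_joints, half_joints=0):
    """Chebychev-Gruebler-Kutzbach criterion for planar mechanisms:
    M = 3*(N - 1) - 2*j1 - j2."""
    return 3 * (n_links - 1) - 2 * full_joints - half_joints

print(planar_mobility(4, 4))   # four-bar linkage (4 pins) -> 1 degree of freedom
print(planar_mobility(4, 4))   # slider-crank (3 pins + 1 slider) -> 1
print(planar_mobility(5, 5))   # five-bar linkage -> 2, needs two inputs
```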
Elements of machines
Elements of kinematic diagrams include the frame, which is the frame of reference for all the moving components, as well as links (kinematic pairs) and joints. Primary joints include pins, sliders and other elements that allow pure rotation or pure linear motion. Higher-order joints also exist that allow a combination of rotation or linear motion. Kinematic diagrams also include points of interest and other important components.
See also
Free body diagram
Kinematic synthesis
Left-hand–right-hand activity chart
References
Mechanisms (engineering)
Diagrams
Classical mechanics | Kinematic diagram | [
"Physics",
"Engineering"
] | 378 | [
"Mechanics",
"Classical mechanics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
9,274,067 | https://en.wikipedia.org/wiki/Marangoni%20number | The Marangoni number (Ma) is, as usually defined, the dimensionless number that compares the rate of transport due to Marangoni flows, with the rate of transport of diffusion. The Marangoni effect is flow of a liquid due to gradients in the surface tension of the liquid. Diffusion is of whatever is creating the gradient in the surface tension. Thus as the Marangoni number compares flow and diffusion timescales it is a type of Péclet number.
The Marangoni number is defined as the ratio of the advective transport rate due to Marangoni flow to the diffusive transport rate of the quantity setting up the surface tension gradient.
A common example is surface tension gradients caused by temperature gradients. Then the relevant diffusion process is that of thermal energy (heat). Another is surface gradients caused by variations in the concentration of surfactants, where the diffusion is now that of surfactant molecules.
The number is named after Italian scientist Carlo Marangoni, although its use dates from the 1950s and it was neither discovered nor used by Carlo Marangoni.
The Marangoni number for a simple liquid of viscosity μ with a surface tension change Δγ over a distance L parallel to the surface can be estimated as follows. Note that we assume that L is the only length scale in the problem, which in practice implies that the liquid be at least L deep. The transport rate is usually estimated using the equations of Stokes flow, where the fluid velocity is obtained by equating the stress gradient to the viscous dissipation. A surface tension is a force per unit length, so the resulting stress must scale as Δγ/L, while the viscous stress scales as μu/L, for u the speed of the Marangoni flow. Equating the two we have a flow speed u = Δγ/μ. As Ma is a type of Péclet number, it is a velocity times a length, divided by a diffusion constant, D. Here this is the diffusion constant of whatever is causing the surface tension difference. So, Ma = Δγ·L/(μ·D).
Marangoni number due to thermal gradients
A common application is to a layer of liquid, such as water, when there is a temperature difference across this layer. This could be due to the liquid evaporating or being heated from below. There is a surface tension at the surface of a liquid that depends on temperature, typically as the temperature increases the surface tension decreases. Thus if due to a small fluctuation temperature, one part of the surface is hotter than another, there will be flow from the hotter part to the colder part, driven by this difference in surface tension, this flow is called the Marangoni effect. This flow will transport thermal energy, and the Marangoni number compares the rate at which thermal energy is transported by this flow to the rate at which thermal energy diffuses.
For a liquid layer of thickness L, viscosity μ and thermal diffusivity α, with a surface tension that changes with temperature at a rate dγ/dT and a temperature difference ΔT across the layer, the Marangoni number can be calculated using the following formula: Ma = |dγ/dT|·ΔT·L/(μ·α).
When Ma is small thermal diffusion dominates and there is no flow, but for large Ma, flow (convection) occurs, driven by the gradients in the surface tension. This is called Bénard-Marangoni convection.
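A small sketch evaluating the formula above for an illustrative case; the property values (surface tension temperature coefficient, viscosity and thermal diffusivity roughly appropriate for water) and the layer depth and temperature difference are assumptions for the example, not data from the article.

```python
def marangoni_number(dsigma_dT, delta_T, depth, viscosity, thermal_diffusivity):
    """Ma = |dγ/dT| * ΔT * L / (μ * α)."""
    return abs(dsigma_dT) * delta_T * depth / (viscosity * thermal_diffusivity)

# Rough, illustrative values for a 1 mm water layer with a 1 K temperature difference
ma = marangoni_number(dsigma_dT=-1.5e-4,          # N/(m*K), approximate for water
                      delta_T=1.0,                 # K
                      depth=1e-3,                  # m
                      viscosity=1.0e-3,            # Pa*s
                      thermal_diffusivity=1.4e-7)  # m^2/s
print(round(ma))   # of order 10^3, well into the convecting regime
```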
References
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics | Marangoni number | [
"Physics",
"Chemistry",
"Engineering"
] | 647 | [
"Thermodynamic properties",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
975,309 | https://en.wikipedia.org/wiki/Cementation%20process | The cementation process is an obsolete technology for making steel by carburization of iron. Unlike modern steelmaking, it increased the amount of carbon in the iron. It was apparently developed before the 17th century. Derwentcote Steel Furnace, built in 1720, is the earliest surviving example of a cementation furnace. Another example in the UK is the cementation furnace in Doncaster Street, Sheffield.
Origins
The process was described in a treatise published in Prague in 1574. It was invented by Johann Nussbaum of Magdeburg, who began operations at Nuremberg (with partners) in 1601. The process was patented in England by William Ellyot and Mathias Meysey in 1614. At that date, the "invention" could consist merely of the introduction of a new industry or product, or even a mere monopoly. They evidently soon transferred the patent to Sir Basil Brooke, but he was forced to surrender it in 1619. A clause in the patent prohibiting the import of steel was found to be undesirable because he could not supply as much good steel as was needed. Brooke's furnaces were probably in his manor of Madeley at Coalbrookdale (which certainly existed before the English Civil War), where two cementation furnaces have been excavated (P. Belford and R. A. Ross, 'English steelmaking in the seventeenth century: excavation of two cementation furnaces at Coalbrookdale', Historical Metallurgy 41(2) (2007), 105–123). He probably used bar iron from the Forest of Dean, where he was a partner in farming the King's ironworks in two periods.
By 1631, it was recognised that Swedish iron was the best raw material and then or later particularly certain marks (brands) such as double bullet (so called from the mark OO) from Österby and hoop L from Leufsta (now Lövsta), whose mark consisted of an L in a circle, both belonging to Louis De Geer and his descendants. These were among the first ironworks in Sweden to use the Walloon process of fining iron, producing what was known in England as oregrounds iron. It was so called from the Swedish port of Öregrund, north of Stockholm, in whose hinterland most of the ironworks lay. The ore used came ultimately from the Dannemora mine.
Process
The process begins with wrought iron and charcoal. It uses one or more long stone pots inside a furnace. Typically, in Sheffield, each pot was 14 feet by 4 feet and 3.5 feet deep. Iron bars and charcoal are packed in alternating layers, with a top layer of charcoal and then refractory matter to make the pot or "coffin" airtight. Some manufacturers used a mixture of powdered charcoal, soot and mineral salts, called cement powder. In larger works, up to 16 tons of iron were treated in each cycle, though it can be done on a small scale, such as in a small furnace or blacksmith's forge.
Depending on the thickness of the iron bars, the pots were then heated from below for a week or more. Bars were regularly examined and when the correct condition was reached the heat was withdrawn and the pots were left until cool—usually around fourteen days. The iron had gained a little over 1% in mass from the carbon in the charcoal, and had become heterogeneous bars of blister steel.
The bars were then shortened, bound, heated and forge welded together to become shear steel. It would be cut and re-welded multiple times, with each new weld producing a more homogeneous, higher quality steel. This would be done at most 3–4 times, as more is unnecessary and could potentially cause carbon loss from the steel.
Alternatively they could be broken up and melted in a crucible using a crucible furnace with a flux to become crucible steel (at the time also called cast steel), a process devised by Benjamin Huntsman in Sheffield in the 1740s.
Similar processes
Brass production
In the early modern period, brass, an alloy of copper and zinc, was usually produced by a cementation process in which metallic copper was heated with calamine, a zinc ore, to make calamine brass.
Notes
References
K. C. Barraclough, Steel before Bessemer I: Blister Steel: The Birth of an Industry (1985).
K. C. Barraclough, "Swedish Iron and Sheffield Steel", History of Technology 12 (1990), 1–39.
Dorian Gerhold, "The steel industry in England, 1614-1740", in R.W. Hoyle (ed.), "Histories of people and landscape: essays on the Sheffield region in memory of David Hey" (2021), 65-86
P. W. King, "The Cartel in Oregrounds Iron", Journal of Industrial History 6 (2003), 25–48.
R. J. MacKenzie and J. A. Whiteman, "Why pay more? An archaeometallurgical examination of 19th century Swedish Wrought iron and Sheffield blister steel", Historical Metallurgy 40(2) (2006), 138–49.
Steelmaking
Metallurgical processes
Obsolete technologies | Cementation process | [
"Chemistry",
"Materials_science"
] | 1,080 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
975,450 | https://en.wikipedia.org/wiki/Argument%20of%20periapsis | The argument of periapsis (also called argument of perifocus or argument of pericenter), symbolized as ω (omega), is one of the orbital elements of an orbiting body. Parametrically, ω is the angle from the body's ascending node to its periapsis, measured in the direction of motion.
For specific types of orbits, terms such as argument of perihelion (for heliocentric orbits), argument of perigee (for geocentric orbits), argument of periastron (for orbits around stars), and so on, may be used (see apsis for more information).
An argument of periapsis of 0° means that the orbiting body will be at its closest approach to the central body at the same moment that it crosses the plane of reference from South to North. An argument of periapsis of 90° means that the orbiting body will reach periapsis at its northmost distance from the plane of reference.
Adding the argument of periapsis to the longitude of the ascending node gives the longitude of the periapsis. However, especially in discussions of binary stars and exoplanets, the terms "longitude of periapsis" or "longitude of periastron" are often used synonymously with "argument of periapsis".
Calculation
In astrodynamics the argument of periapsis ω can be calculated as ω = arccos[(n · e) / (|n| |e|)].
If ez < 0 then ω → 2π − ω.
where:
n is a vector pointing towards the ascending node (i.e. the z-component of n is zero),
e is the eccentricity vector (a vector pointing towards the periapsis).
In the case of equatorial orbits (which have no ascending node), the argument is strictly undefined. However, if the convention of setting the longitude of the ascending node Ω to 0 is followed, then the value of ω follows from the two-dimensional case: ω = atan2(ey, ex).
If the orbit is clockwise (i.e. (r × v)z < 0) then ω → 2π − ω.
where:
ex and ey are the x- and y-components of the eccentricity vector e.
In the case of circular orbits it is often assumed that the periapsis is placed at the ascending node and therefore ω = 0. However, in the professional exoplanet community, ω = 90° is more often assumed for circular orbits, which has the advantage that the time of a planet's inferior conjunction (which would be the time the planet would transit if the geometry were favorable) is equal to the time of its periastron.
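The recipe above can be sketched numerically with NumPy (an assumed dependency). The eccentricity and node vectors are built from an illustrative, non-equatorial state vector; for an equatorial orbit the node vector vanishes and, as noted in the equatorial-orbit discussion above, the formula is undefined.

```python
import numpy as np

def argument_of_periapsis(r, v, mu):
    """Return ω in radians from position r, velocity v, and gravitational parameter mu."""
    h = np.cross(r, v)                          # specific angular momentum
    n = np.cross([0.0, 0.0, 1.0], h)            # node vector, points to the ascending node
    e = np.cross(v, h) / mu - r / np.linalg.norm(r)   # eccentricity vector
    omega = np.arccos(np.dot(n, e) / (np.linalg.norm(n) * np.linalg.norm(e)))
    if e[2] < 0:                                # periapsis below the reference plane
        omega = 2 * np.pi - omega
    return omega

# Illustrative Earth orbit state vector (km, km/s, km^3/s^2)
r = np.array([7000.0, 0.0, 1000.0])
v = np.array([0.0, 7.3, 1.0])
print(np.degrees(argument_of_periapsis(r, v, mu=398600.4418)))
```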
See also
Apsidal precession
Kepler orbit
Orbital mechanics
Orbital node
References
External links
Argument Of Perihelion in Swinburne University Astronomy Website
Orbits
Angle | Argument of periapsis | [
"Physics"
] | 572 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Angle"
] |
976,365 | https://en.wikipedia.org/wiki/Divided%20differences | In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.
Divided differences is a recursive division process. Given a sequence of data points (x_0, y_0), …, (x_n, y_n), the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.
Definition
Given n + 1 data points (x_0, y_0), …, (x_n, y_n),
where the x_k are assumed to be pairwise distinct, the forward divided differences are defined by the recursion [y_i] := y_i and [y_i, …, y_{i+j}] := ([y_{i+1}, …, y_{i+j}] − [y_i, …, y_{i+j−1}]) / (x_{i+j} − x_i) for j ≥ 1.
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of j above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding x-values:
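A short sketch of the recursion in code: it builds the triangular table column by column and reads the Newton-form coefficients off the top entry of each column. The sample data are an assumed toy example (samples of x²), not taken from the article.

```python
def divided_differences(xs, ys):
    """Return the triangular table; table[j][i] holds [y_i, ..., y_{i+j}]."""
    n = len(xs)
    table = [list(ys)]
    for j in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + j] - xs[i])
                      for i in range(n - j)])
    return table

def newton_coefficients(xs, ys):
    """Coefficients [y0], [y0,y1], ..., [y0,...,yn] of the Newton form."""
    return [col[0] for col in divided_differences(xs, ys)]

xs = [1.0, 2.0, 4.0]
ys = [1.0, 4.0, 16.0]                 # samples of f(x) = x**2
print(newton_coefficients(xs, ys))    # [1.0, 3.0, 1.0] -> 1 + 3(x-1) + (x-1)(x-2)
```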
Notation
Note that the divided difference depends on the values and , but the notation hides the dependency on the x-values. If the data points are given by a function f,
one sometimes writes the divided difference in the notation
Other notations for the divided difference of the function ƒ on the nodes x0, ..., xn are:
Example
Divided differences for and the first few values of :
Thus, the table corresponding to these terms up to two columns has the following form:
Properties
Linearity
Leibniz rule
Divided differences are symmetric: if σ is a permutation of {0, …, n} then [y_0, …, y_n] = [y_{σ(0)}, …, y_{σ(n)}]
Polynomial interpolation in the Newton form: if is a polynomial function of degree , and is the divided difference, then
If is a polynomial function of degree , then
Mean value theorem for divided differences: if is n times differentiable, then for a number in the open interval determined by the smallest and largest of the 's.
Matrix form
The divided difference scheme can be put into an upper triangular matrix:
Then it holds
if is a scalar
This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes x form a commutative ring.
Since is a triangular matrix, its eigenvalues are obviously .
Let be a Kronecker delta-like function, that is Obviously , thus is an eigenfunction of the pointwise function multiplication. That is is somehow an "eigenmatrix" of : . However, all columns of are multiples of each other, the matrix rank of is 1. So you can compose the matrix of all eigenvectors of from the -th column of each . Denote the matrix of eigenvectors with . Example The diagonalization of can be written as
Polynomials and power series
The matrix
contains the divided difference scheme for the identity function with respect to the nodes , thus contains the divided differences for the power function with exponent .
Consequently, you can obtain the divided differences for a polynomial function by applying to the matrix : If
and
then
This is known as Opitz' formula.
Now consider increasing the degree of to infinity, i.e. turn the Taylor polynomial into a Taylor series.
Let be a function which corresponds to a power series.
You can compute the divided difference scheme for by applying the corresponding matrix series to :
If
and
then
Alternative characterizations
Expanded form
With the help of the polynomial function this can be written as
Peano form
If and , the divided differences can be expressed as
where is the -th derivative of the function and is a certain B-spline of degree for the data points , given by the formula
This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences and is the Peano kernel for the divided differences, all named after Giuseppe Peano.
Forward and backward differences
When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.
Given n+1 data points (x_0, y_0), …, (x_n, y_n)
with equidistant nodes x_k = x_0 + k·h for a fixed step size h,
the forward differences are defined as Δ⁰y_k := y_k and Δ^{j+1} y_k := Δ^j y_{k+1} − Δ^j y_k,
whereas the backward differences are defined as: ∇⁰y_k := y_k and ∇^{j+1} y_k := ∇^j y_k − ∇^j y_{k−1}.
Thus the forward difference table is written as:
whereas the backwards difference table is written as:
The relationship between divided differences and forward differences is [y_0, …, y_j] = Δ^j y_0 / (j! h^j),
whereas for backward differences: [y_n, …, y_{n−j}] = ∇^j y_n / (j! h^j).
See also
Difference quotient
Neville's algorithm
Polynomial interpolation
Mean value theorem for divided differences
Nörlund–Rice integral
Pascal's triangle
References
External links
An implementation in Haskell.
Finite differences | Divided differences | [
"Mathematics"
] | 926 | [
"Mathematical analysis",
"Finite differences"
] |
976,673 | https://en.wikipedia.org/wiki/Hilbert%E2%80%93Schmidt%20operator | In mathematics, a Hilbert–Schmidt operator, named after David Hilbert and Erhard Schmidt, is a bounded operator that acts on a Hilbert space and has finite Hilbert–Schmidt norm
where is an orthonormal basis. The index set need not be countable. However, the sum on the right must contain at most countably many non-zero terms, to have meaning. This definition is independent of the choice of the orthonormal basis.
In finite-dimensional Euclidean space, the Hilbert–Schmidt norm is identical to the Frobenius norm.
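This identity is easy to check numerically. The sketch below, assuming NumPy, computes the Hilbert–Schmidt norm of a random complex matrix three ways — from an orthonormal basis, from the trace of A*A, and as the Frobenius norm — and all three agree.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Sum of squared norms of A applied to the standard basis vectors
basis_sum = np.sqrt(sum(np.linalg.norm(A @ e) ** 2 for e in np.eye(3)))
# Trace form: ||A||_HS^2 = Tr(A* A)
trace_form = np.sqrt(np.trace(A.conj().T @ A).real)
# Built-in Frobenius norm
frobenius = np.linalg.norm(A, "fro")

print(basis_sum, trace_form, frobenius)   # all three agree
```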
‖·‖ is well defined
The Hilbert–Schmidt norm does not depend on the choice of orthonormal basis. Indeed, if {e_i} and {f_j} are two such bases, then
Σ_i ‖A e_i‖² = Σ_i Σ_j |⟨A e_i, f_j⟩|² = Σ_j Σ_i |⟨e_i, A* f_j⟩|² = Σ_j ‖A* f_j‖². Since the right-hand side does not involve the basis {e_i}, the left-hand side takes the same value for every orthonormal basis; replacing A with A* in the same identity shows that this common value also equals Σ_j ‖A f_j‖². The independence follows.
Examples
An important class of examples is provided by Hilbert–Schmidt integral operators.
Every bounded operator with a finite-dimensional range (these are called operators of finite rank) is a Hilbert–Schmidt operator.
The identity operator on a Hilbert space is a Hilbert–Schmidt operator if and only if the Hilbert space is finite-dimensional.
Given any x and y in H, define x ⊗ y : H → H by (x ⊗ y)(z) = ⟨z, y⟩ x, which is a continuous linear operator of rank 1 and thus a Hilbert–Schmidt operator;
moreover, for any bounded linear operator on (and into ), .
If T is a bounded compact operator with eigenvalues λ_1, λ_2, … of |T|, where each eigenvalue is repeated as often as its multiplicity, then T is Hilbert–Schmidt if and only if Σ_i λ_i² < ∞, in which case the Hilbert–Schmidt norm of T is ‖T‖_HS = (Σ_i λ_i²)^{1/2}.
If k ∈ L²(X × X, μ × μ), where (X, μ) is a measure space, then the integral operator K with kernel k is a Hilbert–Schmidt operator and ‖K‖_HS = ‖k‖_{L²}.
Space of Hilbert–Schmidt operators
The product of two Hilbert–Schmidt operators has finite trace-class norm; therefore, if A and B are two Hilbert–Schmidt operators, the Hilbert–Schmidt inner product can be defined as ⟨A, B⟩_HS = Tr(A* B).
The Hilbert–Schmidt operators form a two-sided *-ideal in the Banach algebra of bounded operators on .
They also form a Hilbert space, denoted by B_HS(H) or B_2(H), which can be shown to be naturally isometrically isomorphic to the tensor product of Hilbert spaces H* ⊗ H,
where H* is the dual space of H.
The norm induced by this inner product is the Hilbert–Schmidt norm under which the space of Hilbert–Schmidt operators is complete (thus making it into a Hilbert space).
The space of all bounded linear operators of finite rank (i.e. that have a finite-dimensional range) is a dense subset of the space of Hilbert–Schmidt operators (with the Hilbert–Schmidt norm).
The set of Hilbert–Schmidt operators is closed in the norm topology if, and only if, is finite-dimensional.
Properties
Every Hilbert–Schmidt operator is a compact operator.
A bounded linear operator T is Hilbert–Schmidt if and only if the same is true of the operator |T| := (T*T)^{1/2}, in which case the Hilbert–Schmidt norms of T and |T| are equal.
Hilbert–Schmidt operators are nuclear operators of order 2, and are therefore compact operators.
If S : H_1 → H_2 and T : H_2 → H_3 are Hilbert–Schmidt operators between Hilbert spaces then the composition T∘S is a nuclear operator.
If T is a bounded linear operator then we have ‖T‖ ≤ ‖T‖_HS.
T is a Hilbert–Schmidt operator if and only if the trace of the nonnegative self-adjoint operator T*T is finite, in which case ‖T‖²_HS = Tr(T*T).
If S is a bounded linear operator on H and T is a Hilbert–Schmidt operator on H then ST and TS are Hilbert–Schmidt, ‖ST‖_HS ≤ ‖S‖ ‖T‖_HS, and ‖TS‖_HS ≤ ‖S‖ ‖T‖_HS. In particular, the composition of two Hilbert–Schmidt operators is again Hilbert–Schmidt (and even a trace class operator).
The space of Hilbert–Schmidt operators on is an ideal of the space of bounded operators that contains the operators of finite-rank.
If T is a Hilbert–Schmidt operator on H then ‖T‖²_HS = Σ_{i,j} |⟨e_i, T e_j⟩|², where {e_i} is an orthonormal basis of H, and ‖T‖_HS is the Schatten norm of T for p = 2. In Euclidean space, this norm is also called the Frobenius norm.
See also
References
Linear operators
Operator theory | Hilbert–Schmidt operator | [
"Mathematics"
] | 796 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Linear operators"
] |
977,206 | https://en.wikipedia.org/wiki/Stabilator | A stabilator is a fully movable aircraft horizontal stabilizer. It serves the usual functions of longitudinal stability, control and stick force requirements otherwise performed by the separate parts of a conventional horizontal stabilizer (which is fixed) and elevator (which is adjustable). Apart from reduced drag, particularly at high Mach numbers, it is a useful device for changing the aircraft balance within wide limits, and for reducing stick forces.
Stabilator is a portmanteau of stabilizer and elevator. It is also known as an all-moving tailplane (British English), all-movable tail(plane), all-moving stabilizer, all-flying tail (American English), all-flying horizontal tail, full-flying stabilizer, and slab tailplane.
General aviation
Because it involves a moving balanced surface, a stabilator can allow the pilot to generate a given pitching moment with a lower control force. Due to the high forces involved in tail balancing loads, stabilators are designed to pivot about their aerodynamic center (near the tail's mean quarter-chord). This is the point at which the pitching moment is constant regardless of the angle of attack, and thus any movement of the stabilator can be made without added pilot effort. However, to be certified by the appropriate regulatory agency, an airplane must show an increasing resistance to an increasing pilot input (movement). To provide this resistance, stabilators on small aircraft contain an anti-servo tab (usually acting also as a trim tab) that deflects in the same direction as the stabilator, thus providing an aerodynamic force resisting the pilot's input. General aviation aircraft with stabilators include the Piper Cherokee and the Cessna 177. The Glaser-Dirks DG-100 glider initially used a stabilator without an anti-servo tab to increase resistance: as a result, the pitch movement of the glider is very sensitive. Later models used a conventional stabilizer and elevator.
Military
All-flying tailplanes were used on many pioneer aircraft and the popular Morane-Saulnier G, H and L monoplanes from France as well as the early Fokker Eindecker monoplane and Halberstadt D.II biplane fighters from Germany all flew with them, although at the cost of stability: none of these aircraft, with the possible exception of the biplane Halberstadts, could be flown hands-off.
Stabilators were developed to achieve adequate pitch control in supersonic flight, and are almost universal on modern military combat aircraft.
The British wartime Miles M.52 supersonic project was designed with stabilators. Though the design only flew as a scale rocket, its all-flying tail was tested on the Miles Falcon. The contemporary American supersonic project, the Bell X-1, used separately-adjustable horizontal stabilizer and elevators allowing movement as a single surface or elevator deflection at a fixed tailplane setting.
Entering service in 1951, the Boeing B-47 Stratojet was the world's first purpose-built jet bomber to include a one-piece stabilator design. A stabilator was considered for the Boeing B-52 Stratofortress but rejected due to the unreliability of hydraulics at the time.
The North American F-86 Sabre, the first U.S. Air Force aircraft which could go supersonic (although in a shallow dive), was introduced with a conventional horizontal stabilizer and elevators, which was eventually replaced with a stabilator.
When stabilators can move differentially to perform the roll control function of ailerons, as they do on many modern fighter aircraft, they are known as tailerons or rolling tails. A canard surface, looking like a stabilator but not stabilizing like a tailplane, can also be mounted in front of the main wing in a canard configuration (Curtiss-Wright XP-55 Ascender).
Stabilators on military aircraft have the same problem of excessively light control forces (inducing overcontrol) as general aviation aircraft. Unlike light aircraft, supersonic aircraft are not fitted with anti-servo tabs, which would add unacceptable drag. In older jet fighter aircraft, a resisting force was generated within the control system, either by springs or a resisting hydraulic force, rather than by an external anti-servo tab. For example, the North American F-100 Super Sabre used gearing and a variable-stiffness spring attached to the control stick to provide an acceptable resistance to pilot input. In modern fighters, control inputs are processed by computers ("fly by wire"), and there is no direct connection between the pilot's stick and the stabilator.
Airliners
Most modern airliners do not have a stabilator. Instead they have an adjustable horizontal stabilizer and a separate elevator control. The movable horizontal stabilizer is adjusted to keep the pitch axis in trim during flight as the speed changes, or as fuel is burned and the center of gravity moves. These adjustments are commanded by the autopilot when it is engaged, or by the human pilot if the plane is being flown manually. Adjustable stabilizers are not the same as stabilators: a stabilator is controlled by the pilot's control yoke or stick, whereas an adjustable stabilizer is controlled by the trim system.
In the Boeing 737, the adjustable stabilizer trim system is powered by an electrically operated jackscrew.
One example of an airliner with a genuine stabilator used for flight control is the Lockheed L-1011.
See also
Canard
Index of aviation articles
References
External links
Stabilators (NASA) – Includes Java applet
Aerodynamics
Aircraft controls
Aircraft wing design
ja:スタビレーター | Stabilator | [
"Chemistry",
"Engineering"
] | 1,173 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
977,522 | https://en.wikipedia.org/wiki/Rocket-powered%20aircraft | A rocket-powered aircraft or rocket plane is an aircraft that uses a rocket engine for propulsion, sometimes in addition to airbreathing jet engines. Rocket planes can achieve much higher speeds than similarly sized jet aircraft, but typically for at most a few minutes of powered operation, followed by a gliding flight. Unhindered by the need for oxygen from the atmosphere, they are suitable for very high-altitude flight. They are also capable of delivering much higher acceleration and shorter takeoffs. Many rocket aircraft may be drop launched from transport planes, as take-off from ground may leave them with insufficient time to reach high altitudes.
Rockets have been used simply to assist the main propulsion in the form of jet assisted take off (JATO) also known as rocket-assisted takeoff (RATO or RATOG). Not all rocket planes are of the conventional takeoff like "normal" aircraft. Some types have been air-launched from another plane, while other types have taken off vertically – nose in the air and tail to the ground ("tail-sitters").
Because of the use of heavy propellants and other practical difficulties of operating rockets, the majority of rocket planes have been built for experimental or research use, as interceptor fighters and space aircraft.
History
Background
Peruvian polymath Pedro Paulet conceptualized the Avión Torpedo in 1902 – a liquid-propellant rocket-powered aircraft that featured a canopy fixed to a delta tiltwing – spending decades seeking donors for the aircraft while serving as a diplomat in Europe and Latin America. Paulet's concept of using liquid-propellant was decades ahead of rocket engineers at the time who utilized black powder as a propellant. Reports of Paulet's rocket aircraft concept first appeared in 1927 after Charles Lindbergh crossed the Atlantic Ocean in an aircraft. Paulet publicly criticized Austrian rocket pioneer Max Valier's proposal about a rocket-powered aircraft completing the journey faster using black powder, arguing that his liquid-propellant rocket aircraft from thirty years earlier would be a better option.
Paulet would go on to visit the German rocket association Verein für Raumschiffahrt (VfR) and on March 15, 1928, Valier applauded Paulet's liquid-propelled rocket design in the VfR publication Die Rakete, saying the engine had "amazing power". In May 1928, Paulet was present to observe the demonstration of a rocket car of the Opel RAK program of Fritz von Opel and Max Valier, and met with the German rocket enthusiasts. VfR members began to view black powder as a hindrance to rocket propulsion, with Valier himself believing that Paulet's engine was necessary for future rocket development. Paulet would soon be approached by Nazi Germany to help develop rocket technology, though he refused to assist and never shared the formula for his propellant. The Nazi government would then appropriate Paulet's work while a Soviet spy in the VfR, Alexander Boris Scherchevsky, possibly shared plans with the Soviet Union.
On 11 June 1928, as part of the Opel RAK program of Fritz von Opel and Max Valier, the Lippisch Ente became the first aircraft to fly under rocket power. During the following year, the Opel RAK.1 became the first purpose-built rocket plane to fly, with Fritz von Opel himself as the pilot. The Opel RAK.1 flight is also considered the world's first public flight of a manned rocket plane since it took place before a large crowd and with world media in attendance.
On 28 June 1931, another ground-breaking rocket flight was conducted by the Italian aviator and inventor Ettore Cattaneo, who created another privately built rocket plane. It flew and landed without particular problems. Following this flight, the King of Italy Victor Emmanuel III appointed Cattaneo count of Taliedo; due to his pioneering role in rocket flight, his likeness is displayed in the Space Museum of Saint Petersburg as well as in the Museum of Science and Tech of Milan.
World War II
The Heinkel He 176 was the world's first aircraft to be propelled solely by a liquid-propellant rocket engine. It performed its first powered flight on 20 June 1939 with Erich Warsitz at the controls. The He 176, while demonstrated to the Reich Air Ministry, did not attract much official support, leading Heinkel to abandon its rocket propulsion endeavours; the sole aircraft was briefly displayed at the Berlin Air Museum and was destroyed by an Allied bombing raid in 1943.
The first rocket plane ever to be mass-produced was the Messerschmitt Me 163 Komet interceptor, introduced by Germany towards the final years of the conflict as one of several efforts to develop effective rocket-powered aircraft. The Luftwaffe's first dedicated Me 163 fighter wing, Jagdgeschwader 400 (JG 400) was established in 1944, and was principally tasked with providing additional protection for the manufacturing plants producing synthetic gasoline, which were prominent targets for Allied air raids. It was planned to station further defensive units of rocket fighters around Berlin, the Ruhr and the German Bight.
A typical Me 163 tactic was to fly vertically upward through the bombers at , climb to , then dive through the formation again, firing as they went. This approach afforded the pilot two brief chances to fire a few rounds from his cannons before gliding back to his airfield. It was often difficult to supply the needed fuel for operating the rocket motors. In the final days of the Third Reich, the Me 163 was withdrawn in favor of the more successful Messerschmitt Me 262, which used jet propulsion instead.
Other German rocket-powered aircraft were pursued as well, including the Bachem Ba 349 "Natter", a vertical takeoff manned rocket interceptor aircraft that flew in prototype form. Further projects never even reached the prototype stage, such as the Zeppelin Rammer, the Fliegende Panzerfaust and the Focke-Wulf Volksjäger. Far larger than any other rocket-powered endeavor of the conflict, the Silbervogel antipodal bomber spaceplane was planned by the Germans; however, later calculations showed that the design would not have worked and would instead have been destroyed during reentry. The Me 163 Komet is the only type of rocket-powered fighter to see combat in history, and one of only two types of rocket-powered aircraft to see any combat.
Japan, which was allied to Nazi Germany, secured the design schematics of the Me 163 Komet. After considerable effort, it successfully established its own production capability, which was used to produce a limited number of its own copies, known as the Mitsubishi J8M, which performed its first powered flight on 7 July 1945. Furthermore, Japan attempted to develop its own domestically designed rocket-powered interceptor, the Mizuno Shinryu; neither the J8M nor the Shinryu ever saw combat. The Japanese also produced approximately 850 Yokosuka MXY-7 Ohka rocket-powered suicide attack aircraft during the Second World War, a number of which were deployed in the Battle of Okinawa. Postwar analysis concluded that the Ohka's impact was negligible, and that no U.S. Navy capital ships had been hit during the attacks due to the effective defensive tactics that were employed.
Other experimental aircraft included the Soviet Bereznyak-Isayev BI-1 that flew in 1942 while the Northrop XP-79 was originally planned with rocket engines but switched to jet engines for its first and only flight in 1945. A rocket-assisted P-51D Mustang was developed by North American Aviation that could attain . The engine ran on fumaric acid and aniline which was stored in two under wing drop tanks. The plane was tested in flight in April 1945. The rocket engine could run for about a minute. Similarly, the Messerschmitt Me 262 "Heimatschützer" series used a combination of rocket and jet propulsion to allow for shorter take-offs, faster climb rate, and even greater speeds.
Cold War era
During 1946, the Soviet Mikoyan-Gurevich I-270 was constructed in response to a Soviet Air Forces requirement issued during the previous year for a rocket-powered interceptor aircraft in the point-defence role. The design of the I-270 incorporated several pieces of technology that had been developed by Sergei Korolev between 1932 and 1943.
During 1947, a key milestone in aviation history was reached by the rocket-powered Bell X-1, which became the first aircraft to break the speed of sound in level flight, and would be the first of a series of NACA/NASA rocket-powered aircraft. Amongst these experimental aircraft were the North American X-15 and X-15A2 designs, which were operated for around a decade and eventually attained a maximum speed of Mach 6.7 as well as a peak altitude in excess of 100 km, setting new records in the process.
During the 1950s, the British developed several mixed power designs to cover the performance gap that existed in then-current turbojet designs. The rocket was the main engine for delivering the speed and height required for high speed interception of high level bombers and the turbojet gave increased fuel economy in other parts of flight, most notably to ensure that the aircraft was able to make a powered landing rather than risking an unpredictable gliding return. One design was the Avro 720, which was primarily propelled by an 8,000 lbf (36 kN) Armstrong Siddeley Screamer rocket engine that ran on kerosene fuel mixed with liquid oxygen as the oxidizing agent. Work on the Avro 720 was abandoned shortly after the Air Ministry's decision to terminate development of the Screamer rocket engine, allegedly due to official concerns regarding the practicality of using liquid oxygen, which boils at -183 °C (90 K) and is a fire hazard, within an operational environment.
Work reached a more advanced stage with the Avro 720's rival, the Saunders-Roe SR.53. The propulsion system of this aircraft used hydrogen peroxide as a combined fuel and oxidiser, which was viewed as less problematic than the Avro 720's liquid oxygen. On 16 May 1957, Squadron Leader John Booth DFC was at the controls of XD145 for the first test flight, following up with the maiden flight of the second prototype XD151, on 6 December 1957. During the subsequent flight test programme, these two prototypes flew 56 separate test flights, during which a maximum speed of Mach 1.33 was recorded. Furthermore, since late 1953, Saunders-Roe had worked upon a derivative of the SR.53, which was separately designated as the SR.177; the principal change was the presence of an onboard radar, which had been lacking on the SR.53 and the Avro 720 as it was not a requirement of the specification; its absence left the pilot dependent on his own vision, apart from radio-based directions supplied from ground-based radar control.
Both the SR.53 and its SR.177 cousin were relatively close to attaining production status when wider political factors bore down upon the programme. During 1957, a massive re-thinking of air defence philosophy in Britain occurred, which was embodied in the 1957 Defence White Paper. This paper called for manned combat aircraft to be replaced by missiles, and thus the prospects of an order from the RAF evaporated overnight. While both the Royal Navy and Germany remained potential customers for the SR.177, the confidence of both parties was shaken by the move. Further factors, such as the Lockheed bribery scandals, in which Lockheed bribed officials in overseas nations to order the Lockheed F-104 Starfighter, also served to undermine the sale prospects of the SR.177, costing potential customers such as Germany and Japan.
Throughout the late 1940s and 1950s, the French Air Staff also had considerable interest in rocket-powered aircraft. According to author Michel van Pelt, French Air Force officials were against a pure rocket-powered flight but favoured a mixed-propulsion approach, using a combination of rocket and turbojet engines. While the Société d'Etudes pour la Propulsion par Réaction (SEPR) set about developing France's own domestic rocket engines, the French aircraft manufacturer SNCASE was aware of the French Air Force's keenness for a capable point defence interceptor aircraft, and thus begun work on the SNCASE SE.212 Durandal. In comparison to other French mixed-power experimental aircraft, such as the competing SNCASO Trident prototype interceptor, it was a heavier aircraft, intended to fly primarily on its jet engine rather than its rocket motor. A pair of prototype aircraft were constructed; on 20 April 1956, the first performed its maiden flight, initially flying only using jet power. It was the second prototype that first made use of the rocket motor during April 1957. During flight testing, a maximum speed of was attained at an altitude of, even without using the extra power of the rocket motor; this rose to 1667 km/h at 11,800 m while the rocket was active. A total of 45 test flights were performed prior to work on the programme being terminated.
At the request of the French Air Staff, the French aircraft company SNCASO also developed its own point defence interceptor, the SNCASO Trident. It was primarily powered by a single SEPR-built rocket engine and augmented with a set of wing-tip mounted turbojet engines; operationally, both rocket and turbojet engines were to be used to perform a rapid climb and interception at high altitudes, while the jet engines alone would be used to return to base. On 2 March 1953, the first prototype Trident I conducted the type's maiden flight; flown by test pilot Jacques Guignard, the aircraft used the entire length of the runway to get airborne, being powered only by its turbojet engines. On 1 September 1953, the second Trident I prototype crashed during its first flight after struggling to gain altitude after takeoff and colliding with an electricity pylon. Despite the loss, the French Air Force was impressed by the Trident's performance and was keen to bring an improved model into service. On 21 May 1957, the first Trident II, 001, was destroyed during a test flight out of the Centre d'Essais en Vol (Flight Test Center); the accident was caused when the highly volatile rocket fuel and oxidiser, Furaline (C13H12N2O) and nitric acid (HNO3) respectively, accidentally mixed and exploded, resulting in the death of test pilot Charles Goujon. Two months later, all work on the programme was halted.
Advances in turbojet engine output, the advent of missiles, and advances in radar made a return to mixed power unnecessary.
The development of Soviet rockets and satellites was the driving force behind the development of NASA's space program. In the early 1960s, American research into the Boeing X-20 Dyna-Soar spaceplane was cancelled due to lack of purpose; later the studies contributed to the Space Shuttle, which in turn motivated the Soviet Buran. Another similar program was ISINGLASS which was to be a rocket plane launched from a Boeing B-52 Stratofortress carrier, which was intended to achieve Mach 22, but this was never funded. ISINGLASS was intended to overfly the USSR. No images of the vehicle configuration have been released.
The Lunar Landing Research Vehicle was a mixed-power vehicle: a jet engine cancelled five-sixths of the force due to gravity, and rocket power was used to simulate the handling of the Apollo lunar lander.
Various versions of the Reaction Motors XLR11 rocket engine powered the X-1 and X-15, but also the Martin Marietta X-24A, Martin Marietta X-24B, Northrop HL-10, Northrop M2-F2, Northrop M2-F3, and the Republic XF-91 Thunderceptor, either as a primary or auxiliary engine.
The Northrop HL-10, Northrop M2-F2 and Northrop M2-F3 were examples of lifting bodies, aircraft that have very little if any wing and simply obtain lift from the body of the vehicle. Another example is the backslider rocket in amateur rocketry.
Post Cold War era
The EZ-Rocket research and test airplane was first flown in 2001. After evaluating the EZ-Rocket, the Rocket Racing League developed three separate rocket racer aircraft over the following decade.
During 2003, another privately developed rocket-powered aircraft performed its first flight. SpaceShipOne functions both as a rocket-powered aircraft—with wings and aerodynamic control surfaces—as well as a spaceplane—with RCS thrusters for control in the vacuum of space. For their work, the SpaceShipOne team were awarded the Space Achievement Award.
In April 2019, the Chinese company Space Transportation carried out a test of a 3,700-kilogram technology demonstrator named Jiageng-1. The 8.7-meter-long plane has a wingspan of 2.5 meters and is part of the development of the larger, future Tianxing-I-1 vertical-takeoff, horizontal-landing reusable launch vehicle.
Planned rocket-powered aircraft
Aerial Regional-scale Environmental Survey
Skylon (spacecraft)
XCOR Lynx
Zero Emission Hyper Sonic Transport
See also
List of rocket aircraft
List of vehicle speed records
Rocket Racing League (RRL)
Zero-length launch, launching air-breathing aircraft with rockets
References
Citations
Bibliography
"Armstrong Siddeley Screamer". Flight, No. 2478, Vol 70, 27 July 1956. pp. 160–164.
Bachem, Erich. "Einige grundsätzliche Probleme des Senkrechstarts. Probleme aus der Astronautischen Grundlagenforschung" (in German). Proceedings of the Third International Congress on Astronautics. Stuttgart: Gesellschaft für Weltraumforschung, September 1952.
Bille, Matt and Erika Lishock. The First Space Race: Launching the World's First Satellites. College Station, Texas: Texas A&M University Press, 2004. .
"Cancelled Projects:The list Up-Dated". Flight, 17 August 1967, p. 262.
Caidin, Martin. Wings into Space: The History and Future of Winged Space Flight. New York: Holt, Rinehart and Winston Inc., 1964.
Dornberger, Walter R. "The Rocket-Propelled Commercial Airliner". Dyna-Soar: Hypersonic Strategic Weapons System, Research Report No 135.. Minneapolis, Minnesota: University of Minnesota, Institute of Technology, 1956.
Galland, Adolf. The First and the Last. New York: Ballantine Books, 1957.
Geiger, Clarence J. History of the X-20A Dyna-Soar. Vol. 1: AFSC Historical Publications Series 63-50-I, Document ID ASD-TR-63-50-I. Wright-Patterson AFB, Ohio: Aeronautical Systems Division Information Office, 1963.
Godwin, Robert, ed. Dyna-Soar: Hypersonic Strategic Weapons System. Burlington, ON: Apogee Books, 2003. .
Gunston, Bill. Fighters of the Fifties. Cambridge, England: Patrick Stephens Limited, 1981. .
Jackson, A. J. Avro Aircraft since 1908. London:Putnam, 1990. .
Jackson, Robert. "Combat Aircraft Prototypes since 1945", New York: Arco/Prentice Hall Press, 1986, LCCN 85-18725, .
Jones, Barry. "Saro's Mixed-Power Saga". Aeroplane Monthly, November 1994. London:IPC. ISSN 0143-7240. pp. 32–39.
Lommel, Horst. Der erste bemannte Raketenstart der Welt (2nd ed.) (in German). Stuttgart: Motorbuch Verlag, 1998. .
London, Pete. "Saunders-Roe's Rocket Fighters." Aircraft, Vol. 43, no. 7, July 2010.
Mason, Francis K. The British Fighter since 1912. Annapolis, Maryland, USA:Naval Institute Press, 1992. .
"Mixed-Power Interceptor". Flight, 24 May 1957, pp. 697–700.
Pelt, Michel van. Rocketing into the Future: The History and Technology of Rocket Planes. Springer Science & Business Media, 2012. .
Späte, Wolfgang. Top Secret Bird: Luftwaffe's Me-163 Komet. Missoula, Montana: Pictorial Histories Publishing Co., 1989. .
Winchester, Jim. "TSR.2." Concept Aircraft: Prototypes, X-Planes and Experimental Aircraft. Kent, UK: Grange Books plc., 2005. .
Wood, Derek. Project Cancelled: The Disaster of Britain's Abandoned Aircraft Projects. London, UK: Jane's, 2nd edition, 1986. .
Yenne, Bill. The Encyclopedia of US Spacecraft. London: Bison Books, 1985. .
External links
The official Erich Warsitz website (world’s first jet pilot) about the world’s first liquid-fuelled rocket aircraft, the legendary Heinkel He 176
Aircraft configurations
German inventions
Vehicles introduced in 1928 | Rocket-powered aircraft | [
"Engineering"
] | 4,301 | [
"Aircraft configurations",
"Aerospace engineering"
] |
978,650 | https://en.wikipedia.org/wiki/Triple%20product | In geometry and algebra, the triple product is a product of three 3-dimensional vectors, usually Euclidean vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product.
Scalar triple product
The scalar triple product (also called the mixed product, box product, or triple scalar product) is defined as the dot product of one of the vectors with the cross product of the other two.
Geometric interpretation
Geometrically, the scalar triple product $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$
is the (signed) volume of the parallelepiped defined by the three vectors given.
Properties
The scalar triple product is unchanged under a circular shift of its three operands (a, b, c): $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b})$
Swapping the positions of the operators without re-ordering the operands leaves the triple product unchanged. This follows from the preceding property and the commutative property of the dot product: $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}$
Swapping any two of the three operands negates the triple product. This follows from the circular-shift property and the anticommutativity of the cross product: $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = -\mathbf{a} \cdot (\mathbf{c} \times \mathbf{b}) = -\mathbf{b} \cdot (\mathbf{a} \times \mathbf{c}) = -\mathbf{c} \cdot (\mathbf{b} \times \mathbf{a})$
The scalar triple product can also be understood as the determinant of the matrix that has the three vectors either as its rows or its columns (a matrix has the same determinant as its transpose), as illustrated numerically after this list:
$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \det\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}$$
If the scalar triple product is equal to zero, then the three vectors a, b, and c are coplanar, since the parallelepiped defined by them would be flat and have no volume.
If any two vectors in the scalar triple product are equal, then its value is zero: $\mathbf{a} \cdot (\mathbf{a} \times \mathbf{b}) = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{b}) = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{a}) = 0$
Also: $\bigl(\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})\bigr)\,\mathbf{a} = (\mathbf{a} \times \mathbf{b}) \times (\mathbf{a} \times \mathbf{c})$
The simple product of two triple products (or the square of a triple product) may be expanded in terms of dot products:
$$\bigl((\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c}\bigr)\,\bigl((\mathbf{d} \times \mathbf{e}) \cdot \mathbf{f}\bigr) = \det\begin{bmatrix} \mathbf{a} \cdot \mathbf{d} & \mathbf{a} \cdot \mathbf{e} & \mathbf{a} \cdot \mathbf{f} \\ \mathbf{b} \cdot \mathbf{d} & \mathbf{b} \cdot \mathbf{e} & \mathbf{b} \cdot \mathbf{f} \\ \mathbf{c} \cdot \mathbf{d} & \mathbf{c} \cdot \mathbf{e} & \mathbf{c} \cdot \mathbf{f} \end{bmatrix}$$
This restates in vector notation that the product of the determinants of two 3×3 matrices equals the determinant of their matrix product. As a special case, the square of a triple product is a Gram determinant.
The ratio of the triple product and the product of the three vector norms is known as a polar sine:
$$\frac{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}{|\mathbf{a}|\,|\mathbf{b}|\,|\mathbf{c}|} = \operatorname{psin}(\mathbf{a}, \mathbf{b}, \mathbf{c}),$$
which ranges between −1 and 1.
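A minimal numerical sketch of the determinant and sign properties above (the vectors and the use of NumPy are illustrative assumptions, not part of the article):

```python
import numpy as np

# Illustrative check: the scalar triple product a . (b x c) equals the
# determinant of the matrix whose rows are a, b, c, and its absolute value
# is the volume of the parallelepiped they span.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
c = np.array([5.0, 6.0, 0.0])

triple = np.dot(a, np.cross(b, c))
det = np.linalg.det(np.array([a, b, c]))
assert np.isclose(triple, det)

# Swapping two operands flips the sign (the product is a pseudoscalar).
assert np.isclose(np.dot(b, np.cross(a, c)), -triple)
```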
Scalar or pseudoscalar
Although the scalar triple product gives the volume of the parallelepiped, it is the signed volume, the sign depending on the orientation of the frame or the parity of the permutation of the vectors. This means the product is negated if the orientation is reversed, for example by a parity transformation, and so is more properly described as a pseudoscalar if the orientation can change.
This also relates to the handedness of the cross product; the cross product transforms as a pseudovector under parity transformations and so is properly described as a pseudovector. The dot product of two vectors is a scalar but the dot product of a pseudovector and a vector is a pseudoscalar, so the scalar triple product (of vectors) must be pseudoscalar-valued.
If T is a proper rotation then
$$(T\mathbf{a}) \cdot \bigl((T\mathbf{b}) \times (T\mathbf{c})\bigr) = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}),$$
but if T is an improper rotation then
$$(T\mathbf{a}) \cdot \bigl((T\mathbf{b}) \times (T\mathbf{c})\bigr) = -\,\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).$$
Scalar or scalar density
Strictly speaking, a scalar does not change at all under a coordinate transformation. (For example, the factor of 2 used for doubling a vector does not change if the vector is in spherical vs. rectangular coordinates.) However, if each vector is transformed by a matrix then the triple product ends up being multiplied by the determinant of the transformation matrix, which could be quite arbitrary for a non-rotation. That is, the triple product is more properly described as a scalar density.
As an exterior product
In exterior algebra and geometric algebra the exterior product of two vectors is a bivector, while the exterior product of three vectors is a trivector. A bivector is an oriented plane element and a trivector is an oriented volume element, in the same way that a vector is an oriented line element.
Given vectors a, b and c, the product
$$\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$$
is a trivector with magnitude equal to the scalar triple product, i.e.
$$|\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}| = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|,$$
and is the Hodge dual of the scalar triple product. As the exterior product is associative, brackets are not needed, as it does not matter which of $\mathbf{a} \wedge \mathbf{b}$ or $\mathbf{b} \wedge \mathbf{c}$ is calculated first, though the order of the vectors in the product does matter. Geometrically the trivector a ∧ b ∧ c corresponds to the parallelepiped spanned by a, b, and c, with bivectors $\mathbf{a} \wedge \mathbf{b}$, $\mathbf{b} \wedge \mathbf{c}$ and $\mathbf{a} \wedge \mathbf{c}$ matching the parallelogram faces of the parallelepiped.
As a trilinear function
The triple product is identical to the volume form of the Euclidean 3-space applied to the vectors via interior product. It also can be expressed as a contraction of vectors with a rank-3 tensor equivalent to the form (or a pseudotensor equivalent to the volume pseudoform); see below.
Vector triple product
The vector triple product is defined as the cross product of one vector with the cross product of the other two. The following relationship holds:
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}.$$
This is known as triple product expansion, or Lagrange's formula, although the latter name is also used for several other formulas. Its right hand side can be remembered by using the mnemonic "ACB − ABC", provided one keeps in mind which vectors are dotted together. A proof is provided below. Some textbooks write the identity as $\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}\,(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}\,(\mathbf{a} \cdot \mathbf{b})$ such that a more familiar mnemonic "BAC − CAB" is obtained, as in “back of the cab”.
Since the cross product is anticommutative, this formula may also be written (up to permutation of the letters) as:
$$(\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = -\,\mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = -(\mathbf{c} \cdot \mathbf{b})\,\mathbf{a} + (\mathbf{c} \cdot \mathbf{a})\,\mathbf{b}$$
From Lagrange's formula it follows that the vector triple product satisfies:
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0},$$
which is the Jacobi identity for the cross product. Another useful formula follows:
$$(\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) - \mathbf{b} \times (\mathbf{a} \times \mathbf{c})$$
These formulas are very useful in simplifying vector calculations in physics. A related identity regarding gradients and useful in vector calculus is Lagrange's formula of vector cross-product identity:
$$\nabla \times (\nabla \times \mathbf{f}) = \nabla(\nabla \cdot \mathbf{f}) - (\nabla \cdot \nabla)\,\mathbf{f}$$
This can be also regarded as a special case of the more general Laplace–de Rham operator $\Delta = \mathrm{d}\delta + \delta\mathrm{d}$.
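A small numerical check of the expansion and the Jacobi identity above (the vectors and the use of NumPy are illustrative choices, not part of the article):

```python
import numpy as np

# Illustrative check of Lagrange's formula a x (b x c) = (a.c) b - (a.b) c
# and of the Jacobi identity for the cross product.
a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0, 1.0, -1.0])
c = np.array([-2.0, 4.0, 2.0])

lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
assert np.allclose(lhs, rhs)

jacobi = (np.cross(a, np.cross(b, c))
          + np.cross(b, np.cross(c, a))
          + np.cross(c, np.cross(a, b)))
assert np.allclose(jacobi, 0.0)
```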
Proof
The $x$ component of $\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$ is given by:
$$\bigl(\mathbf{a} \times (\mathbf{b} \times \mathbf{c})\bigr)_x = a_y (b_x c_y - b_y c_x) - a_z (b_z c_x - b_x c_z) = b_x (a_x c_x + a_y c_y + a_z c_z) - c_x (a_x b_x + a_y b_y + a_z b_z) = (\mathbf{a} \cdot \mathbf{c})\, b_x - (\mathbf{a} \cdot \mathbf{b})\, c_x$$
Similarly, the $y$ and $z$ components of $\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$ are given by:
$$\bigl(\mathbf{a} \times (\mathbf{b} \times \mathbf{c})\bigr)_y = (\mathbf{a} \cdot \mathbf{c})\, b_y - (\mathbf{a} \cdot \mathbf{b})\, c_y, \qquad \bigl(\mathbf{a} \times (\mathbf{b} \times \mathbf{c})\bigr)_z = (\mathbf{a} \cdot \mathbf{c})\, b_z - (\mathbf{a} \cdot \mathbf{b})\, c_z$$
By combining these three components we obtain:
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}$$
Using geometric algebra
If geometric algebra is used the cross product b × c of vectors is expressed as their exterior product b∧c, a bivector. The second cross product cannot be expressed as an exterior product, otherwise the scalar triple product would result. Instead a left contraction can be used, so the formula becomes
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = -\,\mathbf{a} \,\lrcorner\, (\mathbf{b} \wedge \mathbf{c})$$
The proof follows from the properties of the contraction. The result is the same vector as calculated using a × (b × c).
Interpretations
Tensor calculus
In tensor notation, the triple product is expressed using the Levi-Civita symbol:
$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \varepsilon_{ijk}\, a^i b^j c^k$$
and
$$\bigl(\mathbf{a} \times (\mathbf{b} \times \mathbf{c})\bigr)_i = \varepsilon_{ijk}\, a^j (\mathbf{b} \times \mathbf{c})^k = \varepsilon_{ijk}\, \varepsilon_{klm}\, a^j b^l c^m,$$
referring to the $i$-th component of the resulting vector. This can be simplified by performing a contraction on the Levi-Civita symbols,
$$\varepsilon_{ijk}\, \varepsilon_{klm} = \delta_{il}\,\delta_{jm} - \delta_{im}\,\delta_{jl},$$
where $\delta_{ij}$ is the Kronecker delta function ($\delta_{ij} = 1$ when $i = j$ and $\delta_{ij} = 0$ when $i \neq j$) and $\delta^{lm}_{ij}$ is the generalized Kronecker delta function. We can reason out this identity by recognizing that the index $k$ will be summed out leaving only $i$, $j$, $l$ and $m$. In the first term, we fix $i = l$ and thus $j = m$. Likewise, in the second term, we fix $i = m$ and thus $j = l$.
Returning to the triple cross product,
$$\bigl(\mathbf{a} \times (\mathbf{b} \times \mathbf{c})\bigr)_i = (\delta_{il}\,\delta_{jm} - \delta_{im}\,\delta_{jl})\, a^j b^l c^m = a^j b_i c_j - a^j b_j c_i = (\mathbf{a} \cdot \mathbf{c})\, b_i - (\mathbf{a} \cdot \mathbf{b})\, c_i$$
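A sketch of the same manipulation in code (the explicit construction of the Levi-Civita symbol and the use of NumPy's einsum are illustrative choices, not prescribed by the article):

```python
import numpy as np

# Build the Levi-Civita symbol, verify eps_ijk eps_klm = d_il d_jm - d_im d_jl,
# and recover a x (b x c) from the index expression. All indices are written
# "downstairs" since the Euclidean metric is the identity.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[k, j, i] = -1.0  # odd permutations

delta = np.eye(3)
lhs = np.einsum("ijk,klm->ijlm", eps, eps)
rhs = (np.einsum("il,jm->ijlm", delta, delta)
       - np.einsum("im,jl->ijlm", delta, delta))
assert np.allclose(lhs, rhs)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, -1.0, 2.0])
c = np.array([4.0, 0.5, 1.0])
triple_cross = np.einsum("ijk,klm,j,l,m->i", eps, eps, a, b, c)
assert np.allclose(triple_cross, np.cross(a, np.cross(b, c)))
```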
Vector calculus
Consider the flux integral of the vector field $\mathbf{F}$ across the parametrically-defined surface $S = \mathbf{r}(u, v)$: $\iint_S \mathbf{F} \cdot \hat{\mathbf{n}}\, \mathrm{d}S$. The unit normal vector to the surface is given by $\hat{\mathbf{n}} = \dfrac{\mathbf{r}_u \times \mathbf{r}_v}{|\mathbf{r}_u \times \mathbf{r}_v|}$, so the integrand $\mathbf{F} \cdot (\mathbf{r}_u \times \mathbf{r}_v)$ is a scalar triple product.
See also
Quadruple product
Vector algebra relations
Notes
References
External links
Khan Academy video of the proof of the triple product expansion
Articles containing proofs
Mathematical identities
Multilinear algebra
Operations on vectors
Ternary operations | Triple product | [
"Mathematics"
] | 1,562 | [
"Mathematical problems",
"Articles containing proofs",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
978,840 | https://en.wikipedia.org/wiki/Cryostat | A cryostat (from cryo meaning cold and stat meaning stable) is a device used to maintain low cryogenic temperatures of samples or devices mounted within the cryostat. Low temperatures may be maintained within a cryostat by using various refrigeration methods, most commonly using cryogenic fluid bath such as liquid helium. Hence it is usually assembled into a vessel, similar in construction to a vacuum flask or Dewar. Cryostats have numerous applications within science, engineering, and medicine.
Types
Closed-cycle cryostats
Closed-cycle cryostats consist of a chamber through which cold helium vapour is pumped. An external mechanical refrigerator extracts the warmer helium exhaust vapour, which is cooled and recycled. Closed-cycle cryostats consume a relatively large amount of electrical power, but need not be refilled with helium and can run continuously for an indefinite period. Objects may be cooled by attaching them to a metallic cold plate inside a vacuum chamber which is in thermal contact with the helium vapour chamber.
Continuous-flow cryostats
Continuous-flow cryostats are cooled by liquid cryogens (typically liquid helium or nitrogen) from a storage dewar. As the cryogen boils within the cryostat, it is continuously replenished by a steady flow from the storage dewar. Temperature control of the sample within the cryostat is typically performed by controlling the flow rate of cryogen into the cryostat together with a heating wire attached to a PID temperature control loop. The length of time over which cooling may be maintained is dictated by the volume of cryogens available.
Owing to the scarcity of liquid helium, some laboratories have facilities to capture and recover helium as it escapes from the cryostat, although these facilities are also costly to operate.
Bath cryostats
Bath cryostats are similar in construction to vacuum flasks filled with liquid helium. A cold plate is placed in thermal contact with the liquid helium bath. The liquid helium may be replenished as it boils away, at intervals between a few hours and several months, depending on the volume and construction of the cryostat. The boil-off rate is minimised by shielding the bath with either cold helium vapour or a vacuum shield with walls constructed from superinsulating material. The helium vapour which boils away from the bath very effectively cools thermal shields around the outside of the bath. In older designs there may be an additional liquid nitrogen bath, or several concentric layers of shielding with gradually increasing temperatures. However, the invention of superinsulating materials has made this approach obsolete.
Multistage cryostats
In order to achieve temperatures lower than liquid helium at atmospheric pressure, additional cooler stages may be added to the cryostat. Temperatures down to 1 K can be reached by attaching the cold plate to a 1-K pot, which is a container of the He-4 isotope that may be pumped to low vapor pressure via a vacuum pump. Temperatures just below 0.300 K may be achieved using He-3, the rare isotope of helium, as the working fluid in a helium pot.
Temperatures down to 1 mK can be reached by employing a dilution refrigerator or a dry dilution refrigerator, typically in addition to the main stage and the 1 K pot. Temperatures below that can be reached using magnetic refrigeration.
Applications
Magnetic resonance imaging and research magnet types
Cryostats used in MRI machines are designed to hold a cryogen, typically helium, in a liquid state with minimal evaporation (boil-off). The liquid helium bath is designed to keep the superconducting magnet's bobbin of superconductive wire in its superconductive state. In this state, the wire has no electrical resistance and very large currents are maintained with low power input. To maintain superconductivity, the bobbin must be kept below its transition temperature by being immersed in the liquid helium. If, for any reason, the wire becomes resistive, i.e. loses superconductivity, a condition known as a "quench", the liquid helium evaporates, instantly raising pressure within the vessel. A burst disk, usually made of carbon, is placed within the chimney or vent pipe so that during a pressure excursion, the gaseous helium can be safely vented out of the MRI suite. Modern MRI cryostats use a mechanical refrigerator (cryocooler) to re-condense the helium gas and return it to the bath, to maintain cryogenic conditions and to conserve helium.
Typically cryostats are manufactured with two vessels, one inside the other. The outer vessel is evacuated with the vacuum acting as a thermal insulator. The inner vessel contains the cryogen and is supported within the outer vessel by structures made from low-conductivity materials. An intermediate shield between the outer and inner vessels intercepts the heat radiated from the outer vessel. This heat is removed by a cryocooler. Older helium cryostats used a liquid nitrogen vessel as this radiation shield and had the liquid helium in an inner, third, vessel. Nowadays few units using multiple cryogens are made with the trend being towards 'cryogen-free' cryostats in which all heat loads are removed by cryocoolers.
Biological microtome type
Cryostats are used in medicine to cut histological slides. They are usually used in a process called frozen section histology (see Frozen section procedure). The cryostat is essentially an ultrafine "deli-slicer", called a microtome, placed in a freezer. The cryostat is usually a stationary upright freezer, with an external wheel for rotating the microtome. The temperature can be varied, depending on the tissue being cut, usually from −20 °C to −30 °C. The freezer is powered either by electricity or by a refrigerant like liquid nitrogen. Small portable cryostats are available and can run off generators or vehicle inverters. To minimize unnecessary warming, all necessary mechanical movements of the microtome can be achieved by hand via a wheel mounted outside the chamber. Newer microtomes have electric push-button advancement of the tissue. The precision of the cutting is in micrometres. Tissue is sectioned as thin as 1 micrometre. Usual histology slides are mounted with a thickness of about 7 micrometres. Specimens that are soft at room temperature are mounted on a cutting medium (often made of egg white) on a metal "chuck", and frozen to cutting temperature (for example at −20 °C). Once frozen, the specimen on the chuck is mounted on the microtome. The crank is rotated and the specimen advances toward the cutting blade. Once the specimen is cut to a satisfactory quality, it is mounted on a warm (room temperature) clear glass slide, where it will instantaneously melt and adhere. The glass slide and specimen are dried with a dryer or air dried, and then stained. The entire process from mounting to reading the slide takes from 10 to 20 minutes, allowing rapid diagnosis in the operating room for the surgical excision of cancer. The cryostat can be used to cut histology and tissue slides (e.g., for enzyme localization) outside of medicine, but the quality of the section is poor compared to standard fixed-section wax-mounted histology.
See also
Lambda point refrigerator
References
Containers
Cryogenics
Vacuum flasks | Cryostat | [
"Physics"
] | 1,518 | [
"Applied and interdisciplinary physics",
"Vacuum flasks",
"Vacuum",
"Cryogenics",
"Matter"
] |
978,913 | https://en.wikipedia.org/wiki/Brittleness | A material is brittle if, when subjected to stress, it fractures with little elastic deformation and without significant plastic deformation. Brittle materials absorb relatively little energy prior to fracture, even those of high strength. Breaking is often accompanied by a sharp snapping sound.
When used in materials science, it is generally applied to materials that fail when there is little or no plastic deformation before failure. One proof is to match the broken halves, which should fit exactly since no plastic deformation has occurred.
Brittleness in different materials
Polymers
Mechanical characteristics of polymers can be sensitive to temperature changes near room temperatures. For example, poly(methyl methacrylate) is extremely brittle at a temperature of 4 °C, but experiences increased ductility with increased temperature.
Amorphous polymers are polymers that can behave differently at different temperatures. They may behave like a glass at low temperatures (the glassy region), a rubbery solid at intermediate temperatures (the leathery or glass transition region), and a viscous liquid at higher temperatures (the rubbery flow and viscous flow region). This behavior is known as viscoelastic behavior. In the glassy region, the amorphous polymer will be rigid and brittle. With increasing temperature, the polymer will become less brittle.
Metals
Some metals show brittle characteristics due to their slip systems. The more slip systems a metal has, the less brittle it is, because plastic deformation can occur along many of these slip systems. Conversely, with fewer slip systems, less plastic deformation can occur, and the metal will be more brittle. For example, HCP (hexagonal close packed) metals have few active slip systems, and are typically brittle.
Ceramics
Ceramics are generally brittle due to the difficulty of dislocation motion, or slip. There are few slip systems in crystalline ceramics that a dislocation is able to move along, which makes deformation difficult and makes the ceramic more brittle.
Ceramic materials generally exhibit ionic bonding. Because of the ions’ electric charge and their repulsion of like-charged ions, slip is further restricted.
Changing brittle materials
Materials can be changed to become more brittle or less brittle.
Toughening
When a material has reached the limit of its strength, it usually has the option of either deformation or fracture. A naturally malleable metal can be made stronger by impeding the mechanisms of plastic deformation (reducing grain size, precipitation hardening, work hardening, etc.), but if this is taken to an extreme, fracture becomes the more likely outcome, and the material can become brittle. Improving material toughness is, therefore, a balancing act.
Naturally brittle materials, such as glass, are not difficult to toughen effectively. Most such techniques involve one of two mechanisms: to deflect or absorb the tip of a propagating crack or to create carefully controlled residual stresses so that cracks from certain predictable sources will be forced closed. The first principle is used in laminated glass where two sheets of glass are separated by an interlayer of polyvinyl butyral. The polyvinyl butyral, as a viscoelastic polymer, absorbs the growing crack. The second method is used in toughened glass and pre-stressed concrete. A demonstration of glass toughening is provided by Prince Rupert's Drop. Brittle polymers can be toughened by using metal particles to initiate crazes when a sample is stressed, a good example being high-impact polystyrene or HIPS. The least brittle structural ceramics are silicon carbide (mainly by virtue of its high strength) and transformation-toughened zirconia.
A different philosophy is used in composite materials, where brittle glass fibers, for example, are embedded in a ductile matrix such as polyester resin. When strained, cracks are formed at the glass–matrix interface, but so many are formed that much energy is absorbed and the material is thereby toughened. The same principle is used in creating metal matrix composites.
Effect of pressure
Generally, the brittle strength of a material can be increased by pressure. This happens as an example in the brittle–ductile transition zone at an approximate depth of in the Earth's crust, at which rock becomes less likely to fracture, and more likely to deform ductilely (see rheid).
Crack growth
Supersonic fracture is crack motion faster than the speed of sound in a brittle material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart (Markus J. Buehler and Huajian Gao) and IBM Almaden Research Center in San Jose, California (Farid F. Abraham).
See also
Charpy impact test
Ductility
Forensic engineering
Fractography
Izod impact strength test
Strengthening mechanisms of materials
Toughness
References
Continuum mechanics
Materials science
Physical properties | Brittleness | [
"Physics",
"Materials_science",
"Engineering"
] | 959 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"nan",
"Physical properties"
] |
979,158 | https://en.wikipedia.org/wiki/Enrico%20Betti | Enrico Betti Glaoui (21 October 1823 – 11 August 1892) was an Italian mathematician, now remembered mostly for his 1871 paper on topology that led to the later naming after him of the Betti numbers. He worked also on the theory of equations, giving early expositions of Galois theory. He also discovered Betti's theorem, a result in the theory of elasticity.
Biography
Betti was born in Pistoia, Tuscany. He graduated from the University of Pisa in 1846 under (1792–1857). In Pisa, he was also a student of Ottaviano-Fabrizio Mossotti and Carlo Matteucci. After a time teaching, he held an appointment there from 1857. In 1858 he toured Europe with Francesco Brioschi and Felice Casorati, meeting Bernhard Riemann. Later he worked in the area of theoretical physics opened up by Riemann's work. He was also closely involved in academic politics, and the politics of the new Italian state.
Works
E. Betti, Sopra gli spazi di un numero qualunque di dimensioni, Ann. Mat. Pura Appl. 2/4 (1871), 140–158. (Betti's most well known paper).
Opere matematiche di Enrico Betti, pubblicate per cura della R. Accademia de' lincei (2vols.) (U. Hoepli, Milano, 1903–1913)
See also
Betti cohomology
Betti group
Betti numbers
Notes
Further reading
External links
An Italian short biography of Enrico Betti in Edizione Nazionale Mathematica Italiana online.
1823 births
1892 deaths
People from Pistoia
19th-century Italian mathematicians
Topologists
University of Pisa alumni
Academic staff of the University of Pisa
Members of the Göttingen Academy of Sciences and Humanities
Scientists from the Grand Duchy of Tuscany | Enrico Betti | [
"Mathematics"
] | 391 | [
"Topologists",
"Topology"
] |
979,199 | https://en.wikipedia.org/wiki/Beta%20movement | The term beta movement is used for the optical illusion of apparent motion in which the very short projection of one figure and a subsequent very short projection of a more or less similar figure in a different location are experienced as one figure moving.
The illusion of motion caused by animation and film is sometimes believed to rely on beta movement, as an alternative to the older explanation known as persistence of vision. However, the human visual system cannot distinguish between the short-range apparent motion of film and real motion, while the long-range apparent motion of beta movement is recognised as different and processed in a different way.
History
Observations of apparent motion through quick succession of images go back to the 19th century. In 1833, Joseph Plateau introduced what became known as the phenakistiscope, an early animation device based on a stroboscopic effect. The principle of this "philosophical toy" would inspire the development of cinematography at the end of the century. Most authors who have since described the illusion of seeing motion in the fast succession of stationary images, maintained that the effect is due to persistence of vision, either in the form of afterimages on the retina or with a mental process filling in the intervals between the images.
In 1875, Sigmund Exner showed that, under the right conditions, people will see two quick, spatially separated but stationary electrical sparks as a single light moving from place to place, while quicker flashes were interpreted as motion between two stationary lights. Exner argued that the impression of the moving light was a perception (from a mental process) and the motion between the stationary lights as pure sense.
In 1912, Max Wertheimer wrote an influential article that would lead to the foundation of Gestalt psychology. In the experiments discussed, he asked test subjects what they saw when viewing successive tachistoscope projections of two similar shapes at two alternating locations on a screen. The results differed depending on the frequency of the flashes of the tachistoscope. At low frequencies, successive appearances of similar figures at different spots were perceived. At medium frequencies, it seemed like one figure moved from one position to the following position, regarded as "optimale Bewegung" (optimal motion) by Wertheimer. No shape was seen in between the two locations. At higher speeds, when test subjects believed they were seeing both of the fast-blinking figures more or less simultaneously, a moving objectless phenomenon was seen between and around the projected figures. Wertheimer used the Greek letter φ (phi) to designate illusions of motion and thought of the high-frequency objectless illusion as a "pure phi phenomenon", which he supposed was a more direct sensory experience of motion. Wertheimer's work became famous due to his demonstrations of the phi phenomenon, while the optimal motion illusion was regarded as the phenomenon well known from movies.
In 1913, Friedrich Kenkel defined different types of the motion illusions found in the experiments of Wertheimer and subsequent experiments by Kurt Koffka (who had been one of Wertheimer's test subjects). Kenkel, a co-worker of Koffka, gave the optimal illusion of motion (with the appearance of one figure moving from one place to the next) the designation "β-Bewegung" (beta movement).
Confusion about phi phenomenon and beta movement
Wertheimer's pure phi phenomenon and beta movement are often confused in explanations of film and animation, but they are quite different perceptually and neither really explains the short-range apparent motion seen in film.
In beta movement, two stimuli, a and b, appear in succession, but are perceived as the motion of a single object, a, into position b. In phi movement, the two stimuli a and b appear in succession, but are perceived as the motion of a vague shadowy something passing over a and b. There are many factors that determine whether one will experience beta movement or the phi phenomenon in a particular circumstance. They include the luminance of the stimuli in contrast to the background, the size of the stimuli, how far apart they are, how long each one is displayed, and precisely how much time passes between them (or the extent to which they overlap in time).
See also
Ternus illusion
Phi phenomenon
Stroboscopic effect
Apparent motion
Persistence of vision
References
1833 introductions
Optical illusions | Beta movement | [
"Physics"
] | 877 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
979,251 | https://en.wikipedia.org/wiki/Orbital%20maneuver | In spaceflight, an orbital maneuver (otherwise known as a burn) is the use of propulsion systems to change the orbit of a spacecraft.
For spacecraft far from Earth, an orbital maneuver is called a deep-space maneuver (DSM).
When a spacecraft is not conducting a maneuver, especially in a transfer orbit, it is said to be coasting.
General
Rocket equation
The Tsiolkovsky rocket equation, or ideal rocket equation, can be useful for analysis of maneuvers by vehicles using rocket propulsion. A rocket applies acceleration to itself (a thrust) by expelling part of its mass at high speed. The rocket itself moves due to the conservation of momentum.
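A minimal sketch of the ideal rocket equation in code (the function name, the specific-impulse value, and the masses are illustrative assumptions, not from the text):

```python
import math

def delta_v(isp_s: float, m0_kg: float, mf_kg: float, g0: float = 9.80665) -> float:
    """Ideal (Tsiolkovsky) delta-v for exhaust velocity v_e = Isp * g0."""
    return isp_s * g0 * math.log(m0_kg / mf_kg)

# Example: a 10 t stage burning down to 4 t with a 320 s engine.
print(f"{delta_v(320.0, 10_000.0, 4_000.0):.0f} m/s")
```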
Delta-v
The applied change in velocity of each maneuver is referred to as delta-v ($\Delta v$).
The delta-v values for all the expected maneuvers of a mission are estimated and summarized in a delta-v budget. With a good approximation of the delta-v budget, designers can estimate the propellant required for the planned maneuvers.
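A toy illustration of a delta-v budget as a simple sum (the maneuver names and values below are invented for illustration, not mission data):

```python
# Hypothetical delta-v budget; the total feeds the propellant estimate via the
# rocket equation sketched earlier.
budget_m_per_s = {
    "trans-lunar injection": 3100.0,
    "lunar orbit insertion": 900.0,
    "trajectory corrections and margin": 150.0,
}
total_dv = sum(budget_m_per_s.values())
print(f"total delta-v: {total_dv:.0f} m/s")
```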
Propulsion
Impulsive maneuvers
An impulsive maneuver is the mathematical model of a maneuver as an instantaneous change in the spacecraft's velocity (magnitude and/or direction) as illustrated in figure 1. It is the limit case of a burn to generate a particular amount of delta-v, as the burn time tends to zero.
In the physical world no truly instantaneous change in velocity is possible as this would require an "infinite force" applied during an "infinitely short time" but as a mathematical model it in most cases describes the effect of a maneuver on the orbit very well.
The offset of the velocity vector at the end of a real burn from the velocity vector at the same time resulting from the theoretical impulsive maneuver is caused only by the difference in gravitational force along the two paths (red and black in figure 1), which in general is small.
In the planning phase of space missions designers will first approximate their intended orbital changes using impulsive maneuvers that greatly reduces the complexity of finding the correct orbital transitions.
Low thrust propulsion
Applying a low thrust over a longer period of time is referred to as a non-impulsive maneuver. 'Non-impulsive' refers to the momentum changing slowly over a long time, as in electrically powered spacecraft propulsion, rather than by a short impulse.
Another term is finite burn, where the word "finite" is used to mean "non-zero", or practically, again: over a longer period.
For a few space missions, such as those including a space rendezvous, high fidelity models of the trajectories are required to meet the mission goals. Calculating a "finite" burn requires a detailed model of the spacecraft and its thrusters. The most important details include: mass, center of mass, moment of inertia, thruster positions, thrust vectors, thrust curves, specific impulse, thrust centroid offsets, and fuel consumption.
Assists
Oberth effect
In astronautics, the Oberth effect occurs when the use of a rocket engine while travelling at high speed generates much more useful energy than the same burn at low speed. The Oberth effect occurs because the propellant has more usable energy (due to its kinetic energy on top of its chemical potential energy), and it turns out that the vehicle is able to employ this kinetic energy to generate more mechanical power. It is named after Hermann Oberth, the Austro-Hungarian-born German physicist and a founder of modern rocketry, who apparently first described the effect.
The Oberth effect is used in a powered flyby or Oberth maneuver where the application of an impulse, typically from the use of a rocket engine, close to a gravitational body (where the gravity potential is low, and the speed is high) can give much more change in kinetic energy and final speed (i.e. higher specific energy) than the same impulse applied further from the body for the same initial orbit.
Since the Oberth maneuver happens in a very limited time (while still at low altitude), to generate a high impulse the engine necessarily needs to achieve high thrust (impulse is by definition the time multiplied by thrust). Thus the Oberth effect is far less useful for low-thrust engines, such as ion thrusters.
Historically, a lack of understanding of this effect led investigators to conclude that interplanetary travel would require completely impractical amounts of propellant, as, without exploiting it, enormous amounts of energy would be needed.
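Back-of-the-envelope arithmetic illustrating the effect (the burn size and the two speeds are arbitrary example values):

```python
# The same 1 km/s burn adds far more kinetic energy per kilogram when applied
# at high speed (deep in a gravity well) than at low speed.
dv = 1_000.0                      # m/s, size of the burn
for v in (3_000.0, 11_000.0):     # slow vs fast point on the orbit
    gain = 0.5 * ((v + dv) ** 2 - v ** 2)   # J per kg of spacecraft
    print(f"burn at {v:6.0f} m/s -> {gain / 1e6:.1f} MJ/kg gained")
```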
Gravity assist
In astrodynamics a gravity assist maneuver, gravitational slingshot or swing-by is the use of the relative movement and gravity of a planet or other celestial body to alter the trajectory of a spacecraft, typically in order to save propellant, time, and expense. Gravity assistance can be used to accelerate, decelerate and/or re-direct the path of a spacecraft.
The "assist" is provided by the motion (orbital angular momentum) of the gravitating body as it pulls on the spacecraft. The technique was first proposed as a mid-course maneuver in 1961, and used by interplanetary probes from Mariner 10 onwards, including the two Voyager probes' notable fly-bys of Jupiter and Saturn.
Transfer orbits
Orbit insertion maneuvers leave a spacecraft in a destination orbit. In contrast, orbit injection maneuvers occur when a spacecraft enters a transfer orbit, e.g. trans-lunar injection (TLI), trans-Mars injection (TMI) and trans-Earth injection (TEI). These are generally larger than small trajectory correction maneuvers. Insertion, injection and sometimes initiation are used to describe entry into a descent orbit, e.g. the Powered Descent Initiation maneuver used for Apollo lunar landings.
Hohmann transfer
In orbital mechanics, the Hohmann transfer orbit is an elliptical orbit used to transfer between two circular orbits of different altitudes, in the same plane.
The orbital maneuver to perform the Hohmann transfer uses two engine impulses which move a spacecraft onto and off the transfer orbit. This maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Accessibility of Celestial Bodies). Hohmann was influenced in part by the German science fiction author Kurd Laßwitz and his 1897 book Two Planets.
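The size of the two impulses follows from the vis-viva equation. The sketch below is a minimal calculation under assumed inputs: a transfer around Earth from roughly a 300 km circular orbit up to geostationary radius, with only the two radii and the gravitational parameter as inputs.

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) of the two-impulse Hohmann transfer between
    coplanar circular orbits of radii r1 and r2 (r1 < r2)."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)  # burn onto the ellipse
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))  # circularize at r2
    return dv1 + dv2

# Assumed radii: ~300 km altitude (6,678 km) up to geostationary radius (42,164 km).
print(round(hohmann_delta_v(6.678e6, 4.2164e7)), "m/s")  # ~3,900 m/s
```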
Bi-elliptic transfer
In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver.
The bi-elliptic transfer consists of two half elliptic orbits. From the initial orbit, a delta-v is applied boosting the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point, a second delta-v is applied sending the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third delta-v is performed, injecting the spacecraft into the desired orbit.
While it requires one more engine burn than a Hohmann transfer and generally involves a greater travel time, a bi-elliptic transfer may require a lower total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.
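The three burns can be written with the vis-viva equation, which makes the comparison with a single Hohmann transfer easy to check numerically. In the sketch below, the initial radius, the final-to-initial ratio of 15, and the intermediate apoapsis at 40 times the initial radius are all assumed values chosen only to show the crossover; for these numbers the bi-elliptic route comes out slightly cheaper.

```python
import math

MU = 3.986004418e14  # m^3/s^2 (Earth)

def vis_viva(r, a, mu=MU):
    """Orbital speed at radius r on an orbit of semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

def hohmann(r1, r2):
    a_t = (r1 + r2) / 2.0
    return (vis_viva(r1, a_t) - vis_viva(r1, r1)) + (vis_viva(r2, r2) - vis_viva(r2, a_t))

def bi_elliptic(r1, r2, rb):
    a1, a2 = (r1 + rb) / 2.0, (rb + r2) / 2.0
    dv1 = vis_viva(r1, a1) - vis_viva(r1, r1)   # stretch the apoapsis out to rb
    dv2 = vis_viva(rb, a2) - vis_viva(rb, a1)   # raise the periapsis to r2
    dv3 = vis_viva(r2, a2) - vis_viva(r2, r2)   # retrograde burn to circularize at r2
    return dv1 + dv2 + dv3

r1 = 7.0e6                      # assumed initial radius
r2, rb = 15.0 * r1, 40.0 * r1   # ratio of 15, intermediate apoapsis at 40 r1
print(round(hohmann(r1, r2)), round(bi_elliptic(r1, r2, rb)))  # ~4046 vs ~4014 m/s
```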
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.
Low energy transfer
A low energy transfer, or low energy trajectory, is a route in space which allows spacecraft to change orbits using very little fuel. These routes work in the Earth-Moon system and also in other systems, such as traveling between the satellites of Jupiter. The drawback of such trajectories is that they take much longer to complete than higher energy (more fuel) transfers such as Hohmann transfer orbits.
Low energy transfers are also known as weak stability boundary trajectories or ballistic capture trajectories.
Low energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little expenditure of delta-v.
Orbital inclination change
Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta-v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect; the line of orbital nodes is defined by the intersection of the two orbital planes).
In general, inclination changes can require a great deal of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life.
Maximum efficiency of an inclination change is achieved at apoapsis (or apogee), where orbital velocity is the lowest. In some cases, it may require less total delta-v to raise the spacecraft into a higher orbit, change the orbit plane at the higher apogee, and then lower the spacecraft to its original altitude.
Constant-thrust trajectory
Constant-thrust and constant-acceleration trajectories involve the spacecraft firing its engine in a prolonged constant burn. In the limiting case where the vehicle acceleration is high compared to the local gravitational acceleration, the spacecraft points straight toward the target (accounting for target motion), and remains accelerating constantly under high thrust until it reaches its target. In this high-thrust case, the trajectory approaches a straight line. If it is required that the spacecraft rendezvous with the target, rather than performing a flyby, then the spacecraft must flip its orientation halfway through the journey, and decelerate the rest of the way.
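For the idealized straight-line case just described, with gravity ignored and the acceleration held constant, the trip time of an accelerate-then-flip-then-decelerate profile follows from simple kinematics, t = 2·sqrt(d/a). The distance and acceleration in the sketch below are assumed example values only.

```python
import math

def flip_and_burn_time(distance_m, accel_m_s2):
    """Travel time with constant acceleration to the midpoint, then constant
    deceleration to arrive at rest (gravity and the changing mass are ignored)."""
    return 2.0 * math.sqrt(distance_m / accel_m_s2)

# Assumed example: a 0.5 AU crossing at a sustained 0.01 g.
AU = 1.496e11  # m
t = flip_and_burn_time(0.5 * AU, 0.0981)
print(round(t / 86400, 1), "days")  # ~20 days
```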
In a constant-thrust trajectory, the vehicle's acceleration increases during the thrusting period, since fuel consumption reduces the vehicle's mass. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust must decrease during the trajectory.
This trajectory requires that the spacecraft maintain a high acceleration for long durations. For interplanetary transfers, days, weeks or months of constant thrusting may be required. As a result, there are no currently available spacecraft propulsion systems capable of using this trajectory. It has been suggested that some forms of nuclear (fission or fusion based) or antimatter powered rockets would be capable of this trajectory.
More practically, this type of maneuver is used in low thrust maneuvers, for example with ion engines, Hall-effect thrusters, and others. These types of engines have very high specific impulse (fuel efficiency) but currently are only available with fairly low absolute thrust.
Rendezvous and docking
Orbit phasing
In astrodynamics, orbit phasing is the adjustment of the time-position of a spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly.
Space rendezvous and docking
A space rendezvous is a sequence of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous is commonly followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them.
See also
Clohessy-Wiltshire equations for co-orbit analysis
Collision avoidance (spacecraft)
Flyby (spaceflight)
Spacecraft propulsion
Orbital spaceflight
References
External links
Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse
Astrodynamics | Orbital maneuver | [
"Engineering"
] | 2,379 | [
"Astrodynamics",
"Aerospace engineering"
] |
979,306 | https://en.wikipedia.org/wiki/Orbital%20station-keeping | In astrodynamics, orbital station-keeping is keeping a spacecraft at a fixed distance from another spacecraft or celestial body. It requires a series of orbital maneuvers made with thruster burns to keep the active craft in the same orbit as its target. For many low Earth orbit satellites, the effects of non-Keplerian forces, i.e. the deviations of the gravitational force of the Earth from that of a homogeneous sphere, gravitational forces from Sun/Moon, solar radiation pressure and air drag, must be counteracted.
For spacecraft in a halo orbit around a Lagrange point, station-keeping is even more fundamental, as such an orbit is unstable; without an active control with thruster burns, the smallest deviation in position or velocity would result in the spacecraft leaving orbit completely.
Perturbations
The deviation of Earth's gravity field from that of a homogeneous sphere and gravitational forces from the Sun and Moon will in general perturb the orbital plane. For a Sun-synchronous orbit, the precession of the orbital plane caused by the oblateness of the Earth is a desirable feature that is part of mission design but the inclination change caused by the gravitational forces of the Sun and Moon is undesirable. For geostationary spacecraft, the inclination change caused by the gravitational forces of the Sun and Moon must be counteracted by a rather large expense of fuel, as the inclination should be kept sufficiently small for the spacecraft to be tracked by non-steerable antennae.
For spacecraft in a low orbit, the effects of atmospheric drag must often be compensated for, in many cases to avoid re-entry; for missions requiring the orbit to be accurately synchronized with the Earth's rotation, this is also necessary to prevent a shortening of the orbital period.
Solar radiation pressure will in general perturb the eccentricity (i.e. the eccentricity vector); see Orbital perturbation analysis (spacecraft). For some missions, this must be actively counter-acted with maneuvers. For geostationary spacecraft, the eccentricity must be kept sufficiently small for a spacecraft to be tracked with a non-steerable antenna. Also for Earth observation spacecraft for which a very repetitive orbit with a fixed ground track is desirable, the eccentricity vector should be kept as fixed as possible. A large part of this compensation can be done by using a frozen orbit design, but often thrusters are needed for fine control maneuvers.
Low Earth orbit
For spacecraft in a very low orbit, the atmospheric drag is sufficiently strong to cause a re-entry before the intended end of mission if orbit raising maneuvers are not executed from time to time.
An example of this is the International Space Station (ISS), which has an operational altitude above Earth's surface of between 400 and 430 km (250-270 mi). Due to atmospheric drag the space station is constantly losing orbital energy. In order to compensate for this loss, which would eventually lead to a re-entry of the station, it has to be reboosted to a higher orbit from time to time. The chosen orbital altitude is a trade-off between the average thrust needed to counter-act the air drag and the impulse needed to send payloads and people to the station.
GOCE, which orbited at 255 km (later reduced to 235 km), used ion thrusters to provide up to 20 mN of thrust to compensate for the drag on its frontal area of about 1 m².
Earth observation spacecraft
For Earth observation spacecraft, typically operated at an altitude above the Earth's surface of about 700–800 km, the air drag is very faint and re-entry due to air drag is not a concern. But if the orbital period should stay synchronous with the Earth's rotation to maintain a fixed ground track, the faint air drag at this high altitude must also be counteracted by orbit-raising maneuvers in the form of thruster burns tangential to the orbit. These maneuvers will be very small, typically on the order of a few mm/s of delta-v. If a frozen orbit design is used, these very small orbit-raising maneuvers are sufficient to also control the eccentricity vector.
To maintain a fixed ground track it is also necessary to make out-of-plane maneuvers to compensate for the inclination change caused by Sun/Moon gravitation. These are executed as thruster burns orthogonal to the orbital plane. For Sun-synchronous spacecraft having a constant geometry relative to the Sun, the inclination change due to the solar gravitation is particularly large; a delta-v in the order of 1–2 m/s per year can be needed to keep the inclination constant.
Geostationary orbit
For geostationary spacecraft, thruster burns orthogonal to the orbital plane must be executed to compensate for the effect of the lunar/solar gravitation, which perturbs the orbit pole by typically 0.85 degrees per year. The delta-v needed to compensate for this perturbation, keeping the inclination to the equatorial plane small, amounts to on the order of 45 m/s per year. This part of GEO station-keeping is called North-South control.
The East-West control is the control of the orbital period and the eccentricity vector, performed by making thruster burns tangential to the orbit. These burns are designed to keep the orbital period perfectly synchronous with the Earth's rotation and to keep the eccentricity sufficiently small. Perturbation of the orbital period results from the imperfect rotational symmetry of the Earth relative to its North/South axis, sometimes called the ellipticity of the Earth's equator. The eccentricity (i.e. the eccentricity vector) is perturbed by the solar radiation pressure. The fuel needed for this East-West control is much less than what is needed for the North-South control.
To extend the lifetime of geostationary spacecraft with little fuel left, operators sometimes discontinue the North-South control and continue only with the East-West control. As seen from an observer on the rotating Earth, the spacecraft will then move North-South with a period of 24 hours. When this North-South movement gets too large, a steerable antenna is needed to track the spacecraft. An example of this is Artemis.
To save weight, it is crucial for GEO satellites to have the most fuel-efficient propulsion system. Almost all modern satellites therefore employ a high-specific-impulse system such as plasma or ion thrusters.
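The impact of specific impulse on station-keeping propellant can be illustrated with the rocket equation. In the sketch below, the 2,000 kg satellite mass, the 50 m/s of combined station-keeping delta-v per year, the 15-year lifetime, and both specific-impulse values are assumed round numbers for illustration only.

```python
import math

def stationkeeping_propellant(initial_mass_kg, dv_per_year_m_s, years, isp_s, g0=9.80665):
    """Propellant consumed over the mission life for the given annual station-keeping delta-v."""
    dv_total = dv_per_year_m_s * years
    return initial_mass_kg * (1.0 - math.exp(-dv_total / (isp_s * g0)))

print(round(stationkeeping_propellant(2000.0, 50.0, 15, isp_s=300.0)))   # chemical thruster, ~450 kg
print(round(stationkeeping_propellant(2000.0, 50.0, 15, isp_s=1800.0)))  # ion/plasma thruster, ~83 kg
```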
Lagrange points
Orbits of spacecraft are also possible around Lagrange points—also referred to as libration points—five equilibrium points that exist in relation to two larger solar system bodies. For example, there are five of these points in the Sun-Earth system, five in the Earth-Moon system, and so on. Spacecraft may orbit around these points with a minimum of propellant required for station-keeping purposes. Two orbits that have been used for such purposes include halo and Lissajous orbits.
One important Lagrange point is Earth-Sun L1, and three heliophysics missions have been orbiting L1 since approximately 2000. Station-keeping propellant use can be quite low, facilitating missions that can potentially last decades should other spacecraft systems remain operational. The three spacecraft, the Advanced Composition Explorer (ACE), the Solar Heliospheric Observatory (SOHO), and the Global Geoscience WIND satellite, each have annual station-keeping propellant requirements of approximately 1 m/s or less.
Earth-Sun L2, approximately 1.5 million kilometers from Earth in the anti-Sun direction, is another important Lagrange point, and the ESA Herschel space observatory operated there in a Lissajous orbit during 2009–2013, at which time it ran out of coolant for the space telescope. Small station-keeping orbital maneuvers were executed approximately monthly to maintain the spacecraft in the station-keeping orbit.
The James Webb Space Telescope will use propellant to maintain its halo orbit around the Earth-Sun L2, which provides an upper limit to its designed lifetime: it is being designed to carry enough for ten years. However, the precision of trajectory following launch by an Ariane 5 is credited with potentially doubling the lifetime of the telescope by leaving more hydrazine propellant on-board than expected.
The CAPSTONE orbiter and the planned Lunar Gateway is stationed along a 9:2 synodically resonant Near Rectilinear Halo Orbit (NRHO) around the Earth-Moon L2 Lagrange point.
See also
Delta-v budget
Orbital perturbation analysis
Reboost
Teleoperator Retrieval System (robotic device for attaching to another spacecraft and boosting or changing its orbit)
References
External links
Station-keeping at the Encyclopedia of Astrobiology, Astronomy, and Spaceflight
XIPS Xenon Ion Propulsion Systems
Jules Verne boosts ISS orbit Jules Verne boosts ISS orbit (report from the European Space Agency)
Orbital maneuvers
Astrodynamics
Earth orbits | Orbital station-keeping | [
"Engineering"
] | 1,805 | [
"Astrodynamics",
"Aerospace engineering"
] |
979,374 | https://en.wikipedia.org/wiki/Orbital%20inclination%20change | Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta-v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect, the line of orbital nodes is defined by the intersection of the two orbital planes).
In general, inclination changes can take a very large amount of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life. Planetary flybys are the most efficient way to achieve large inclination changes, but they are only effective for interplanetary missions.
Efficiency
The simplest way to perform a plane change is to perform a burn around one of the two crossing points of the initial and final planes. The delta-v required is the vector change in velocity between the two planes at that point.
However, maximum efficiency of an inclination change is achieved at apoapsis (or apogee), where orbital velocity is the lowest. In some cases, it can require less total delta-v to raise the satellite into a higher orbit, change the orbit plane at the higher apogee, and then lower the satellite to its original altitude.
For the most efficient example mentioned above, targeting an inclination at apoapsis also changes the argument of periapsis. However, targeting in this manner limits the mission designer to changing the plane only along the line of apsides.
For Hohmann transfer orbits, the initial orbit and the final orbit are 180 degrees apart. Because the transfer orbital plane has to include the central body, such as the Sun, and the initial and final nodes, this can require two 90 degree plane changes to reach and leave the transfer plane. In such cases it is often more efficient to use a broken plane maneuver where an additional burn is done so that plane change only occurs at the intersection of the initial and final orbital planes, rather than at the ends.
Inclination entangled with other orbital elements
An important subtlety of performing an inclination change is that Keplerian orbital inclination is defined by the angle between ecliptic North and the vector normal to the orbit plane (i.e. the angular momentum vector). This means that inclination is always positive and is entangled with other orbital elements, primarily the argument of periapsis, which is in turn connected to the longitude of the ascending node. This can result in two very different orbits having precisely the same inclination.
Calculation
In a pure inclination change, only the inclination of the orbit is changed while all other orbital characteristics (radius, shape, etc.) remain the same as before. The delta-v ($\Delta v_i$) required for an inclination change ($\Delta i$) can be calculated as follows:

$$\Delta v_i = \frac{2 \sin\left(\tfrac{\Delta i}{2}\right) \sqrt{1-e^2} \cos(\omega + f)\, n a}{1 + e \cos f}$$

where:
$e$ is the orbital eccentricity
$\omega$ is the argument of periapsis
$f$ is the true anomaly
$n$ is the mean motion
$a$ is the semi-major axis
For more complicated maneuvers which may involve a combination of change in inclination and orbital radius, the delta-v is the vector difference between the velocity vectors of the initial orbit and the desired orbit at the transfer point. These types of combined maneuvers are commonplace, as it is more efficient to perform multiple orbital maneuvers at the same time if these maneuvers have to be done at the same location.
According to the law of cosines, the minimum delta-v ($\Delta v$) required for any such combined maneuver can be calculated with the following equation:

$$\Delta v = \sqrt{V_1^2 + V_2^2 - 2 V_1 V_2 \cos \Delta i}$$

Here $V_1$ and $V_2$ are the initial and target velocities.
Circular orbit inclination change
When both orbits are circular (i.e. $e = 0$) and have the same radius, the delta-v ($\Delta v_i$) required for an inclination change ($\Delta i$) can be calculated using:

$$\Delta v_i = 2 V \sin\left(\tfrac{\Delta i}{2}\right)$$

where $V$ is the orbital velocity and has the same units as $\Delta v_i$.
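As a numerical check of the circular-orbit formula, the sketch below evaluates the cost of a plane change in low Earth orbit; the 400 km altitude and the 10-degree change are assumed example values.

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2

def plane_change_dv(radius_m, delta_i_deg, mu=MU_EARTH):
    """Delta-v for a pure inclination change between circular orbits of equal radius."""
    v = math.sqrt(mu / radius_m)  # circular orbital speed
    return 2.0 * v * math.sin(math.radians(delta_i_deg) / 2.0)

# A 10-degree plane change at roughly 400 km altitude (r = 6,778 km).
print(round(plane_change_dv(6.778e6, 10.0)), "m/s")  # ~1,337 m/s
```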
Other ways to change inclination
Some other ways to change inclination that do not require burning propellant (or help reduce the amount of propellant required) include
aerodynamic lift (for bodies within an atmosphere, such as the Earth)
solar sails
Transits of other bodies such as the Moon can also be done.
None of these methods will change the delta-v required; they are simply alternate means of achieving the same end result and, ideally, will reduce propellant usage.
See also
Orbital inclination
Orbital maneuver
References
Astrodynamics
Orbital maneuvers | Orbital inclination change | [
"Engineering"
] | 907 | [
"Astrodynamics",
"Aerospace engineering"
] |
4,095,359 | https://en.wikipedia.org/wiki/Logic%20of%20information | The logic of information, or the logical theory of information, considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce. In this line of work, the concept of information serves to integrate the aspects of signs and expressions that are separately covered, on the one hand, by the concepts of denotation and extension, and on the other hand, by the concepts of connotation and comprehension.
Peirce began to develop these ideas in his lectures "On the Logic of Science" at Harvard University (1865) and the Lowell Institute (1866).
See also
Charles Sanders Peirce bibliography
Information theory
Inquiry
Philosophy of information
Pragmatic maxim
Pragmatic theory of information
Pragmatic theory of truth
Pragmaticism
Pragmatism
Scientific method
Semeiotic
Semiosis
Semiotics
Semiotic information theory
Sign relation
Sign relational complex
Triadic relation
References
Luciano Floridi, The Logic of Information, presentation, discussion, Télé-université (Université du Québec), 11 May 2005, Montréal, Canada.
Luciano Floridi, The logic of being informed, Logique et Analyse. 2006, 49.196, 433–460.
External links
Peirce, C.S. (1867), "Upon Logical Comprehension and Extension", Eprint
Information theory
Semiotics
Logic
Charles Sanders Peirce | Logic of information | [
"Mathematics",
"Technology",
"Engineering"
] | 274 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
4,095,925 | https://en.wikipedia.org/wiki/Mass-to-light%20ratio | In astrophysics and physical cosmology the mass-to-light ratio, normally designated with the Greek letter upsilon, , is the quotient between the total mass of a spatial volume (typically on the scales of a galaxy or a cluster) and its luminosity.
These ratios are calculated relative to the Sun as a baseline: the solar ratio Υ☉ = M☉/L☉, the solar mass divided by the solar luminosity, is a constant of about 5133 kg/W. The mass-to-light ratios of galaxies and clusters are all much greater than Υ☉, due in part to the fact that most of the matter in these objects does not reside within stars, and observations suggest that a large fraction is present in the form of dark matter.
Luminosities are obtained from photometric observations, correcting the observed brightness of the object for the distance dimming and extinction effects. In general, unless a complete spectrum of the radiation emitted by the object is obtained, a model must be extrapolated through either power law or blackbody fits. The luminosity thus obtained is known as the bolometric luminosity.
Masses are often calculated from the dynamics of the virialized system or from gravitational lensing. Typical mass-to-light ratios for galaxies range from 2 to 10 Υ☉, while on the largest scales the mass-to-light ratio of the observable universe is approximately 100 Υ☉, in concordance with the current best-fit cosmological model.
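A minimal sketch of the calculation in solar units follows; the nominal solar constants are standard values, while the galaxy's mass and luminosity are made-up illustrative figures.

```python
M_SUN = 1.989e30   # kg, nominal solar mass
L_SUN = 3.828e26   # W, nominal solar luminosity

def mass_to_light(mass_kg, luminosity_w):
    """Mass-to-light ratio expressed in solar units (multiples of M_sun / L_sun)."""
    return (mass_kg / luminosity_w) / (M_SUN / L_SUN)

# Hypothetical galaxy: 1e12 solar masses of total (mostly dark) matter,
# shining with 2e10 solar luminosities.
print(round(mass_to_light(1e12 * M_SUN, 2e10 * L_SUN)))  # 50 times the solar ratio
```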
References
External links
Concepts in astrophysics
Physical cosmology
Ratios | Mass-to-light ratio | [
"Physics",
"Astronomy",
"Mathematics"
] | 306 | [
"Astronomical sub-disciplines",
"Concepts in astrophysics",
"Theoretical physics",
"Astrophysics",
"Ratios",
"Arithmetic",
"Physical cosmology"
] |
4,099,290 | https://en.wikipedia.org/wiki/Transgene | A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene.
The construction of a transgene requires the assembly of a few main parts. The transgene must contain a promoter, which is a regulatory sequence that will determine where and when the transgene is active, an exon, a protein coding sequence (usually derived from the cDNA for the protein of interest), and a stop sequence. These are typically combined in a bacterial plasmid and the coding sequences are typically chosen from transgenes with previously known functions.
Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape has been documented for GMO crops since 2001 with persistence and invasiveness. Transgenetic organisms pose ethical questions and may cause biosafety problems.
History
The idea of shaping an organism to fit a specific need is not a new science. However, until the late 1900s farmers and scientists could breed new strains of a plant or organism only from closely related species because the DNA had to be compatible for offspring to be able to reproduce.
In the 1970s and 1980s, scientists passed this hurdle by inventing procedures for combining the DNA of two vastly different species with genetic engineering. The organisms produced by these procedures were termed transgenic. Transgenesis is the same as gene therapy in the sense that they both transform cells for a specific purpose. However, they are completely different in their purposes, as gene therapy aims to cure a defect in cells, while transgenesis seeks to produce a genetically modified organism by incorporating the specific transgene into every cell and changing the genome. Transgenesis will therefore change the germ cells, not only the somatic cells, in order to ensure that the transgenes are passed down to the offspring when the organisms reproduce. Transgenes alter the genome by blocking the function of a host gene; they can either replace the host gene with one that codes for a different protein, or introduce an additional gene.
The first transgenic organism was created in 1974 when Annie Chang and Stanley Cohen expressed Staphylococcus aureus genes in Escherichia coli. In 1978, yeast cells were the first eukaryotic organisms to undergo gene transfer. Mouse cells were first transformed in 1979, followed by mouse embryos in 1980. Most of the very first transmutations were performed by microinjection of DNA directly into cells. Scientists were able to develop other methods to perform the transformations, such as incorporating transgenes into retroviruses and then infecting cells; using electroinfusion, which takes advantage of an electric current to pass foreign DNA through the cell wall; biolistics, which is the procedure of shooting DNA bullets into cells; and also delivering DNA into the newly fertilized egg.
The first transgenic animals were only intended for genetic research to study the specific function of a gene, and by 2003, thousands of genes had been studied.
Use in plants
A variety of transgenic plants have been designed for agriculture to produce genetically modified crops, such as corn, soybean, rapeseed oil, cotton, rice and more. These GMO crops have been planted on some 170 million hectares globally.
Golden rice
One example of a transgenic plant species is golden rice. In 1997, five million children developed xerophthalmia, a medical condition caused by vitamin A deficiency, in Southeast Asia alone. Of those children, a quarter million went blind. To combat this, scientists used biolistics to insert the daffodil phytoene synthase gene into Asian indigenous rice cultivars. The daffodil insertion increased the production of β-carotene. The product was a transgenic rice species rich in vitamin A, called golden rice. Little is known about the impact of golden rice on xerophthalmia because anti-GMO campaigns have prevented the full commercial release of golden rice into agricultural systems in need.
Transgene escape
The escape of genetically-engineered plant genes via hybridization with wild relatives was first discussed and examined in Mexico and Europe in the mid-1990s. There is agreement that escape of transgenes is inevitable, even "some proof that it is happening". Up until 2008 there were few documented cases.
Corn
Corn sampled in 2000 from the Sierra Juarez, Oaxaca, Mexico contained a transgenic 35S promoter, while a large sample taken by a different method from the same region in 2003 and 2004 did not. A sample from another region from 2002 also did not, but directed samples taken in 2004 did, suggesting transgene persistence or re-introduction. A 2009 study found recombinant proteins in 3.1% and 1.8% of samples, most commonly in southeast Mexico. Seed and grain import from the United States could explain the frequency and distribution of transgenes in west-central Mexico, but not in the southeast. Also, 5.0% of corn seed lots in Mexican corn stocks expressed recombinant proteins despite the moratorium on GM crops.
Cotton
In 2011, transgenic cotton was found in Mexico among wild cotton, after 15 years of GMO cotton cultivation.
Rapeseed (canola)
Transgenic rapeseed Brassica napus, hybridized with a native Japanese species, Brassica rapa, was found in Japan in 2011 after having been identified in 2006 in Québec, Canada. The hybrids were persistent over a six-year study period, without herbicide selection pressure and despite hybridization with the wild form. This was the first report of the introgression (the stable incorporation of genes from one gene pool into another) of an herbicide-resistance transgene from Brassica napus into the wild-form gene pool.
Creeping bentgrass
Transgenic creeping bentgrass, engineered to be glyphosate-tolerant as "one of the first wind-pollinated, perennial, and highly outcrossing transgenic crops", was planted in 2003 as part of a large (about 160 ha) field trial in central Oregon near Madras, Oregon. In 2004, its pollen was found to have reached wild growing bentgrass populations up to 14 kilometres away. Cross-pollinating Agrostis gigantea was even found at a distance of 21 kilometres. The grower, Scotts Company could not remove all genetically engineered plants, and in 2007, the U.S. Department of Agriculture fined Scotts $500,000 for noncompliance with regulations.
Risk assessment
The long-term monitoring and controlling of a particular transgene has been shown not to be feasible. The European Food Safety Authority published a guidance for risk assessment in 2010.
Use in mice
Genetically modified mice are the most common animal model for transgenic research. Transgenic mice are currently being used to study a variety of diseases including cancer, obesity, heart disease, arthritis, anxiety, and Parkinson's disease. The two most common types of genetically modified mice are knockout mice and oncomice. Knockout mice are a type of mouse model that uses transgenic insertion to disrupt an existing gene's expression. In order to create knockout mice, a transgene with the desired sequence is inserted into an isolated mouse blastocyst using electroporation. Then, homologous recombination occurs naturally within some cells, replacing the gene of interest with the designed transgene. Through this process, researchers were able to demonstrate that a transgene can be integrated into the genome of an animal, serve a specific function within the cell, and be passed down to future generations.
Oncomice are another genetically modified mouse species created by inserting transgenes that increase the animal's vulnerability to cancer. Cancer researchers utilize oncomice to study the profiles of different cancers in order to apply this knowledge to human studies.
Use in Drosophila
Multiple studies have been conducted concerning transgenesis in Drosophila melanogaster, the fruit fly. This organism has been a helpful genetic model for over 100 years, due to its well-understood developmental pattern. The transfer of transgenes into the Drosophila genome has been performed using various techniques, including P element, Cre-loxP, and ΦC31 insertion. The most practiced method used thus far to insert transgenes into the Drosophila genome utilizes P elements. The transposable P elements, also known as transposons, are segments of bacterial DNA that are translocated into the genome, without the presence of a complementary sequence in the host's genome. P elements are administered in pairs of two, which flank the DNA insertion region of interest. Additionally, P elements often consist of two plasmid components, one known as the P element transposase and the other, the P transposon backbone. The transposase plasmid portion drives the transposition of the P transposon backbone, containing the transgene of interest and often a marker, between the two terminal sites of the transposon. Success of this insertion results in the nonreversible addition of the transgene of interest into the genome. While this method has been proven effective, the insertion sites of the P elements are often uncontrollable, resulting in an unfavorable, random insertion of the transgene into the Drosophila genome.
To improve the location and precision of the transgenic process, an enzyme known as Cre has been introduced. Cre has proven to be a key element in a process known as recombinase-mediated cassette exchange (RMCE). While it has shown to have a lower efficiency of transgenic transformation than the P element transposases, Cre greatly lessens the labor-intensive abundance of balancing random P insertions. Cre aids in the targeted transgenesis of the DNA gene segment of interest, as it supports the mapping of the transgene insertion sites, known as loxP sites. These sites, unlike P elements, can be specifically inserted to flank a chromosomal segment of interest, aiding in targeted transgenesis. The Cre transposase is important in the catalytic cleavage of the base pairs present at the carefully positioned loxP sites, permitting more specific insertions of the transgenic donor plasmid of interest.
To overcome the limitations and low yields that transposon-mediated and Cre-loxP transformation methods produce, the bacteriophage ΦC31 has recently been utilized. Recent breakthrough studies involve the microinjection of the bacteriophage ΦC31 integrase, which shows improved transgene insertion of large DNA fragments that are unable to be transposed by P elements alone. This method involves the recombination between an attachment (attP) site in the phage and an attachment site in the bacterial host genome (attB). Compared to usual P element transgene insertion methods, ΦC31 integrates the entire transgene vector, including bacterial sequences and antibiotic resistance genes. Unfortunately, the presence of these additional insertions has been found to affect the level and reproducibility of transgene expression.
Use in livestock and aquaculture
One agricultural application is to selectively breed animals for particular traits: Transgenic cattle with an increased muscle phenotype has been produced by overexpressing a short hairpin RNA with homology to the myostatin mRNA using RNA interference.
Transgenes are being used to produce milk with high levels of proteins or silk from the milk of goats. Another agricultural application is to selectively breed animals, which are resistant to diseases or animals for biopharmaceutical production.
Future potential
The application of transgenes is a rapidly growing area of molecular biology. As of 2005 it was predicted that in the next two decades, 300,000 lines of transgenic mice will be generated. Researchers have identified many applications for transgenes, particularly in the medical field. Scientists are focusing on the use of transgenes to study the function of the human genome in order to better understand disease, adapting animal organs for transplantation into humans, and the production of pharmaceutical products such as insulin, growth hormone, and blood anti-clotting factors from the milk of transgenic cows.
As of 2004 there were five thousand known genetic diseases, and the potential to treat these diseases using transgenic animals is, perhaps, one of the most promising applications of transgenes. There is a potential to use human gene therapy to replace a mutated gene with an unmutated copy of a transgene in order to treat the genetic disorder. This can be done through the use of Cre-Lox or knockout. Moreover, genetic disorders are being studied through the use of transgenic mice, pigs, rabbits, and rats. Transgenic rabbits have been created to study inherited cardiac arrhythmias, as the rabbit heart resembles the human heart much more closely than the mouse heart does. More recently, scientists have also begun using transgenic goats to study genetic disorders related to fertility.
Transgenes may be used for xenotransplantation from pig organs. Through the study of xeno-organ rejection, it was found that an acute rejection of the transplanted organ occurs upon the organ's contact with blood from the recipient due to the recognition of foreign antibodies on endothelial cells of the transplanted organ. Scientists have identified the antigen in pigs that causes this reaction, and therefore are able to transplant the organ without immediate rejection by removal of the antigen. However, the antigen begins to be expressed later on, and rejection occurs. Therefore, further research is being conducted.
Transgenic microorganisms capable of producing catalytic proteins or enzymes are also used to increase the rate of industrial reactions.
Ethical controversy
Transgene use in humans is currently fraught with issues. Transformation of genes into human cells has not been perfected yet. The most famous example of this involved certain patients developing T-cell leukemia after being treated for X-linked severe combined immunodeficiency (X-SCID). This was attributed to the close proximity of the inserted gene to the LMO2 promoter, which controls the transcription of the LMO2 proto-oncogene.
See also
Hybrid
Fusion protein
Gene pool
Gene flow
Introgression
Nucleic acid hybridization
Mouse models of breast cancer metastasis
References
Further reading
Genetic engineering
Gene delivery | Transgene | [
"Chemistry",
"Engineering",
"Biology"
] | 3,154 | [
"Genetics techniques",
"Biological engineering",
"Genetic engineering",
"Molecular biology techniques",
"Molecular biology",
"Gene delivery"
] |
4,100,584 | https://en.wikipedia.org/wiki/QuickWin | QuickWin was a library from Microsoft that made it possible to compile command line MS-DOS programs as Windows 3.1 applications, displaying their output in a window.
Since the release of Windows NT, Microsoft has included support for console applications in the Windows operating system itself via the Windows Console, eliminating the need for QuickWin. But Intel Visual Fortran still uses that library.
Borland's equivalent in Borland C++ 5 was called EasyWin.
There is a program called QuickWin on CodeProject, which does a similar thing.
See also
Command-line interface
References
Computer libraries | QuickWin | [
"Technology"
] | 122 | [
"IT infrastructure",
"Computer libraries"
] |
4,101,208 | https://en.wikipedia.org/wiki/Takt%20time | Takt time, or simply takt, is a manufacturing term to describe the required product assembly duration that is needed to match the demand. Often confused with cycle time, takt time is a tool used to design work and it measures the average time interval between the start of production of one unit and the start of production of the next unit when items are produced sequentially. For calculations, it is the time to produce parts divided by the number of parts demanded in that time interval. The takt time is based on customer demand; if a process or a production line are unable to produce at takt time, either demand leveling, additional resources, or process re-engineering is needed to ensure on-time delivery.
For example, if the customer demand is 10 units per week, then, given a 40-hour workweek and steady flow through the production line, the average duration between production starts should be 4 hours, ideally. This interval is further reduced to account for things like machine downtime and scheduled employee breaks.
Etymology
Takt time is a borrowing of the Japanese word takuto taimu, which in turn was borrowed from the German word Taktzeit, meaning 'cycle time'. The word was likely introduced to Japan by German engineers in the 1930s.
The word originates from the Latin word "tactus" meaning "touch, sense of touch, feeling". Some earlier meanings include: (16th century) "beat triggered by regular contact, clock beat", then in music "beat indicating the rhythm" and (18th century) "regular unit of note values".
History
Takt time has played an important role in production systems since before the industrial revolution, from 16th-century shipbuilding in Venice and mass production of the Model T by Henry Ford to the synchronization of airframe movement in the German aviation industry, among many other examples. Cooperation between the German aviation industry and Mitsubishi brought takt to Japan, where Toyota incorporated it into the Toyota Production System (TPS).
James P. Womack and Daniel T. Jones in The Machine That Changed the World (1990) and Lean Thinking (1996) introduced the world to the concept of "lean". Through this, Takt was connected to lean systems. In the Toyota Production System (TPS), takt time is a central element of the just-in-time pillar (JIT) of this production system.
Definition
Assuming a product is made one unit at a time at a constant rate during the net available work time, the takt time is the amount of time that must elapse between two consecutive unit completions in order to meet the demand.
Takt time can first be determined with the formula:

T = Ta / D

Where
T = Takt time (or takt), e.g. [work time between two consecutive units]
Ta = Net time available to work during the period, e.g. [work time per period]
D = Demand (customer demand) during the period, e.g. [units required per period]
Net available time is the amount of time available for work to be done. This excludes break times and any expected stoppage time (for example scheduled maintenance, team briefings, etc.).
Example: If there are a total of 8 hours (or 480 minutes) in a shift (gross time), less 30 minutes lunch, 30 minutes for breaks (2 × 15 mins), 10 minutes for a team briefing and 10 minutes for basic maintenance checks, then the net available time to work = 480 - 30 - 30 - 10 - 10 = 400 minutes.
If customer demand were 400 units a day and one shift was being run, then the line would be required to output at a minimum rate of one part per minute in order to be able to keep up with customer demand.
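A minimal sketch of the calculation, reproducing the shift example above; the shift structure and the demand of 400 units per day are the figures assumed in the text.

```python
def takt_time(net_available_minutes, demand_units):
    """Average time allowed per unit (in minutes) to exactly meet demand."""
    return net_available_minutes / demand_units

# Shift from the example: 480 min gross, minus lunch, breaks, briefing and checks.
net_available = 480 - 30 - 30 - 10 - 10   # 400 minutes
print(takt_time(net_available, 400), "minute(s) per unit")  # 1.0
```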
Takt time may be adjusted according to requirements within a company. For example, if one department delivers parts to several manufacturing lines, it often makes sense to use similar takt times on all lines to smooth outflow from the preceding station. Customer demand can still be met by adjusting daily working time, reducing down times on machines, and so on.
Implementation
Takt time is common in production lines that move a product along a line of stations that each performs a set of predefined tasks.
Manufacturing: casting of parts, drilling holes, or preparing a workplace for another task
Control tasks: testing of parts or adjusting machinery
Administration: answering standard inquiries or call center operation
Construction Management: scheduling process steps within a phase of the project
Takt in construction
With the adoption of lean thinking in the construction industry, takt time has found its way into the project-based production systems of the industry. Starting with construction methods that have highly repetitive products like bridge construction, tunnel construction, and repetitive buildings like hotels and residential high-rises, implementation of takt is increasing.
According to Koskela (1992), an ideal production system has continuous flow and creates value for the customer while transforming raw materials into products. Construction projects use critical path method (CPM) or program evaluation and review technique (PERT) for planning and scheduling. These methods do not generate flow in the production and tend to be vulnerable to variation in the system. Due to common cost and schedule overruns, industry professionals and academia have started to regard CPM and PERT as outdated methods that often fail to anticipate uncertainties and allocate resources accurately and optimally in a dynamic construction environment. This has led to increasing developments and implementation of takt.
Space scheduling
Takt, as used in takt planning or takt-time planning (TTP) for construction, is considered one of the several ways of planning and scheduling construction projects based on their utilization of space rather than just time, as done traditionally in the critical path method. Also, to visualize and create flow of work on a construction site, utilization of space becomes essential. Some other space scheduling methods include:
Linear scheduling method (LSM) and vertical production method (VPM) which are used to schedule horizontal and vertical repetitive projects respectively,
Line-of-balance (LOB) method used for any type of repetitive projects.
Location-based management system (LBMS) uses flowlines with the production rates of the crews, as they move through locations with an objective of optimizing work continuity.
Comparison with manufacturing
In manufacturing, the product being built keeps moving on the assembly line while the workstations are stationary. In construction, by contrast, the product, i.e. the building or infrastructure facility being constructed, is stationary, and the workers move from one location to another.
Takt planning needs an accurate definition of work at each workstation, which in construction is done through defining spaces, called "zones". Due to the non-repetitive distribution of work in construction, achieving work completion within the defined takt for each zone, becomes difficult. Capacity buffer is used to deal with this variability in the system.
The rationale behind defining these zones and setting the takt is not standardized and varies with the style of the planner. The work density method (WDM) is one of the methods used to assist in this process. Work density is expressed as a unit of time per unit of area. For a certain work area, work density describes how much time a trade will require to do its work in that area (zone), based on the factors listed below (a small worked sketch follows the list):
the product's design, i.e., what is in the construction project drawings and specifications
the scope of the trade's work,
the specific task in their schedule (depending on work already in place and work that will follow later in the same or another process),
the means and methods the trade will use (e.g., when prefabricating off-site, the work density on-site is expected to decrease),
while accounting for crew capabilities and size.
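As a rough illustration of how work density feeds into takt planning, the sketch below checks whether an assumed crew can finish an assumed zone within one takt. All of the figures (zone area, work density, crew size and takt) are made-up example values, not data from any project.

```python
def zone_duration_days(zone_area_m2, work_density_person_days_per_m2, crew_size):
    """Days a trade needs in a zone, given its work density and crew size."""
    person_days = zone_area_m2 * work_density_person_days_per_m2
    return person_days / crew_size

# Hypothetical drywall zone: 400 m2 at 0.05 person-days per m2, with a crew of 4.
duration = zone_duration_days(400, 0.05, 4)
print(duration, "days; fits a 5-day takt:", duration <= 5)  # 5.0 days; fits: True
```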
Benefits of takt time
Once a takt system is implemented there are a number of benefits:
The product moves along a line, so bottlenecks (stations that need more time than planned) are easily identified when the product does not move on in time.
Correspondingly, stations that don't operate reliably (suffer a frequent breakdown, etc.) are easily identified.
The takt leaves only a certain amount of time to perform the actual value added work. Therefore, there is a strong motivation to get rid of all non-value-adding tasks (like machine set-up, gathering of tools, transporting products, etc.)
Workers and machines perform sets of similar tasks, so they don't have to adapt to new processes every day, increasing their productivity.
There is no place in the takt system for removal of a product from the assembly line at any point before completion, so opportunities for shrink and damage in transit are minimized.
Problems of takt time
Once a takt system is implemented there are a number of problems:
When customer demand rises so much that takt time has to come down, quite a few tasks have to be either reorganized to take even less time to fit into the shorter takt time, or they have to be split up between two stations (which means another station has to be squeezed into the line and workers have to adapt to the new setup)
When one station in the line breaks down for whatever reason the whole line comes to a grinding halt, unless there are buffer capacities for preceding stations to get rid of their products and following stations to feed from. A built-in buffer of three to five percent downtime allows needed adjustments or recovery from failures.
Short takt time can put considerable stress on the "moving parts" of a production system or subsystem. In automated systems/subsystems, increased mechanical stress increases the likelihood of a breakdown, and in non-automated systems/subsystems, personnel face both increased physical stress (which increases the risk of repetitive motion (also "stress" or "strain") injury), intensified emotional stress, and lowered motivation, sometimes to the point of increased absenteeism.
Tasks have to be leveled to make sure tasks don't bulk in front of certain stations due to peaks in workload. This decreases the flexibility of the system as a whole.
The concept of takt time doesn't account for human factors such as an operator needing an unexpected bathroom break or a brief rest period between units (especially for processes involving significant physical labor). In practice, this means that the production processes must be realistically capable of operation above peak takt and demand must be leveled in order to avoid wasted line capacity.
See also
Turnaround time
Lean manufacturing
Toyota Production System
Muri
Lean construction
Factory Physics, a book on manufacturing management
Clock-face scheduling, sometimes referred to as Taktplan
References
External links
Lean Manufacturing site about Takt time
Six Sigma site about Takt time
On Line business processes simulator
Takt Time - a vision for Lean Manufacturing
Understanding Takt Time and Cycle Time
Further reading
Ohno, Taiichi, Toyota Production System: Beyond Large-Scale Production, Productivity Press (1988).
Baudin, Michel, Lean Assembly: The Nuts and Bolts of Making Assembly Operations Flow, Productivity Press (2002).
Ortiz, Chris A., Kaizen Assembly: Designing, Constructing, and Managing a Lean Assembly Line, CRC Press.
Production and manufacturing
Lean manufacturing | Takt time | [
"Engineering"
] | 2,281 | [
"Lean manufacturing"
] |