id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
1,581,752 | https://en.wikipedia.org/wiki/Protein%20design | Protein design is the rational design of new protein molecules to engineer novel activity, behavior, or purpose, and to advance basic understanding of protein function. Proteins can be designed from scratch (de novo design) or by making calculated variants of a known protein structure and its sequence (termed protein redesign). Rational protein design approaches predict protein sequences that will fold to specific structures. These predicted sequences can then be validated experimentally through methods such as peptide synthesis, site-directed mutagenesis, or artificial gene synthesis.
Rational protein design dates back to the mid-1970s. In recent decades, however, there have been numerous examples of successful rational design of water-soluble and even transmembrane peptides and proteins, thanks in part to a better understanding of the different factors contributing to protein structural stability and to the development of better computational methods.
Overview and history
The goal in rational protein design is to predict amino acid sequences that will fold to a specific protein structure. Although the number of possible protein sequences is vast, growing exponentially with the size of the protein chain, only a subset of them will fold reliably and quickly to one native state. Protein design involves identifying novel sequences within this subset. The native state of a protein is the conformational free energy minimum for the chain. Thus, protein design is the search for sequences that have the chosen structure as a free energy minimum. In a sense, it is the reverse of protein structure prediction. In design, a tertiary structure is specified, and a sequence that will fold to it is identified. Hence, it is also termed inverse folding. Protein design is then an optimization problem: using some scoring criteria, an optimized sequence that will fold to the desired structure is chosen.
When the first proteins were rationally designed during the 1970s and 1980s, the sequence for these was optimized manually based on analyses of other known proteins, the sequence composition, amino acid charges, and the geometry of the desired structure. The first designed proteins are attributed to Bernd Gutte, who designed a reduced version of a known catalyst, bovine ribonuclease, and tertiary structures consisting of beta-sheets and alpha-helices, including a binder of DDT. Urry and colleagues later designed elastin-like fibrous peptides based on rules on sequence composition. Richardson and coworkers designed a 79-residue protein with no sequence homology to a known protein. In the 1990s, the advent of powerful computers, libraries of amino acid conformations, and force fields developed mainly for molecular dynamics simulations enabled the development of structure-based computational protein design tools. Following the development of these computational tools, great success has been achieved over the last 30 years in protein design. The first protein successfully designed completely de novo was done by Stephen Mayo and coworkers in 1997, and, shortly after, in 1999 Peter S. Kim and coworkers designed dimers, trimers, and tetramers of unnatural right-handed coiled coils. In 2003, David Baker's laboratory designed a full protein to a fold never seen before in nature. Later, in 2008, Baker's group computationally designed enzymes for two different reactions. In 2010, one of the most powerful broadly neutralizing antibodies was isolated from patient serum using a computationally designed protein probe. Due to these and other successes (e.g., see examples below), protein design has become one of the most important tools available for protein engineering. There is great hope that the design of new proteins, small and large, will have uses in biomedicine and bioengineering.
Underlying models of protein structure and function
Protein design programs use computer models of the molecular forces that act on proteins in vivo. To make the problem tractable, these forces are simplified by protein design models. Although protein design programs vary greatly, they all have to address four main modeling questions: what is the target structure of the design, what flexibility is allowed on the target structure, which sequences are included in the search, and which force field will be used to score sequences and structures.
Target structure
Protein function is heavily dependent on protein structure, and rational protein design uses this relationship to design function by designing proteins that have a target structure or fold. Thus, by definition, in rational protein design the target structure or ensemble of structures must be known beforehand. This contrasts with other forms of protein engineering, such as directed evolution, where a variety of methods are used to find proteins that achieve a specific function, and with protein structure prediction where the sequence is known, but the structure is unknown.
Most often, the target structure is based on a known structure of another protein. However, novel folds not seen in nature have been made increasingly possible. Peter S. Kim and coworkers designed trimers and tetramers of unnatural coiled coils, which had not been seen before in nature. The protein Top7, developed in David Baker's lab, was designed completely using protein design algorithms, to a completely novel fold. More recently, Baker and coworkers developed a series of principles to design ideal globular-protein structures based on protein folding funnels that bridge between secondary structure prediction and tertiary structures. These principles, which build on both protein structure prediction and protein design, were used to design five different novel protein topologies.
Sequence space
In rational protein design, proteins can be redesigned from the sequence and structure of a known protein, or completely from scratch in de novo protein design. In protein redesign, most of the residues in the sequence are maintained as their wild-type amino-acid while a few are allowed to mutate. In de novo design, the entire sequence is designed anew, based on no prior sequence.
Both de novo designs and protein redesigns can establish rules on the sequence space: the specific amino acids that are allowed at each mutable residue position. For example, the composition of the surface of the RSC3 probe used to select HIV broadly neutralizing antibodies was restricted based on evolutionary data and charge balancing. Many of the earliest attempts at protein design were heavily based on empirical rules about the sequence space. Moreover, the design of fibrous proteins usually follows strict rules on the sequence space: collagen-based designed proteins, for example, are often composed of Gly-Pro-X repeating patterns. The advent of computational techniques allows proteins to be designed with no human intervention in sequence selection.
Structural flexibility
In protein design, the target structure (or structures) of the protein are known. However, a rational protein design approach must model some flexibility on the target structure in order to increase the number of sequences that can be designed for that structure and to minimize the chance of a sequence folding to a different structure. For example, in a protein redesign of one small amino acid (such as alanine) in the tightly packed core of a protein, very few mutants would be predicted by a rational design approach to fold to the target structure, if the surrounding side-chains are not allowed to be repacked.
Thus, an essential parameter of any design process is the amount of flexibility allowed for both the side-chains and the backbone. In the simplest models, the protein backbone is kept rigid while some of the protein side-chains are allowed to change conformations. However, side-chains can have many degrees of freedom in their bond lengths, bond angles, and χ dihedral angles. To simplify this space, protein design methods use rotamer libraries that assume ideal values for bond lengths and bond angles, while restricting χ dihedral angles to a few frequently observed low-energy conformations termed rotamers.
Rotamer libraries are derived from the statistical analysis of many protein structures. Backbone-independent rotamer libraries describe all rotamers irrespective of the local backbone conformation; backbone-dependent rotamer libraries, in contrast, describe how likely each rotamer is to appear depending on the protein backbone arrangement around the side chain. Most protein design programs use a single conformation (e.g., the modal value of the rotamer dihedrals) or several points in the region described by each rotamer; the OSPREY protein design program, in contrast, models the entire continuous region.
Although rational protein design must preserve the general backbone fold of a protein, allowing some backbone flexibility can significantly increase the number of sequences that fold to the target structure while maintaining its general fold. Backbone flexibility is especially important in protein redesign because sequence mutations often result in small changes to the backbone structure. Moreover, backbone flexibility can be essential for more advanced applications of protein design, such as binding prediction and enzyme design. Some models of backbone flexibility in protein design include small continuous global backbone movements, discrete backbone samples around the target fold, backrub motions, and protein loop flexibility.
Energy function
Rational protein design techniques must be able to discriminate sequences that will be stable under the target fold from those that would prefer other low-energy competing states. Thus, protein design requires accurate energy functions that can rank and score sequences by how well they fold to the target structure. At the same time, however, these energy functions must consider the computational challenges behind protein design. One of the most challenging requirements for successful design is an energy function that is both accurate and simple for computational calculations.
The most accurate energy functions are those based on quantum mechanical simulations. However, such simulations are too slow and typically impractical for protein design. Instead, many protein design algorithms use physics-based energy functions adapted from molecular mechanics simulation programs, knowledge-based energy functions, or a hybrid mix of both. The trend has been toward using more physics-based potential energy functions.
Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations and from experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify the physical energy function and make it pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding up the pairwise energies between each atom pair; this makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise Coulombic electrostatics term between non-bonded atoms.
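As an illustration of pairwise decomposability, here is a minimal Python sketch of such an energy term, combining a 12-6 Lennard-Jones potential with a Coulombic term; the function names, the combining rules, and the dielectric constant are illustrative assumptions rather than the parameters of any particular force field:

```python
import math

def lennard_jones(r, epsilon, sigma):
    """Attractive-repulsive 12-6 Lennard-Jones term between two atoms."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, q1, q2, dielectric=4.0):
    """Pairwise Coulombic term; 332.06 converts to kcal/mol when charges
    are in elementary units and distances are in angstroms."""
    return 332.06 * q1 * q2 / (dielectric * r)

def pair_energy(atoms_i, atoms_j):
    """Interaction energy between two groups of atoms (e.g., two rotamers),
    summed atom pair by atom pair -- the pairwise decomposition that makes
    these functions attractive for optimization. Each atom is a tuple
    (x, y, z, charge, epsilon, sigma)."""
    total = 0.0
    for xi, yi, zi, qi, ei, si in atoms_i:
        for xj, yj, zj, qj, ej, sj in atoms_j:
            r = math.dist((xi, yi, zi), (xj, yj, zj))
            eps = math.sqrt(ei * ej)      # Lorentz-Berthelot combining rules
            sig = 0.5 * (si + sj)
            total += lennard_jones(r, eps, sig) + coulomb(r, qi, qj)
    return total
```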
Statistical potentials, in contrast to physics-based potentials, have the advantage of being fast to compute, of accounting implicitly for complex effects, and of being less sensitive to small changes in the protein structure. These energy functions are based on deriving energy values from frequencies of appearance in a structural database.
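A minimal sketch of how such a knowledge-based energy can be obtained by inverse Boltzmann statistics, E = -kT ln(p_observed / p_reference); the pseudo-count and reference-state treatment here are simplistic, illustrative assumptions:

```python
import math

def statistical_potential(observed, reference, kT=0.593):
    """Energies from frequencies of appearance in a structural database:
    E(state) = -kT * ln(p_observed / p_reference). kT defaults to
    ~0.593 kcal/mol (room temperature). `observed` and `reference` map
    each state (e.g., a rotamer or a distance bin) to a raw count."""
    n_obs = sum(observed.values())
    n_ref = sum(reference.values())
    energies = {}
    for state, count in observed.items():
        p_obs = count / n_obs
        p_ref = reference.get(state, 1) / n_ref   # pseudo-count avoids log(0)
        energies[state] = -kT * math.log(p_obs / p_ref)
    return energies
```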
Protein design, however, has requirements that molecular mechanics force-fields do not always meet. Molecular mechanics force-fields, which have been used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, whereas protein design searches through many conformations of many sequences. Thus, molecular mechanics force-fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical and physics-based terms. For example, the Rosetta energy function, one of the most widely used, incorporates physics-based energy terms originating in the CHARMM energy function and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories and specifically tailored for every design.
Challenges for effective design energy functions
Water makes up most of the molecules surrounding proteins and is the main driver of protein structure, so modeling the interaction between water and protein is vital in protein design. The number of water molecules that interact with a protein at any given time, however, is huge, and each one has a large number of degrees of freedom and interaction partners, making explicit modeling intractable. Instead, protein design programs model most of these water molecules as a continuum, capturing both the hydrophobic effect and solvation polarization.
Individual water molecules can sometimes have a crucial structural role in the core of proteins, and in protein–protein or protein–ligand interactions. Failing to model such waters can result in mispredictions of the optimal sequence of a protein–protein interface. As an alternative, water molecules can be added to rotamers.
As an optimization problem
The goal of protein design is to find a protein sequence that will fold to a target structure. A protein design algorithm must, thus, search all the conformations of each sequence, with respect to the target fold, and rank sequences according to the lowest-energy conformation of each one, as determined by the protein design energy function. Thus, a typical input to the protein design algorithm is the target fold, the sequence space, the structural flexibility, and the energy function, while the output is one or more sequences that are predicted to fold stably to the target structure.
The number of candidate protein sequences, however, grows exponentially with the number of protein residues; for example, there are 20^100 protein sequences of length 100. Furthermore, even if amino acid side-chain conformations are limited to a few rotamers (see Structural flexibility), this results in an exponential number of conformations for each sequence. Thus, for a 100-residue protein, assuming that each amino acid has exactly 10 rotamers, a search algorithm that searches this space exhaustively would have to consider over 200^100 protein conformations.
The most common energy functions can be decomposed into pairwise terms between rotamers and amino acid types, which casts the problem as a combinatorial one, and powerful optimization algorithms can be used to solve it. In those cases, the total energy of each conformation belonging to each sequence can be formulated as a sum of individual and pairwise terms between residue positions. If a designer is interested only in the best sequence, the protein design algorithm only requires the lowest-energy conformation of the lowest-energy sequence. In these cases, the amino acid identity of each rotamer can be ignored and all rotamers belonging to different amino acids can be treated the same. Let r_i be a rotamer at residue position i in the protein chain, and E(r_i) the potential energy between the internal atoms of the rotamer. Let E(r_i, r_j) be the potential energy between r_i and rotamer r_j at residue position j. Then, we define the optimization problem as one of finding the conformation of minimum total energy E_T:

E_T = \sum_i E(r_i) + \sum_i \sum_{j>i} E(r_i, r_j)
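A direct transcription of this decomposition in Python; E_self and E_pair are hypothetical lookup functions standing in for the precomputed energy tables of a real design program:

```python
def total_energy(rotamers, E_self, E_pair):
    """E_T for one full rotamer assignment: the sum of all singleton terms
    plus all unique pairwise terms. rotamers[i] is the rotamer chosen at
    residue position i."""
    n = len(rotamers)
    energy = sum(E_self(i, rotamers[i]) for i in range(n))
    energy += sum(E_pair(i, rotamers[i], j, rotamers[j])
                  for i in range(n) for j in range(i + 1, n))
    return energy
```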
Minimizing E_T is an NP-hard problem. Nevertheless, in practice many instances of protein design can be solved exactly or optimized satisfactorily through heuristic methods.
Algorithms
Several algorithms have been developed specifically for the protein design problem. These algorithms can be divided into two broad classes: exact algorithms, such as dead-end elimination, which lack runtime guarantees but guarantee the quality of the solution; and heuristic algorithms, such as Monte Carlo, which are faster than exact algorithms but offer no guarantees on the optimality of the results. Exact algorithms guarantee that the optimization process produces the optimal solution according to the protein design model. Thus, if the predictions of exact algorithms fail when they are experimentally validated, the source of error can be attributed to the energy function, the allowed flexibility, the sequence space, or the target structure (e.g., if it cannot be designed for).
Some protein design algorithms are listed below. Although these algorithms address only the most basic formulation of the protein design problem, many of the extensions that improve modeling, such as greater structural flexibility (e.g., protein backbone flexibility) or more sophisticated energy terms, are built atop these algorithms. For example, Rosetta Design incorporates sophisticated energy terms and backbone flexibility using Monte Carlo as the underlying optimizing algorithm. OSPREY's algorithms build on the dead-end elimination algorithm and A* to incorporate continuous backbone and side-chain movements. Thus, these algorithms provide a good perspective on the different kinds of algorithms available for protein design.
In 2020, scientists reported the development of an AI-based process using genome databases for evolution-based design of novel proteins, using deep learning to identify design rules. In 2022, a study reported deep learning software that can design proteins containing prespecified functional sites.
With mathematical guarantees
Dead-end elimination
The dead-end elimination (DEE) algorithm reduces the search space of the problem iteratively by removing rotamers that can be provably shown not to be part of the global minimum energy conformation (GMEC). On each iteration, the dead-end elimination algorithm compares all possible pairs of rotamers at each residue position and removes each rotamer r′_i that can be shown to always be of higher energy than another rotamer r_i and is thus not part of the GMEC:

E(r'_i) + \sum_{j \neq i} \min_{r_j} E(r'_i, r_j) > E(r_i) + \sum_{j \neq i} \max_{r_j} E(r_i, r_j)
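A sketch of one pruning sweep under this singles criterion; the data structures are illustrative, and real implementations add the more powerful criteria discussed below:

```python
def dee_sweep(positions, E_self, E_pair):
    """One iteration of singles dead-end elimination. positions[i] is the
    list of candidate rotamers still alive at residue i. A rotamer rp is
    pruned when some competitor r at the same position satisfies
      E(rp) + sum_j min_s E(rp, s)  >  E(r) + sum_j max_s E(r, s).
    Returns True if anything was pruned, so the caller can iterate."""
    pruned = False
    for i, rots in enumerate(positions):
        others = [j for j in range(len(positions)) if j != i]
        survivors = []
        for rp in rots:
            best_case = E_self(i, rp) + sum(
                min(E_pair(i, rp, j, s) for s in positions[j]) for j in others)
            dominated = any(
                best_case > E_self(i, r) + sum(
                    max(E_pair(i, r, j, s) for s in positions[j])
                    for j in others)
                for r in rots if r != rp)
            if dominated:
                pruned = True
            else:
                survivors.append(rp)
        positions[i] = survivors
    return pruned
```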
Other powerful extensions to the dead-end elimination algorithm include the pairs elimination criterion, and the generalized dead-end elimination criterion. This algorithm has also been extended to handle continuous rotamers with provable guarantees.
Although the dead-end elimination algorithm runs in polynomial time on each iteration, it cannot guarantee convergence. If, after a certain number of iterations, the dead-end elimination algorithm does not prune any more rotamers, then either rotamers have to be merged or another search algorithm must be used to search the remaining search space. In such cases, dead-end elimination acts as a pre-filtering algorithm to reduce the search space, while other algorithms, such as A*, Monte Carlo, linear programming, or FASTER, are used to search the remaining space.
Branch and bound
The protein design conformational space can be represented as a tree, where the protein residues are ordered in an arbitrary way, and the tree branches at each of the rotamers in a residue. Branch and bound algorithms use this representation to efficiently explore the conformation tree: At each branching, branch and bound algorithms bound the conformation space and explore only the promising branches.
A popular search algorithm for protein design is the A* search algorithm. A* computes, for each partial tree path, a score that lower bounds (with guarantees) the energy of the best full conformation extending that path. Each partial conformation is added to a priority queue, and at each iteration the partial path with the lowest lower bound is popped from the queue and expanded. The algorithm stops once a full conformation has been enumerated and guarantees that this conformation is the optimal one.
The A* score f in protein design consists of two parts, f = g + h. g is the exact energy of the rotamers that have already been assigned in the partial conformation; h is a lower bound on the energy of the rotamers that have not yet been assigned. With d the index of the last assigned residue in the partial conformation and n the number of residues, a standard admissible choice is:

g = \sum_{i=1}^{d} \left( E(r_i) + \sum_{j=1}^{i-1} E(r_i, r_j) \right)

h = \sum_{i=d+1}^{n} \min_{r_i} \left( E(r_i) + \sum_{j=1}^{d} E(r_i, r_j) + \sum_{j=d+1}^{i-1} \min_{r_j} E(r_i, r_j) \right)
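A compact sketch of this best-first enumeration, with g and h as defined above; rotamers are assumed to be integer identifiers, and E_self and E_pair are hypothetical precomputed lookups:

```python
import heapq

def astar_gmec(positions, E_self, E_pair):
    """A* search for the minimum-energy conformation. A node is a partial
    conformation (rotamers assigned to residues 0..d-1 in order), scored
    by f = g (exact energy of the assigned part) + h (admissible lower
    bound on the rest). The first full conformation popped is optimal."""
    n = len(positions)

    def g(partial):
        d = len(partial)
        return (sum(E_self(i, partial[i]) for i in range(d)) +
                sum(E_pair(i, partial[i], j, partial[j])
                    for i in range(d) for j in range(i)))

    def h(partial):
        d = len(partial)
        bound = 0.0
        for i in range(d, n):          # each unassigned residue contributes
            bound += min(              # its best case against everything else
                E_self(i, r)
                + sum(E_pair(i, r, j, partial[j]) for j in range(d))
                + sum(min(E_pair(i, r, j, s) for s in positions[j])
                      for j in range(d, i))
                for r in positions[i])
        return bound

    queue = [(h(()), ())]
    while queue:
        f, partial = heapq.heappop(queue)
        if len(partial) == n:
            return partial, f          # provably the GMEC
        i = len(partial)
        for r in positions[i]:
            child = partial + (r,)
            heapq.heappush(queue, (g(child) + h(child), child))
```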
Integer linear programming
The problem of optimizing E_T can be naturally formulated as an integer linear program (ILP). One of the most powerful formulations uses binary variables to represent the presence of a rotamer and of edges in the final solution, and constrains the solution to have exactly one rotamer for each residue and one pairwise interaction for each pair of residues:

\min \sum_i \sum_{r_i} E(r_i)\, q_{r_i} + \sum_{i<j} \sum_{r_i, r_j} E(r_i, r_j)\, q_{r_i r_j}

s.t. \sum_{r_i} q_{r_i} = 1 for every residue i;
\sum_{r_j} q_{r_i r_j} = q_{r_i} for every pair of residues i, j and every rotamer r_i (and symmetrically for each r_j);
q_{r_i}, q_{r_i r_j} \in \{0, 1\}
ILP solvers, such as CPLEX, can compute the exact optimal solution for large instances of protein design problems. These solvers use a linear programming relaxation of the problem, where qi and qij are allowed to take continuous values, in combination with a branch and cut algorithm to search only a small portion of the conformation space for the optimal solution. ILP solvers have been shown to solve many instances of the side-chain placement problem.
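A sketch of the formulation above using the open-source PuLP modeling library (with its default CBC solver) as a stand-in for a commercial solver such as CPLEX; variable and function names are illustrative:

```python
import pulp

def design_ilp(positions, E_self, E_pair):
    """Solve for the GMEC with the binary-variable ILP formulation above."""
    prob = pulp.LpProblem("protein_design", pulp.LpMinimize)
    n = len(positions)
    # q[i, r] = 1 iff rotamer r is chosen at residue i
    q = {(i, r): pulp.LpVariable(f"q_{i}_{r}", cat="Binary")
         for i in range(n) for r in positions[i]}
    # qq[i, r, j, s] = 1 iff the pair (r at i, s at j) is in the solution
    qq = {(i, r, j, s): pulp.LpVariable(f"qq_{i}_{r}_{j}_{s}", cat="Binary")
          for i in range(n) for j in range(i + 1, n)
          for r in positions[i] for s in positions[j]}
    # objective: singleton energies plus pairwise energies
    prob += (pulp.lpSum(E_self(i, r) * q[i, r] for (i, r) in q) +
             pulp.lpSum(E_pair(i, r, j, s) * qq[i, r, j, s]
                        for (i, r, j, s) in qq))
    # exactly one rotamer per residue
    for i in range(n):
        prob += pulp.lpSum(q[i, r] for r in positions[i]) == 1
    # edge variables must agree with the chosen rotamers
    for i in range(n):
        for j in range(i + 1, n):
            for r in positions[i]:
                prob += pulp.lpSum(qq[i, r, j, s]
                                   for s in positions[j]) == q[i, r]
            for s in positions[j]:
                prob += pulp.lpSum(qq[i, r, j, s]
                                   for r in positions[i]) == q[j, s]
    prob.solve()
    return {i: r for (i, r) in q if q[i, r].value() == 1}
```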
Message-passing based approximations to the linear programming dual
ILP solvers depend on linear programming (LP) algorithms, such as simplex or barrier-based methods, to perform the LP relaxation at each branch. These LP algorithms were developed as general-purpose optimization methods and are not tailored to the protein design problem. In consequence, the LP relaxation becomes the bottleneck of ILP solvers when the problem size is large. Recently, several alternatives based on message-passing algorithms have been designed specifically for the optimization of the LP relaxation of the protein design problem. These algorithms can approximate either the dual or the primal instance of the integer program, but in order to maintain guarantees on optimality they are most useful when used to approximate the dual of the protein design problem, because approximating the dual guarantees that no solutions are missed. Message-passing based approximations include the tree-reweighted max-product message-passing algorithm and the message-passing linear programming algorithm.
Optimization algorithms without guarantees
Monte Carlo and simulated annealing
Monte Carlo is one of the most widely used algorithms for protein design. In its simplest form, a Monte Carlo algorithm selects a residue at random, and at that residue a randomly chosen rotamer (of any amino acid) is evaluated. The new energy of the protein, E_new, is compared against the old energy E_old, and the new rotamer is accepted with probability:

P(\text{accept}) = \min\left(1, e^{-\beta (E_{new} - E_{old})}\right)

where β = 1/(k_B T), k_B is the Boltzmann constant, and the temperature T can be chosen such that it is high in the initial rounds and is slowly annealed to overcome local minima.
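A minimal sketch of the procedure, reusing the total_energy function from above; the geometric cooling schedule and all numeric parameters are illustrative assumptions (a real implementation would also update the energy incrementally rather than recomputing it):

```python
import math
import random

def simulated_annealing(positions, E_self, E_pair, steps=100_000,
                        T_start=100.0, T_end=1.0):
    """Metropolis Monte Carlo with a geometric cooling schedule. Downhill
    moves are always accepted; uphill moves are accepted with probability
    exp(-dE / T), where T absorbs the Boltzmann constant (kT in the units
    of the energy function)."""
    n = len(positions)
    conf = [random.choice(positions[i]) for i in range(n)]
    energy = total_energy(conf, E_self, E_pair)
    for step in range(steps):
        T = T_start * (T_end / T_start) ** (step / steps)
        i = random.randrange(n)
        old_rot, old_energy = conf[i], energy
        conf[i] = random.choice(positions[i])      # propose a random rotamer
        energy = total_energy(conf, E_self, E_pair)
        dE = energy - old_energy
        if dE > 0 and random.random() >= math.exp(-dE / T):
            conf[i], energy = old_rot, old_energy  # reject: restore state
    return conf, energy
```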
FASTER
The FASTER algorithm uses a combination of deterministic and stochastic criteria to optimize amino acid sequences. FASTER first uses DEE to eliminate rotamers that are not part of the optimal solution. Then, a series of iterative steps optimize the rotamer assignment.
Belief propagation
In belief propagation for protein design, the algorithm exchanges messages that describe the belief that each residue has about the probability of each rotamer at neighboring residues. The algorithm updates messages on every iteration and iterates until convergence or until a fixed number of iterations; convergence is not guaranteed in protein design. The message m_{i→j}(r_j) that residue i sends to each rotamer r_j at a neighboring residue j can be written, in the min-sum (energy) form, as:

m_{i \to j}(r_j) = \min_{r_i} \left( E(r_i) + E(r_i, r_j) + \sum_{k \in N(i) \setminus \{j\}} m_{k \to i}(r_i) \right)
Both max-product and sum-product belief propagation have been used to optimize protein design.
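A sketch of one round of message updates in the min-sum form of the equation above; messages are assumed to be initialized to zero, and the data structures are illustrative:

```python
def bp_iteration(messages, positions, neighbors, E_self, E_pair):
    """One synchronous round of min-sum belief propagation.
    messages[(i, j)][rj] is residue i's current opinion of rotamer rj at
    neighboring residue j; neighbors[i] lists the residues that interact
    with residue i. Iterate until the messages stop changing (convergence
    is not guaranteed)."""
    updated = {}
    for i, rots_i in enumerate(positions):
        for j in neighbors[i]:
            updated[(i, j)] = {
                rj: min(E_self(i, ri) + E_pair(i, ri, j, rj)
                        + sum(messages[(k, i)][ri]
                              for k in neighbors[i] if k != j)
                        for ri in rots_i)
                for rj in positions[j]}
    return updated
```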
Applications and examples of designed proteins
Enzyme design
The design of new enzymes is an application of protein design with enormous bioengineering and biomedical potential. In general, designing a protein structure differs from designing an enzyme, because the design of enzymes must consider the many states involved in the catalytic mechanism. However, protein design is a prerequisite of de novo enzyme design because, at the very least, the design of catalysts requires a scaffold into which the catalytic mechanism can be inserted.
Great progress in de novo enzyme design, and redesign, was made in the first decade of the 21st century. In three major studies, David Baker and coworkers de novo designed enzymes for the retro-aldol reaction, a Kemp-elimination reaction, and for the Diels-Alder reaction. Furthermore, Stephen Mayo and coworkers developed an iterative method to design the most efficient known enzyme for the Kemp-elimination reaction. Also, in the laboratory of Bruce Donald, computational protein design was used to switch the specificity of one of the protein domains of the nonribosomal peptide synthetase that produces Gramicidin S, from its natural substrate phenylalanine to other noncognate substrates including charged amino acids; the redesigned enzymes had activities close to those of the wild-type.
Semi-rational design
Semi-rational design is a purposeful modification method based on a certain understanding of the sequence, structure, and catalytic mechanism of enzymes. The method sits between irrational design and rational design: it uses known information and techniques to perform evolutionary modification of the specific functions of a target enzyme. The characteristic of semi-rational design is that it does not rely solely on random mutation and screening, but combines these with the concept of directed evolution. It creates a library of random mutants with diverse sequences through mutagenesis, error-prone PCR, DNA recombination, and site-saturation mutagenesis, while using the understanding of enzymes and design principles to purposefully screen out mutants with the desired characteristics.
The methodology of semi-rational design emphasizes the in-depth understanding of enzymes and the control of the evolutionary process. It allows researchers to use known information to guide the evolutionary process, thereby improving efficiency and success rate. This method plays an important role in protein function modification because it can combine the advantages of irrational design and rational design, and can explore unknown space and use known knowledge for targeted modification.
Semi-rational design has a wide range of applications, including but not limited to enzyme optimization, modification of drug targets, and the evolution of biocatalysts. Through this method, researchers can more effectively improve the functional properties of proteins to meet specific biotechnological or medical needs. Although this method has high requirements for information and technology and is relatively difficult to implement, with the development of computing technology and bioinformatics the application prospects of semi-rational design in protein engineering are becoming increasingly broad.
Design for affinity
Protein–protein interactions are involved in most biological processes. Many of the hardest-to-treat diseases, such as Alzheimer's, many forms of cancer (e.g., TP53), and human immunodeficiency virus (HIV) infection involve protein–protein interactions. Thus, to treat such diseases, it is desirable to design protein or protein-like therapeutics that bind one of the partners of the interaction and thus disrupt the disease-causing interaction. This requires designing protein therapeutics for affinity toward their partner.
Protein–protein interactions can be designed using protein design algorithms because the principles that rule protein stability also rule protein–protein binding. Protein–protein interaction design, however, presents challenges not commonly present in protein design. One of the most important challenges is that, in general, the interfaces between proteins are more polar than protein cores, and binding involves a tradeoff between desolvation and hydrogen bond formation. To overcome this challenge, Bruce Tidor and coworkers developed a method to improve the affinity of antibodies by focusing on electrostatic contributions. They found that, for the antibodies designed in the study, reducing the desolvation costs of the residues in the interface increased the affinity of the binding pair.
Scoring binding predictions
Protein design energy functions must be adapted to score binding predictions because binding involves a trade-off between the lowest-energy conformations of the free proteins (E_P and E_L) and the lowest-energy conformation of the bound complex (E_PL):

\Delta E_{binding} = E_{PL} - E_P - E_L
The K* algorithm approximates the binding constant by including conformational entropy in the free energy calculation. The K* algorithm considers only the lowest-energy conformations of the free and bound complexes (denoted by the sets P, L, and PL) to approximate the partition function of each complex:

K^* = \frac{q_{PL}}{q_P \, q_L}, \quad \text{where } q_X = \sum_{x \in X} e^{-E_x / RT}
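A sketch of the computation; the conformational ensembles and their energies would come from an enumeration such as the A* search above, and the constants and names are illustrative:

```python
import math

R = 0.0019872  # gas constant, kcal/(mol*K)

def partition_function(energies, T=298.15):
    """Boltzmann-weighted sum over an ensemble of low-energy conformations:
    q = sum_x exp(-E_x / RT)."""
    return sum(math.exp(-E / (R * T)) for E in energies)

def k_star(E_PL, E_P, E_L, T=298.15):
    """K* score: partition function of the bound complex over the product
    of the partition functions of the free protein and free ligand."""
    return (partition_function(E_PL, T) /
            (partition_function(E_P, T) * partition_function(E_L, T)))
```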
Design for specificity
The design of protein–protein interactions must be highly specific because proteins can interact with a large number of proteins; successful design requires selective binders. Thus, protein design algorithms must be able to distinguish between on-target (or positive design) and off-target binding (or negative design). One of the most prominent examples of design for specificity is the design of specific bZIP-binding peptides by Amy Keating and coworkers for 19 out of the 20 bZIP families; 8 of these peptides were specific for their intended partner over competing peptides. Further, positive and negative design was also used by Anderson and coworkers to predict mutations in the active site of a drug target that conferred resistance to a new drug; positive design was used to maintain wild-type activity, while negative design was used to disrupt binding of the drug. Recent computational redesign by Costas Maranas and coworkers was also capable of experimentally switching the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.
Protein resurfacing
Protein resurfacing consists of designing a protein's surface while preserving the overall fold, core, and boundary regions of the protein. Protein resurfacing is especially useful for altering the binding of a protein to other proteins. One of the most important applications of protein resurfacing was the design of the RSC3 probe to select broadly neutralizing HIV antibodies at the NIH Vaccine Research Center. First, residues outside the binding interface between the gp120 HIV envelope protein and the previously discovered b12 antibody were selected for design. Then the sequence space was selected based on evolutionary information, solubility, similarity with the wild type, and other considerations. The RosettaDesign software was then used to find optimal sequences in the selected sequence space. RSC3 was later used to discover the broadly neutralizing antibody VRC01 in the serum of a long-term HIV-infected non-progressor individual.
Design of globular proteins
Globular proteins are proteins that contain a hydrophobic core and a hydrophilic surface. Globular proteins often assume a stable structure, unlike fibrous proteins, which have multiple conformations. The three-dimensional structure of globular proteins is typically easier to determine through X-ray crystallography and nuclear magnetic resonance than that of fibrous and membrane proteins, which makes globular proteins more attractive for protein design. Most successful protein designs have involved globular proteins. Both RSD-1 and Top7 were de novo designs of globular proteins. Five more protein structures were designed, synthesized, and verified in 2012 by the Baker group. These new proteins serve no biological function, but the structures are intended to act as building blocks that can be expanded to incorporate functional active sites. The structures were found computationally by using new heuristics based on analyzing the connecting loops between parts of the sequence that specify secondary structures.
Design of membrane proteins
Several transmembrane proteins have been successfully designed, along with many other membrane-associated peptides and proteins. Recently, Costas Maranas and his coworkers developed an automated tool to redesign the pore size of Outer Membrane Porin Type-F (OmpF) from E. coli to any desired sub-nanometre size and assembled the redesigned porins in membranes to perform precise angstrom-scale separation.
Other applications
One of the most desirable uses for protein design is for biosensors, proteins that will sense the presence of specific compounds. Some attempts at the design of biosensors include sensors for unnatural molecules, including TNT. More recently, Kuhlman and coworkers designed a biosensor of the protein PAK1.
See also
References
Further reading
Protein engineering
Protein structure | Protein design | [
"Chemistry"
] | 6,287 | [
"Protein structure",
"Structural biology"
] |
1,582,429 | https://en.wikipedia.org/wiki/Birnaviridae | Birnaviridae is a family of double-stranded RNA viruses. Salmonid fish, birds, and insects serve as natural hosts. There are currently 11 species in this family, divided among seven genera. Diseases associated with this family include infectious pancreatic necrosis in salmonid fish, which causes significant losses to the aquaculture industry, producing chronic infection in adult fish and acute viral disease in the young.
Structure
Viruses in family Birnaviridae are non-enveloped, with icosahedral single-shelled geometries, and T=13 symmetry. The diameter is around 70 nm.
Genome
The genome is composed of linear, bi-segmented, double-stranded RNA. It is around 5.9–6.9 kbp in length and codes for five to six proteins. Birnaviruses encode the following proteins:
RNA-directed RNA polymerase (VP1), which lacks the highly conserved Gly-Asp-Asp (GDD) sequence, a component of the proposed catalytic site of this enzyme family that exists in the conserved motif VI of the palm domain of other RNA-directed RNA polymerases.
The large RNA segment, segment A, of birnaviruses codes for a polyprotein (N-VP2-VP4-VP3-C) that is processed into the structural proteins of the virion: VP2 (the major structural protein), VP3 (a minor structural component), and the putative protease VP4. The VP4 protease is involved in generating VP2 and VP3. Recombinant VP3 is more immunogenic than recombinant VP2.
Infectious pancreatic necrosis virus (IPNV), a birnavirus, is an important pathogen in fish farms. Analyses of viral proteins showed that VP2 is the major structural and immunogenic polypeptide of the virus. All neutralizing monoclonal antibodies are specific to VP2 and bind to continuous or discontinuous epitopes. The variable domain of VP2 and the 20 adjacent amino acids of the conserved C-terminal are probably the most important in inducing an immune response for the protection of animals.
The non-structural protein VP5 is encoded on RNA segment A. The function of this small viral protein is unknown; it is believed to influence apoptosis, though studies do not fully agree. The protein is not found in the virion.
Viral replication
Viral replication is cytoplasmic. Entry into the host cell is achieved by receptor-mediated endocytosis. Replication and transcription follow the double-stranded RNA virus model in the cytoplasm. The virus is released by budding. Salmonid fish (Aquabirnavirus), young sexually immature chickens (Avibirnavirus), insects (Entomobirnavirus), and blotched snakehead fish (Blosnavirus) are the natural hosts. Transmission is through contact.
Taxonomy
The following genera are recognized:
Aquabirnavirus
Avibirnavirus
Blosnavirus
Dronavirus
Entomobirnavirus
Ronavirus
Telnavirus
References
External links
ICTV Report: Birnaviridae
Viralzone: Birnaviridae
Protein families
Virus families
Riboviria | Birnaviridae | [
"Biology"
] | 687 | [
"Protein families",
"Viruses",
"Riboviria",
"Protein classification"
] |
1,583,111 | https://en.wikipedia.org/wiki/Squamous%20metaplasia | Squamous metaplasia is a benign non-cancerous change (metaplasia) of surfacing lining cells (epithelium) to a squamous morphology.
Location
Common sites for squamous metaplasia include the bladder and cervix. Smokers often exhibit squamous metaplasia in the linings of their airways. These changes do not signify a specific disease; rather, they usually represent the body's response to stress or irritation. Vitamin A deficiency or overdose can also lead to squamous metaplasia.
Uterine cervix
In the cervix, squamous metaplasia is sometimes found in the endocervix, which is normally composed of simple columnar epithelium, whereas the ectocervix is composed of stratified squamous non-keratinized epithelium.
Significance
Squamous metaplasia may be seen in the context of benign lesions (e.g., atypical polypoid adenomyoma), chronic irritation, or cancer (e.g., endometrioid endometrial carcinoma), as well as pleomorphic adenoma.
See also
Metaplasia
Dysplasia
Barrett esophagus - a columnar cell metaplasia of squamous epithelium
Subareolar abscess
References
Histopathology | Squamous metaplasia | [
"Chemistry"
] | 297 | [
"Histopathology",
"Microscopy"
] |
1,584,239 | https://en.wikipedia.org/wiki/SCK%20CEN | SCK CEN (the Belgian Nuclear Research Centre), until 2020 shortened as SCK•CEN, is the Belgian nuclear research centre located in Mol, Belgium. SCK CEN is a global leader in the field of nuclear research, services, and education.
History
SCK CEN was founded in 1952 and originally named Studiecentrum voor de Toepassingen van de Kernenergie (Research Centre for the Applications of Nuclear Energy), abbreviated to STK. Land was bought in the municipality of Mol, and over the following years many technical, administrative, medical, and residential buildings were constructed on the site. From 1956 to 1964 four nuclear research reactors became operational: BR1, BR2, BR3 (the first pressurized water reactor in Europe), and VENUS.
In 1963 SCK CEN already employed 1600 people, a number that would remain about the same over the next decades. In 1970 SCK CEN widened its field of activities outside the nuclear sector, but the emphasis remained on nuclear research. In 1991 SCK CEN was split and a new institute, VITO (Vlaamse Instelling voor Technologisch Onderzoek; Flemish institute for technological research), took over the non-nuclear activities. SCK CEN currently has about 850 employees.
In the 1980s, SCK CEN employees were bribed to receive and store high-level nuclear waste from the West German firm Transnuklear.
In 2017, the International Atomic Energy Agency designated SCK CEN as one of the four International Centres based on Research Reactor (ICERR).
Organisation profile
SCK CEN is a foundation of public utility with a legal status according to private law, under the guidance of the Belgian Federal Ministry in charge of energy. SCK CEN has more than 800 employees and an annual budget of €180 million. The organization receives 25% of its funding directly from government grants, 5% indirectly via activities for the dismantling of declassified installations and 70% from contract work and services.
Since 1991, the organization's statutory mission gives priority to research on problems of societal concern:
Safety of nuclear installations
Radiation protection
Medical and industrial applications of radiation
The back end of the nuclear fuel cycle (nuclear reprocessing and management of radioactive waste)
Nuclear decommissioning and decontamination of nuclear sites
The fight against nuclear proliferation
To these domains, SCK CEN contributes with research and development, training, communication, and services. This is done with a view to sustainable development, and hence taking into account environmental, economical and social factors.
Chairmen of the Board of Governors (since 1952)
Count Pierre Ryckmans (1952-1959)
Count Marc de Hemptinne (1959-1963)
Professor Julien Hoste (1963-1963)
General Letor (1963-1971)
Mr. André Baeyens (1971-1975)
Baron Frans Van den Bergh (1975-1986)
Mr. Ivo Van Vaerenbergh (1986-1989)
Professor Roger Van Geen (1991-1995)
Professor Frank Deconinck (1996-2013)
Baron Derrick Gosselin (since 2013)
Reactors
BR1
The Belgian Reactor 1 (BR1) was the first research reactor to be built and commissioned in Belgium. This natural-uranium, air-cooled, graphite-moderated reactor was commissioned in 1956. Its maximal thermal power is 4 MW, but it is presently operated at only 700 kW. Its natural uranium inventory could allow the reactor to run without refueling for several centuries (~300 years). At first, this research reactor was used primarily for research into reactor and neutron physics, for neutron activation analysis, and for minor production of radionuclides. Now it is used for the irradiation of components, the calibration of measuring instruments, for performing analyses, and for training nuclear students. BR1 operates by order of other research centres, universities, and industry.
BR2
Commissioned in 1962, the Belgian Reactor 2 (BR2) is a materials testing reactor. It is a high-flux reactor (~10^15 neutrons per cm² per second) in which neutrons are moderated by a beryllium matrix and cooled by light water pumped at low pressure (12-15 bar). Its core is very compact owing to the particular shape of its beryllium matrix (a paraboloid of revolution), which allows the fuel rods, the control rods, and the experiments to be installed in a very small volume (~1 m³). It is said that this very compact core architecture was quickly sketched on a beer mat during a late-night discussion between nuclear physicists in a New York bar at the end of the 1950s or the beginning of the 1960s. At the request of the US authorities, its nuclear fuel is presently based on low-enriched uranium (LEU) to minimize the risk of nuclear proliferation. Its thermal power (100 MW) is dissipated into the environment by water heated to a modest temperature (40-48 °C). This research reactor is also used for the production of medical radioisotopes: BR2 produces more than 25% of the worldwide demand for molybdenum-99 on an annual basis, and in peak periods even up to 65%.
BR3
The Belgian Reactor 3 was the first pressurised water reactor (PWR) in Europe. The reactor served as a prototype for the reactors in Doel and Tihange. It was taken into service in 1962 and permanently shut down in 1987.
Decommissioning
Decommissioning started in 2002. The European Commission selected BR3 as a pilot project to show the technical and economic feasibility of the dismantling of a reactor under real conditions.
VENUS
The research reactor VENUS (Vulcan Experimental Nuclear Study) was commissioned in 1964. VENUS is used as an experimental installation for nuclear reactor physics studies of new reactor systems and for testing reactor calculations. The installation has been rebuilt and modernised several times. As part of the GUINEVERE project, SCK CEN decided to rebuild the VENUS reactor into a scale model of an Accelerator Driven System (ADS); the particle accelerator was first connected in 2011. VENUS is a "zero-power reactor": it operates at a power of only 500 watts.
MYRRHA
MYRRHA (Multi-purpose hYbrid Research Reactor for High-tech Applications) is a design for the world's first research reactor driven by a particle accelerator.
INES incidents
After a leak in a hot cell of the BR2 reactor, selenium-75 was released into the atmosphere on 15 May 2019. The event was classified by FANC at level 1 of the International Nuclear and Radiological Event Scale (INES). Selenium-75 (half-life 119.8 days) was detected at low concentrations on aerosol filters from several air monitoring stations belonging to IRSN (Institut de Radioprotection et de Sûreté Nucléaire, France) in the Lille area and in the northwestern part of France. IRSN also performed an atmospheric dispersion modeling analysis. The dose assessment showed very low exposure levels (< 1 microsievert), without concern for public health in France.
The power of the BR2 reactor was insufficiently measured on January 27, 2021, because two of the three measuring chains were not functioning in accordance with the regulations and the third was defective. Since the installation had two independent sets of three measuring chains, any power variations could still be detected.
FANC has classified this incident at level 2 on the INES scale, not only because the operating conditions were not respected, but also because a similar incident had already occurred at SCK CEN in 2019. These two incidents were related to a lack of safety culture from the licensee leading to inappropriate operations.
Research activities
The Centre's research activities are concentrated in the following main tracks.
HADES
In 1980, SCK CEN started the construction of an underground research laboratory (URL) 223 m below ground level to study the feasibility of geological disposal in the deep clay layers of the Boom Clay Formation at the Mol site. The underground laboratory was given the name HADES, after the god of the underworld in Greek mythology; it is also an acronym for High Activity Disposal Experimental Site. Here, for more than 45 years, scientists have performed research on the geomechanical, geochemical, mineralogical, and microbiological characteristics of Boom Clay and on the interactions between the clay and the candidate materials for the waste packages. The underground laboratory HADES is now operated by ESV EURIDICE, an economic partnership between SCK CEN and NIRAS.
Snow White
In 2018, SCK CEN commissioned a Snow White (JL-900) early warning system. This installation aspirates 900 m³ of air per hour across filters, which are replaced and analysed on a weekly basis. Because the system draws in large quantities of air, SCK CEN can detect very low concentrations of radioactivity in airborne dust. In this way, radioactive emissions, even when originating from abroad, do not go unnoticed. Detection of low concentrations may indicate an abnormal emission, such as a hidden leak, or signal a nuclear incident. Snow White successfully detected airborne Cs-137 released during forest fires in the Chornobyl Exclusion Zone in Ukraine in 2020.
Nuclear Materials Science
Research is performed to improve the knowledge, understanding, and numerical simulation of the behaviour of materials under irradiation, and from there on predicting their performance. The aim is to develop, assess and validate new materials such as nuclear fuel, construction materials, and radioisotopes to be used in nuclear applications.
Advanced Nuclear Systems
Extensive contributions are made to extend the present Belgian expertise in the field of developments related to GEN IV reactor systems and ITER. In co-operation with the industry and international research teams, R&D efforts are made to develop and test innovative reactor technologies and instrumentation. This will contribute to the construction of an experimental fast spectrum installation (MYRRHA), allowing a.o. transmutation processes to be performed.
Environment, Health and Safety
Next to specialised R&D in the fields of, a.o., radiobiology and radioecology, environmental chemistry, decommissioning, and radioactive waste management and disposal, SCK CEN also delivers high-quality measurement services such as radiation dosimetry, calibration, and spectrometry. Policy support, decision making, and research on the integration of social aspects into nuclear research help address complex problems related to radiation protection and energy policy.
For meteorological measurements, the facility has a 121.1-metre-tall guyed mast.
Education and Training – Academy (ACA)
Throughout its more than 60 years of research experience in the field of peaceful applications of nuclear science and technology, SCK CEN has also conducted education and training. The ACA activities at SCK CEN cover, a.o., reactor physics, reactor operation, reactor engineering, radiation protection, decommissioning, and waste management. Next to courses, SCK CEN also offers students the possibility to perform their research work at its laboratories and research reactors. Final-year students and Ph.D. candidates can enter a programme outlined together with an SCK CEN mentor and in close collaboration with a university promotor. Post-docs are mainly recruited in specialised research domains that reflect the priority programmes and R&D topics of the institute.
The Atoomwijk
The Atoomwijk was built to accommodate the employees. When the Flemish Institute for Technological Research was set up, a number of apartments were transferred to it, but the majority of the district is still owned by the research centre. In addition to housing, the district also contains sports infrastructure.
Increased risk of cancer?
On behalf of the Belgian Ministry of Social Affairs and Public Health, Sciensano conducted the Nucabel 2 study from 9 January 2017 to 30 June 2020. This national epidemiological study focused on the possible health risks, mainly cancer, for people living in the vicinity of Belgian nuclear sites. The results of Nucabel 2 indicate that the incidence in the close vicinity (< 5 km) of the Mol-Dessel nuclear site is three times higher than in the rest of Belgium. The results are statistically significant; nevertheless, the number of observed cases remains low.
However, the results of this study - as the Sciensano researchers also indicate - cannot establish a causal link between the occurrence of cancer cases and the proximity of the Mol-Dessel site.
Additional information on the Nucabel 2 study:
The Sciensano study was a descriptive epidemiological study in which no attention was paid to:
other sources to which Belgians may be exposed, such as medical applications or background radiation;
the effective dose that would be emitted in Mol/Dessel;
individual factors, such as infections, genetics, and other risk factors.
After SCK CEN was questioned further on the first two of these points, the following emerged:
Every year, a Belgian is on average exposed to a dose of 4 millisieverts, almost half of which comes from medical applications. This - like the exposure from natural background radiation - was not taken into account, although even for the most critical (most exposed) members of the surrounding population these sources represent a much larger dose burden. The doses from the discharges of nuclear installations are so small that, compared to natural and medical exposure, their contribution is almost negligible.
The effective dose from all atmospheric discharges and all exposure routes of the SCK CEN installations amounts to a maximum of 2 microsieverts (μSv) per year. This is 1/50 of the limit of 100 μSv per year for the whole nuclear site, and 500 times less than the effective dose from natural exposure in the Kempen.
See also
Edgar Sengier
European Atomic Energy Community (EURATOM)
Flemish Institute for Technological Research (VITO)
List of Cancer Clusters
Nuclear Energy in Belgium
References
External links
Official history brochure
SCK CEN’s public Institutional Repository
Nuclear research institutes
Radiation protection organizations
Research institutes in Belgium
Nuclear technology in Belgium
Buildings and structures in Antwerp Province
Mol, Belgium | SCK CEN | [
"Engineering"
] | 2,882 | [
"Nuclear research institutes",
"Nuclear organizations",
"Radiation protection organizations"
] |
1,584,291 | https://en.wikipedia.org/wiki/Guastavino%20tile | The Guastavino tile arch system is a version of Catalan vault introduced to the United States in 1885 by Spanish architect and builder Rafael Guastavino (1842–1908). It was patented in the United States by Guastavino in 1892.
Description
Guastavino vaulting is a technique for constructing robust, self-supporting arches and architectural vaults using interlocking terracotta tiles and layers of mortar to form a thin skin, with the tiles following the curve of the roof, as opposed to being laid horizontally (corbelling) or perpendicular to the curve (as in Roman vaulting). This is known as timbrel vaulting, because of its supposed likeness to the skin of a timbrel or tambourine. It is also called Catalan vaulting (though Guastavino did not use this term) and "compression-only thin-tile vaulting".
Guastavino tile is found in some of the most prominent Beaux-Arts structures in New York and Massachusetts, as well as in major buildings across the United States. In New York City, these include the Grand Central Oyster Bar & Restaurant and the remnants of the Della Robbia Bar at the former Vanderbilt Hotel at 4 Park Avenue. It is also found in some non-Beaux-Arts structures such as the crossing of the Cathedral of St. John the Divine.
Construction
The Guastavino terracotta tiles are standardized, less than thick, and about across. They are usually set in three herringbone-pattern courses with a sandwich of thin layers of Portland cement. Unlike heavier stone construction, these tile domes could be built without centering. Supporting formwork was still required for structural arches which established a framework for the ceiling. The large openings framed by the support arches were then filled in with thin Guastavino tiles fabricated into domed surfaces. Each ceiling tile was cantilevered out over the open space, relying only on the quick-drying cements developed by the company. Akoustolith, a special sound-absorbing tile, was one of several trade names used by Guastavino.
Significance
Guastavino tile has both structural and aesthetic significance.
Structurally, the timbrel vault was based on traditional vernacular vaulting techniques already very familiar to Mediterranean architects, but not well known in America. Terracotta free-span timbrel vaults were far more economical and structurally resilient than the ancient Roman vaulting alternatives.
Guastavino wrote extensively about his system of "Cohesive Construction". As the name suggests, he believed that these timbrel vaults represented an innovation in structural engineering. The tile system provided solutions that were impossible with traditional masonry arches and vaults. Subsequent research has shown the timbrel vault is simply a masonry vault, much less thick than traditional arches, that produces less horizontal thrust due to its lighter weight. This permits flatter arch profiles, which would produce unacceptable horizontal thrust if constructed in thicker, heavier masonry.
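This thrust argument can be made concrete with standard arch statics: for a parabolic arch of span L and rise f carrying a uniform load w per unit length, the horizontal thrust at the springing is H = wL²/(8f). The sketch below uses purely illustrative numbers, not measurements from any Guastavino structure:

```python
def horizontal_thrust(w, span, rise):
    """Horizontal thrust of a parabolic arch under uniform load:
    H = w * L**2 / (8 * f). Thrust grows with the vault's weight and
    shrinks as the rise increases."""
    return w * span ** 2 / (8.0 * rise)

# Halving the shell's self-weight halves the thrust, so the same abutments
# can carry a vault with half the rise -- the flatter profiles that the
# lighter thin-tile system made practical. (Illustrative numbers, kN and m.)
heavy_vault = horizontal_thrust(w=10.0, span=12.0, rise=3.0)
thin_tile_vault = horizontal_thrust(w=5.0, span=12.0, rise=1.5)
print(heavy_vault, thin_tile_vault)  # 60.0 60.0: equal thrust, flatter vault
```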
Exhibitions
In 2012, a group of students under supervision of MIT professor John Ochsendorf built a full-scale reproduction of a small Guastavino vault. The resulting structure was exhibited, as well as a time lapse video documenting the construction process.
Ochsendorf also curated Palaces for the People, an exhibition featuring the history and legacy of Guastavino which was premiered in September 2012 at the Boston Public Library, Rafael Guastavino's first major architectural work in America. The exhibition then traveled to the National Building Museum in Washington DC, and an expanded version later appeared at the Museum of the City of New York. Ochsendorf, a winner of the MacArthur Foundation "genius grant", also wrote the book-length color-illustrated monograph Guastavino Vaulting: The Art of Structural Tile, and an online exhibition coordinated with the traveling exhibits.
In addition, Ochsendorf directs the Guastavino Project at MIT, which researches and maintains the Guastavino.net online archive of related materials.
Archival sources
The Guastavino company was headquartered in Woburn, Massachusetts, in a building of their own design which still stands. The records and drawings of the Guastavino Fireproof Construction Company are preserved by the Department of Drawings & Archives in the Avery Architectural and Fine Arts Library at Columbia University in New York City.
See also
Glazed architectural terra-cotta
List of architectural vaults
First Church of Christ, Scientist (Cambridge, Massachusetts)
Ed Koch Queensboro Bridge
Basilica of St. Lawrence, Asheville
Biltmore Estate
Grant's Tomb
Grand Central Oyster Bar & Restaurant
Notes
Further reading
External links
global database of Guastavino sites with photos. Created as a companion to a museum exhibition that traveled to three American museums, 2012–2014.
Guastavino.net: documenting Guastavino's work in the Boston area. This page provides copies of writings and patents by the Guastavinos as well.
Rafaelguastavino.com: documenting Guastavino's work in New York City
"CONSTRUCTION OF A VAULT", Massachusetts Institute of Technology (shows method of construction)
Tiling
Building materials
Masonry
Structural system
Architecture in Spain | Guastavino tile | [
"Physics",
"Technology",
"Engineering"
] | 1,044 | [
"Structural engineering",
"Masonry",
"Building engineering",
"Structural system",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
1,584,613 | https://en.wikipedia.org/wiki/Air%20cycle%20machine | An air cycle machine (ACM) is the refrigeration unit of the environmental control system (ECS) used in pressurized gas turbine-powered aircraft. Normally an aircraft has two or three of these ACMs. Each ACM and its components are often referred to as an air conditioning pack. The air cycle cooling process uses air instead of a phase-changing material such as Freon in the gas cycle. No condensation or evaporation of a refrigerant is involved, and the cooled air output from the process is used directly for cabin ventilation or for cooling electronic equipment.
History
Air cycle machines were first developed in the 19th century for providing chilling on ships. The technique is a reverse Brayton cycle (the thermodynamic cycle of a gas turbine engine) and is also known as a Bell Coleman cycle or "Air-Standard Refrigeration Cycle".
Technical details
The usual compression, cooling and expansion seen in any refrigeration cycle is accomplished in the ACM by a centrifugal compressor, two air-to-air heat exchangers and an expansion turbine.
Bleed air from the engines, an auxiliary power unit, or a ground source, which can be in excess of 150 °C and at substantially elevated pressure, is directed into a primary heat exchanger. Outside air at ambient temperature and pressure is used as the coolant in this air-to-air heat exchanger. Once the hot air has been cooled, it is then compressed by the centrifugal compressor. This compression heats the air (the maximum air temperature at this point is about 250 °C) and it is sent to the secondary heat exchanger, which again uses outside air as the coolant. The pre-cooling through the first heat exchanger increases the efficiency of the ACM because it lowers the temperature of the air entering the compressor, so that less work is required to compress a given air mass (the energy required to compress a gas by a given ratio rises as the temperature of the incoming gas rises).
At this point, the temperature of the compressed cooled air is somewhat greater than the ambient temperature of the outside air. The compressed, cooled air then travels through the expansion turbine, which extracts work from the air as it expands, cooling it to below ambient temperature (down to −20 °C or −30 °C). It is possible for the ACM to produce air cooled to less than 0 °C even when outside air temperature is high (as might be experienced with the aircraft stationary on the ground in a hot climate). The work extracted by the expansion turbine is transmitted by a shaft to spin the pack's centrifugal compressor and an inlet fan which draws in the external air for the heat exchangers during ground running; ram air is used in flight. The power for the air conditioning pack comes from the reduction of the pressure of the incoming bleed air relative to that of the cooled air exiting the system.
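The efficiency point about pre-cooling follows from the ideal-gas relation for isentropic compression, w = cp·T_in·((P_out/P_in)^((γ−1)/γ) − 1): the work per unit mass is proportional to the absolute inlet temperature. A minimal sketch, with an assumed pressure ratio and an assumed pre-cooled inlet temperature:

```python
# Isentropic compression work per unit mass of air (ideal-gas model):
#   w = cp * T_in * ((P_out / P_in)**((gamma - 1) / gamma) - 1)
CP = 1005.0       # J/(kg K), specific heat of air at constant pressure
GAMMA = 1.4       # ratio of specific heats for air

def compressor_work(t_in_kelvin, pressure_ratio):
    return CP * t_in_kelvin * (pressure_ratio**((GAMMA - 1) / GAMMA) - 1)

ratio = 2.0                                    # assumed pressure ratio
hot = compressor_work(150 + 273.15, ratio)     # uncooled bleed air at 150 C
cool = compressor_work(60 + 273.15, ratio)     # pre-cooled air (assumed 60 C)
print(f"work with hot inlet:        {hot / 1000:6.1f} kJ/kg")
print(f"work with pre-cooled inlet: {cool / 1000:6.1f} kJ/kg")
```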
The next step is to dehumidify the air. Cooling the air has caused any water vapor it contains to condense into fog, which can be removed using a cyclonic separator. Historically, the water extracted by the separator was simply dumped overboard, but newer ACMs spray the water into the outside-air intakes for each heat exchanger, which gives the coolant a greater heat capacity and improves efficiency. (It also means that running the ACM on an airplane parked on the tarmac does not leave a puddle.)
The air can now be combined in a mixing chamber with a small amount of non-conditioned engine bleed air. This warms the air to the desired temperature, and then the air is vented into the cabin or to electronic equipment.
Manufacturers
Major manufacturers of ACM are Honeywell Aerospace, Liebherr Aerospace, Collins Aerospace, and PBS Velka Bites.
Nomenclature
Types
The types of air cycle machines may be identified as:
Simple cycle consisting of a turbine and fan on a common shaft
Two-wheel bootstrap consisting of a turbine and compressor on a common shaft
Three-wheel consisting of a turbine, compressor, and fan on a common shaft
Four-wheel/dual-spool consisting of two turbines, a compressor, and a fan on a common shaft
Abbreviations
The equipment is referred to variously as PAC, air conditioning pack, or A/C pack, but there is a lack of consistency and agreement as to the derivations and meanings:
Pack, as an abbreviation of package, applied to both pneumatic and non-pneumatic systems (Boeing, Airbus, Embraer, Bombardier and Lockheed)
PAC as an acronym meaning either Passenger Air Conditioning or pneumatic air conditioning (the latter being found on systems control panels of at least one business jet supplier)
PACK as an acronym for pneumatic air cycle kit or pressurization & air conditioning kit
See also
Brayton cycle
Refrigeration
Aerotoxic syndrome
References
External links
What is air cycle?
Legislation and guidance from the UK Government and the National Health Service (scroll to page 5 for schematic of ACM system)
Gas compressors
Aircraft components | Air cycle machine | [
"Chemistry"
] | 1,049 | [
"Gas compressors",
"Turbomachinery"
] |
1,584,732 | https://en.wikipedia.org/wiki/Thermochromism | Thermochromism is the property of substances to change color due to a change in temperature. A mood ring is an example of this property used in a consumer product although thermochromism also has more practical uses, such as baby bottles, which change to a different color when cool enough to drink, or kettles which change color when water is at or near boiling point. Thermochromism is one of several types of chromism.
Organic materials
Thermochromic liquid crystals
The two common approaches are based on liquid crystals and leuco dyes. Liquid crystals are used in precision applications, as their responses can be engineered to accurate temperatures, but their color range is limited by their principle of operation. Leuco dyes allow a wider range of colors to be used, but their response temperatures are more difficult to set with accuracy.
Some liquid crystals are capable of displaying different colors at different temperatures. This change is dependent on selective reflection of certain wavelengths by the crystalline structure of the material, as it changes between the low-temperature crystalline phase, through the anisotropic chiral or twisted nematic phase, to the high-temperature isotropic liquid phase. Only the nematic mesophase has thermochromic properties; this restricts the effective temperature range of the material.
The twisted nematic phase has the molecules oriented in layers with regularly changing orientation, which gives them periodic spacing. The light passing through the crystal undergoes Bragg diffraction on these layers, and the wavelength with the greatest constructive interference is reflected back, which is perceived as a spectral color. A change in the crystal temperature can result in a change of spacing between the layers and therefore in the reflected wavelength. The color of the thermochromic liquid crystal can therefore continuously range from non-reflective (black) through the spectral colors to black again, depending on the temperature. Typically, the high temperature state will reflect blue-violet, while the low-temperature state will reflect red-orange. Since blue is a shorter wavelength than red, this indicates that the distance of layer spacing is reduced by heating through the liquid-crystal state.
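The selective reflection described above is often summarized by the Bragg-like relation λ ≈ n̄·p, where n̄ is the mean refractive index and p the pitch of the helical layer structure. The sketch below uses assumed pitch values and an assumed refractive index purely to illustrate the red-to-blue shift as the pitch contracts on heating:

```python
# Peak reflected wavelength of a twisted nematic phase: lambda = n_mean * pitch.
N_MEAN = 1.5  # assumed mean refractive index

def reflected_wavelength_nm(pitch_nm):
    return N_MEAN * pitch_nm

# Assumed pitches at three temperatures (pitch shrinks on heating).
for temp_c, pitch_nm in [(25, 430), (30, 370), (35, 310)]:
    wl = reflected_wavelength_nm(pitch_nm)
    print(f"{temp_c} C: pitch {pitch_nm} nm -> reflects ~{wl:.0f} nm")
```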
Some such materials are cholesteryl nonanoate or cyanobiphenyls.
Mixtures with a 3–5 °C temperature span and ranges from about 17–23 °C to about 37–40 °C can be composed from varying proportions of cholesteryl oleyl carbonate, cholesteryl nonanoate, and cholesteryl benzoate. For example, the mass ratio of 65:25:10 yields a range of 17–23 °C, and 30:60:10 yields a range of 37–40 °C.
Liquid crystals used in dyes and inks often come microencapsulated, in the form of a suspension.
Liquid crystals are used in applications where the color change has to be accurately defined. They find applications in thermometers for room, refrigerator, aquarium, and medical use, and in indicators of the level of propane in tanks. A popular application for thermochromic liquid crystals is the mood ring.
Liquid crystals are difficult to work with and require specialized printing equipment. The material itself is also typically more expensive than alternative technologies. High temperatures, ultraviolet radiation, some chemicals and/or solvents have a negative impact on their lifespan.
Leuco dyes
Thermochromic dyes are based on mixtures of leuco dyes with other suitable chemicals, displaying a color change (usually between the colorless leuco form and the colored form) that depends upon temperature. The dyes are rarely applied on materials directly; they are usually in the form of microcapsules with the mixture sealed inside. An illustrative example is the Hypercolor fashion, where microcapsules with crystal violet lactone, weak acid, and a dissociable salt dissolved in dodecanol are applied to the fabric. When the solvent is solid, the dye exists in its lactone leuco form, while when the solvent melts, the salt dissociates, the pH inside the microcapsule lowers, the dye becomes protonated, its lactone ring opens, and its absorption spectrum shifts drastically, therefore it becomes deeply violet. In this case the apparent thermochromism is in fact halochromism.
The dyes most commonly used are spirolactones, fluorans, spiropyrans, and fulgides. The acids include bisphenol A, parabens, 1,2,3-triazole derivatives, and 4-hydroxycoumarin and act as proton donors, changing the dye molecule between its leuco form and its protonated colored form; stronger acids would make the change irreversible.
Leuco dyes have less accurate temperature response than liquid crystals. They are suitable for general indicators of approximate temperature ("too cool", "too hot", "about OK"), or for various novelty items. They are usually used in combination with some other pigment, producing a color change between the color of the base pigment and the color of the pigment combined with the color of the non-leuco form of the leuco dye. Organic leuco dyes are available for a wide span of activation temperatures and in a wide range of colors. The color change usually happens in a 3 °C (5.4 °F) interval.
Leuco dyes are used in applications where temperature response accuracy is not critical: e.g. novelties, bath toys, flying discs, and approximate temperature indicators for microwave-heated foods. Microencapsulation allows their use in a wide range of materials and products. The size of the microcapsules typically ranges between 3–5 μm (over 10 times larger than regular pigment particles), which requires some adjustments to printing and manufacturing processes.
An application of leuco dyes is in the Duracell battery state indicators. A layer of a leuco dye is applied on a resistive strip to indicate its heating, thus gauging the amount of current the battery is able to supply. The strip is triangular-shaped, changing its resistance along its length, therefore heating up a proportionally long segment with the amount of current flowing through it. The length of the segment above the threshold temperature for the leuco dye then becomes colored.
Exposure to ultraviolet radiation, solvents and high temperatures reduces the lifespan of leuco dyes. Temperatures above a certain threshold typically cause irreversible damage to leuco dyes; a time-limited exposure of some types to somewhat higher temperatures is allowed during manufacturing.
Thermochromic paints use liquid crystals or leuco dye technology. After absorbing a certain amount of light or heat, the crystallic or molecular structure of the pigment reversibly changes in such a way that it absorbs and emits light at a different wavelength than at lower temperatures. Thermochromic paints are seen quite often as a coating on coffee mugs, whereby once hot coffee is poured into the mugs, the thermochromic paint absorbs the heat and becomes colored or transparent, therefore changing the appearance of the mug. These are known as magic mugs or heat changing mugs. Another common example is the use of leuco dye in spoons used in ice cream parlors and frozen yogurt shops. Once dipped into the cold desserts, part of the spoon appears to change color.
Papers
Thermochromic papers are used for thermal printers. One example is paper impregnated with the solid mixture of a fluoran dye with octadecylphosphonic acid. This mixture is stable in the solid phase; however, when the octadecylphosphonic acid is melted, the dye undergoes a chemical reaction in the liquid phase, and assumes the protonated colored form. This state is then conserved when the matrix solidifies again, if the cooling process is fast enough. As the leuco form is more stable at lower temperatures and in the solid phase, the records on thermochromic papers slowly fade out over years.
Polymers
Thermochromism can appear in thermoplastics, duroplastics, gels or any kind of coating. The polymer itself, an embedded thermochromic additive, or a highly ordered structure built by the interaction of the polymer with an incorporated non-thermochromic additive can be the origin of the thermochromic effect. Furthermore, from the physical point of view, the origin of the thermochromic effect can be varied: it can arise from temperature-dependent changes in light reflection, absorption, and/or scattering properties. The application of thermochromic polymers for adaptive solar protection is of great interest. For instance, polymer films with tunable thermochromic nanoparticles, reflective or transparent to sunlight depending on the temperature, have been used to create windows that optimize to the weather. A function-by-design strategy, applied for example to the development of non-toxic thermochromic polymers, has come into focus in the last decade.
Inks
Thermochromic inks or dyes are temperature sensitive compounds, developed in the 1970s, that temporarily change color with exposure to heat. They come in two forms, liquid crystals and leuco dyes. Leuco dyes are easier to work with and allow for a greater range of applications. These applications include: flat thermometers, battery testers, clothing, and the indicator on bottles of maple syrup that change color when the syrup is warm. The thermometers are often used on the exterior of aquariums, or to obtain a body temperature via the forehead. Coors Light uses thermochromic ink on its cans, changing from white to blue to indicate the can is cold.
Inorganic materials
Virtually all inorganic compounds are thermochromic to some extent. Most examples however involve only subtle changes in color. For example, titanium dioxide, zinc sulfide and zinc oxide are white at room temperature but when heated change to yellow. Similarly indium(III) oxide is yellow and darkens to yellow-brown when heated. Lead(II) oxide exhibits a similar color change on heating. The color change is linked to changes in the electronic properties (energy levels, populations) of these materials.
More dramatic examples of thermochromism are found in materials that undergo phase transition or exhibit charge-transfer bands near the visible region. Examples include
Cuprous mercury iodide (Cu2[HgI4]) undergoes a phase transition at 67 °C, reversibly changing from a bright red solid material at low temperature to a dark brown solid at high temperature, with intermediate red-purple states. The colors are intense and seem to be caused by Cu(I)–Hg(II) charge-transfer complexes.
Silver mercury iodide (Ag2[HgI4]) is yellow at low temperatures and orange above 47–51 °C, with intermediate yellow-orange states. The colors are intense and seem to be caused by Ag(I)–Hg(II) charge-transfer complexes.
Mercury(II) iodide is a crystalline material which at 126 °C undergoes reversible phase transition from red alpha phase to pale yellow beta phase.
Bis(dimethylammonium) tetrachloronickelate(II) ([(CH3)2NH2]2NiCl4) is a raspberry-red compound, which becomes blue at about 110 °C. On cooling, the compound becomes a light yellow metastable phase, which over 2–3 weeks turns back into the original red. Many other tetrachloronickelates are also thermochromic.
Bis(diethylammonium) tetrachlorocuprate(II) ([(CH3CH2)2NH2]2CuCl4) is a bright green solid material, which at 52–53 °C reversibly changes color to yellow. The color change is caused by relaxation of the hydrogen bonds and subsequent change of geometry of the copper-chlorine complex from planar to deformed tetrahedral, with appropriate change of arrangement of the copper atom's d-orbitals. There is no stable intermediate, the crystals are either green or yellow.
Chromium(III) oxide and aluminium(III) oxide in a 1:9 ratio is red at room temperature and grey at 400 °C, due to changes in its crystal field.
Vanadium dioxide has been investigated for use as a "spectrally-selective" window coating to block infrared transmission and reduce the loss of building interior heat through windows. This material behaves like a semiconductor at lower temperatures, allowing more transmission, and like a conductor at higher temperatures, providing much greater reflectivity. The phase change between transparent semiconductive and reflective conductive phase occurs at 68 °C; doping the material with 1.9% of tungsten lowers the transition temperature to 29 °C.
Other thermochromic solid semiconductor materials include
CdxZn1−xSySe1−y (x = 0.5–1, y = 0.5–1),
ZnxCdyHg1−x−yOaSbSecTe1−a−b−c (x = 0–0.5, y = 0.5–1, a = 0–0.5, b = 0.5–1, c = 0–0.5),
HgxCdyZn1−x−ySbSe1−b (x = 0–1, y = 0–1, b = 0.5–1).
Many tetraorganodiarsine, -distibine, and -dibismuthine compounds are strongly thermochromic. The color changes arise because they form van der Waals chains when cold, and the intermolecular spacing is sufficiently short for orbital overlap. The energy levels of the resulting bands then depend on the intermolecular distance, which varies with temperature.
Some minerals are thermochromic as well; for example some chromium-rich pyropes, normally reddish-purplish, become green when heated to about 80 °C.
Irreversible inorganic thermochromes
Some materials change color irreversibly. These can be used for e.g. laser marking of materials.
Copper(I) iodide is a solid pale tan material transforming at 60–62 °C to an orange color.
Ammonium metavanadate is a white material, turning to brown at 150 °C and then to black at 170 °C.
Manganese violet (Mn(NH4)2P2O7) is a violet material, a popular pigment, turning to white at 400 °C.
Applications in buildings
Thermochromic materials, in the form of coatings, can be applied in buildings as a technique of passive energy retrofit. Thermochromic coatings are characterized as active, dynamic and adaptive materials that can adjust their optical properties according to external stimuli, usually temperature. Thermochromic coatings modulate their reflectance as a function of their temperature, making them an appropriate solution for combating cooling loads without diminishing the building's thermal performance during the winter period.
Thermochromic materials are categorized into two subgroups, dye-based and non-dye-based thermochromic materials. However, the only class of dye-based thermochromic materials that is widely commercially available and has been applied and tested in buildings is the leuco dyes.
References
Inks
Chromism
Heat transfer | Thermochromism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,239 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Spectrum (physical sciences)",
"Chromism",
"Materials science",
"Thermodynamics",
"Smart materials",
"Spectroscopy",
"Thermochromism"
] |
1,585,155 | https://en.wikipedia.org/wiki/Weierstrass%20factorization%20theorem | In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root.
The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
Motivation
It is clear that any finite set of points in the complex plane has an associated polynomial whose zeroes are precisely at the points of that set. The converse is a consequence of the fundamental theorem of algebra: any polynomial function $p(z)$ in the complex plane has a factorization
$$p(z) = a \prod_n (z - c_n),$$
where $a$ is a non-zero constant and $\{c_n\}$ is the set of zeroes of $p(z)$.
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of additional terms in the product is demonstrated when one considers $\prod_n (z - c_n)$ where the sequence $\{c_n\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that for each $z$, the factors $(z - c_n)$ must approach $1$ as $n \to \infty$. So it stands to reason that one should seek a function that could be $0$ at a prescribed point, yet remain near $1$ when not at that point, and furthermore introduce no more zeroes than those prescribed.
Weierstrass' elementary factors have these properties and serve the same purpose as the factors above.
The elementary factors
Consider the functions of the form $\exp\!\left(-\tfrac{z^{n+1}}{n+1}\right)$ for $0 \le z \le 1$. At $z = 0$, they evaluate to $1$ and have a flat slope at order up to $n$. Right after $z = 1$, they sharply fall to some small positive value. In contrast, consider the function $1 - z$, which has no flat slope but, at $z = 1$, evaluates to exactly zero. Also note that for $|z| < 1$,
$$1 - z = \exp(\ln(1 - z)) = \exp\!\left(-z - \tfrac{z^2}{2} - \tfrac{z^3}{3} - \cdots\right).$$
Figure: plot of the first five Weierstrass factors $E_n(x)$ for $n = 0, \dots, 4$ and $x$ in the interval $[-1, 1]$.
The elementary factors, also referred to as primary factors, are functions that combine the properties of zero slope and zero value (see graphic):
$$E_0(z) = 1 - z, \qquad E_n(z) = (1 - z)\exp\!\left(\frac{z}{1} + \frac{z^2}{2} + \cdots + \frac{z^n}{n}\right) \quad \text{for } n \ge 1.$$
For $|z| < 1$ and $n \ge 0$, one may express it as
$$E_n(z) = \exp\!\left(-\sum_{k=n+1}^{\infty} \frac{z^k}{k}\right),$$
and one can read off how those properties are enforced.
The utility of the elementary factors lies in the following lemma:
Lemma (15.8, Rudin): for $|z| \le 1$ and $n \ge 0$,
$$|1 - E_n(z)| \le |z|^{n+1}.$$
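A quick numerical sanity check of the lemma is easy to run; the sampling grid below is arbitrary:

```python
# Check |1 - E_n(z)| <= |z|**(n+1) on sample points of the closed unit disk.
import cmath

def E(z, n):
    """Weierstrass elementary factor E_n(z)."""
    if n == 0:
        return 1 - z
    return (1 - z) * cmath.exp(sum(z**k / k for k in range(1, n + 1)))

for n in range(5):
    worst = float("-inf")
    for ri in range(40):                      # radii from 0 to 1
        for th in range(40):                  # angles around the circle
            z = (ri / 39) * cmath.exp(2j * cmath.pi * th / 40)
            worst = max(worst, abs(1 - E(z, n)) - abs(z) ** (n + 1))
    print(f"n={n}: max excess over the bound {worst:.2e}")  # never positive
```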
The two forms of the theorem
Existence of entire function with specified zeroes
Let $\{a_n\}$ be a sequence of non-zero complex numbers such that $|a_n| \to \infty$.
If $\{p_n\}$ is any sequence of nonnegative integers such that for all $r > 0$,
$$\sum_{n=1}^{\infty} \left(\frac{r}{|a_n|}\right)^{1 + p_n} < \infty,$$
then the function
$$f(z) = \prod_{n=1}^{\infty} E_{p_n}\!\left(\frac{z}{a_n}\right)$$
is entire with zeros only at points $a_n$. If a number $z_0$ occurs in the sequence $\{a_n\}$ exactly $m$ times, then the function $f$ has a zero at $z = z_0$ of multiplicity $m$.
The sequence $\{p_n\}$ in the statement of the theorem always exists. For example, we could always take $p_n = n$ and have the convergence. Such a sequence is not unique: changing it at a finite number of positions, or taking another sequence $p'_n \ge p_n$, will not break the convergence.
The theorem generalizes to the following: sequences in open subsets (and hence regions) of the Riemann sphere have associated functions that are holomorphic in those subsets and have zeroes at the points of the sequence.
Also the case given by the fundamental theorem of algebra is incorporated here. If the sequence $\{a_n\}$ is finite then we can take $p_n = 0$ and obtain: $f(z) = c \prod_{n} \left(1 - \tfrac{z}{a_n}\right)$.
The Weierstrass factorization theorem
Let $f$ be an entire function, and let $\{a_n\}$ be the non-zero zeros of $f$ repeated according to multiplicity; suppose also that $f$ has a zero at $z = 0$ of order $m \ge 0$.
Then there exists an entire function $g$ and a sequence of integers $\{p_n\}$ such that
$$f(z) = z^m e^{g(z)} \prod_{n=1}^{\infty} E_{p_n}\!\left(\frac{z}{a_n}\right).$$
Examples of factorization
The trigonometric functions sine and cosine have the factorizations
$$\sin \pi z = \pi z \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2}\right), \qquad \cos \pi z = \prod_{n=1}^{\infty} \left(1 - \frac{4z^2}{(2n-1)^2}\right),$$
while the gamma function has factorization
$$\frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^{\infty} \left(1 + \frac{z}{n}\right) e^{-z/n},$$
where $\gamma$ is the Euler–Mascheroni constant. The cosine identity can be seen as special case of
$$\frac{1}{\Gamma(s - z)\,\Gamma(s + z)} = \frac{1}{\Gamma(s)^2} \prod_{n=0}^{\infty} \left(1 - \frac{z^2}{(n + s)^2}\right)$$
for $s = \tfrac{1}{2}$.
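The sine factorization can be checked numerically with truncated partial products (the truncation length is arbitrary, and the product converges slowly, so agreement is only to a few decimal places):

```python
# Truncated Weierstrass product for sin(pi z) versus math.sin.
import math

def sin_product(z, terms=10000):
    prod = math.pi * z
    for n in range(1, terms + 1):
        prod *= 1 - z * z / (n * n)
    return prod

for z in [0.25, 0.5, 1.5, 2.75]:
    print(f"z={z}: product {sin_product(z):+.6f}, "
          f"sin(pi z) {math.sin(math.pi * z):+.6f}")
```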
Hadamard factorization theorem
A special case of the Weierstraß factorization theorem occurs for entire functions of finite order. In this case the $p_n$ can be taken independent of $n$ and the function $g(z)$ is a polynomial. Thus
$$f(z) = z^m e^{P(z)} \prod_{n=1}^{\infty} E_p\!\left(\frac{z}{a_n}\right),$$
where $a_n$ are those roots of $f$ that are not zero ($a_n \neq 0$), $m$ is the order of the zero of $f$ at $z = 0$ (the case $m = 0$ being taken to mean $f(0) \neq 0$), $P$ a polynomial (whose degree we shall call $q$), and $p$ is the smallest non-negative integer such that the series
$$\sum_{n=1}^{\infty} \frac{1}{|a_n|^{p+1}}$$
converges. This is called Hadamard's canonical representation. The non-negative integer $g = \max\{p, q\}$ is called the genus of the entire function $f$. The order $\rho$ of $f$ satisfies
$$g \le \rho \le g + 1.$$
In other words: If the order $\rho$ is not an integer, then $g = [\rho]$ is the integer part of $\rho$. If the order is a positive integer, then there are two possibilities: $g = \rho - 1$ or $g = \rho$.
For example, $\sin$, $\cos$ and $\exp$ are entire functions of genus $g = \rho = 1$.
See also
Mittag-Leffler's theorem
Wallis product, which can be derived from this theorem applied to the sine function
Blaschke product
Notes
External links
Theorems in complex analysis | Weierstrass factorization theorem | [
"Mathematics"
] | 1,146 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
1,585,226 | https://en.wikipedia.org/wiki/Young%20symmetrizer | In mathematics, a Young symmetrizer is an element of the group algebra of the symmetric group whose natural action on tensor products of a complex vector space has as image an irreducible representation of the group of invertible linear transformations . All irreducible representations of are thus obtained. It is constructed from the action of on the vector space by permutation of the different factors (or equivalently, from the permutation of the indices of the tensor components). A similar construction works over any field but in characteristic p (in particular over finite fields) the image need not be an irreducible representation. The Young symmetrizers also act on the vector space of functions on Young tableau and the resulting representations are called Specht modules which again construct all complex irreducible representations of the symmetric group while the analogous construction in prime characteristic need not be irreducible. The Young symmetrizer is named after British mathematician Alfred Young.
Definition
Given a finite symmetric group Sn and a specific Young tableau λ corresponding to a numbered partition of n, consider the action of $S_n$ given by permuting the boxes of λ. Define two permutation subgroups $P_\lambda$ and $Q_\lambda$ of Sn as follows:
$$P_\lambda = \{ g \in S_n : g \text{ preserves each row of } \lambda \}$$
and
$$Q_\lambda = \{ g \in S_n : g \text{ preserves each column of } \lambda \}.$$
Corresponding to these two subgroups, define two vectors in the group algebra $\mathbb{C}[S_n]$ as
$$a_\lambda = \sum_{g \in P_\lambda} e_g$$
and
$$b_\lambda = \sum_{g \in Q_\lambda} \operatorname{sgn}(g)\, e_g,$$
where $e_g$ is the unit vector corresponding to $g$, and $\operatorname{sgn}(g)$ is the sign of the permutation. The product
$$c_\lambda := a_\lambda b_\lambda = \sum_{g \in P_\lambda,\, h \in Q_\lambda} \operatorname{sgn}(h)\, e_{gh}$$
is the Young symmetrizer corresponding to the Young tableau λ. Each Young symmetrizer corresponds to an irreducible representation of the symmetric group, and every irreducible representation can be obtained from a corresponding Young symmetrizer. (If we replace the complex numbers by more general fields the corresponding representations will not be irreducible in general.)
Construction
Let $V$ be any vector space over the complex numbers. Consider then the tensor product vector space $V^{\otimes n} = V \otimes V \otimes \cdots \otimes V$ ($n$ times). Let $S_n$ act on this tensor product space by permuting the indices. One then has a natural group algebra representation $\mathbb{C}[S_n] \to \operatorname{End}(V^{\otimes n})$ (i.e. $V^{\otimes n}$ is a right $\mathbb{C}[S_n]$-module).
Given a partition λ of n, so that $n = \lambda_1 + \lambda_2 + \cdots + \lambda_j$, then the image of $a_\lambda$ is
$$\operatorname{Im}(a_\lambda) := V^{\otimes n} a_\lambda \cong \operatorname{Sym}^{\lambda_1} V \otimes \operatorname{Sym}^{\lambda_2} V \otimes \cdots \otimes \operatorname{Sym}^{\lambda_j} V.$$
For instance, if $n = 4$, and $\lambda = (2, 2)$, with the canonical Young tableau $\{\{1, 2\}, \{3, 4\}\}$. Then the corresponding $a_\lambda$ is given by
$$a_\lambda = e_{\mathrm{id}} + e_{(1\,2)} + e_{(3\,4)} + e_{(1\,2)(3\,4)}.$$
For any product vector $v_{1,2,3,4} := v_1 \otimes v_2 \otimes v_3 \otimes v_4$ of $V^{\otimes 4}$ we then have
$$v_{1,2,3,4} a_\lambda = v_{1,2,3,4} + v_{2,1,3,4} + v_{1,2,4,3} + v_{2,1,4,3} = (v_1 \otimes v_2 + v_2 \otimes v_1) \otimes (v_3 \otimes v_4 + v_4 \otimes v_3).$$
Thus the set of all $v_{1,2,3,4} a_\lambda$ clearly spans $\operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$, and since the $v_{1,2,3,4}$ span $V^{\otimes 4}$ we obtain $V^{\otimes 4} a_\lambda = \operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$, where we wrote informally $V^{\otimes 4} a_\lambda \equiv \operatorname{Im}(a_\lambda)$.
Notice also how this construction can be reduced to the construction for $n = 2$.
Let $\mathbf{1} \in \operatorname{End}(V \otimes V)$ be the identity operator and $S \in \operatorname{End}(V \otimes V)$ the swap operator defined by $S(v \otimes w) = w \otimes v$, thus $\mathbf{1} = e_{\mathrm{id}}$ and $S = e_{(1\,2)}$. We have that
$$e_{\mathrm{id}} + e_{(1\,2)}$$
maps into $\operatorname{Sym}^2 V$, more precisely
$$\frac{1}{2}\left(\mathbf{1} + S\right)$$
is the projector onto $\operatorname{Sym}^2 V$.
Then
$$\frac{1}{2}\left(\mathbf{1} - S\right),$$
which is the projector onto $\Lambda^2 V$.
The image of $b_\lambda$ is
$$\operatorname{Im}(b_\lambda) \cong \Lambda^{\mu_1} V \otimes \Lambda^{\mu_2} V \otimes \cdots \otimes \Lambda^{\mu_k} V,$$
where μ is the conjugate partition to λ. Here, $\operatorname{Sym}^i V$ and $\Lambda^i V$ are the symmetric and alternating tensor product spaces.
The image of $c_\lambda$ in $\mathbb{C}[S_n]$ is an irreducible representation of Sn, called a Specht module. We write
$$V_\lambda = \operatorname{Im}(c_\lambda)$$
for the irreducible representation.
Some scalar multiple of $c_\lambda$ is idempotent, that is $c_\lambda^2 = \alpha_\lambda c_\lambda$ for some rational number $\alpha_\lambda \in \mathbb{Q}$. Specifically, one finds $\alpha_\lambda = n! / \dim V_\lambda$. In particular, this implies that representations of the symmetric group can be defined over the rational numbers; that is, over the rational group algebra $\mathbb{Q}[S_n]$.
Consider, for example, S3 and the partition (2,1). Then one has
$$c_{(2,1)} = e_{\mathrm{id}} + e_{(1\,2)} - e_{(1\,3)} - e_{(1\,3\,2)}.$$
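The idempotency relation can be checked directly for this example: here n! = 6 and the irreducible representation for (2,1) is two-dimensional, so one expects c² = 3c. The sketch below assumes the convention that a product gh acts by applying h first; with the opposite convention the computation is analogous.

```python
# Verify c^2 = 3c for the Young symmetrizer of the partition (2,1) of S3.
# Group algebra elements are dicts: permutation tuple -> coefficient.

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p (0-indexed tuples)."""
    return tuple(p[q[i]] for i in range(len(q)))

def multiply(x, y):
    out = {}
    for g, cg in x.items():
        for h, ch in y.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return {g: c for g, c in out.items() if c != 0}

e = (0, 1, 2)      # identity
s12 = (1, 0, 2)    # transposition (1 2)
s13 = (2, 1, 0)    # transposition (1 3)

a = {e: 1, s12: 1}    # row group {id, (12)} of the tableau {{1, 2}, {3}}
b = {e: 1, s13: -1}   # signed column group {id, (13)}
c = multiply(a, b)    # the Young symmetrizer c_(2,1)

print("c =", c)
print("c^2 == 3c:", multiply(c, c) == {g: 3 * v for g, v in c.items()})
```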
If $V$ is a complex vector space, then the images of $c_\lambda$ on spaces $V^{\otimes n}$ provide essentially all the finite-dimensional irreducible representations of GL(V).
See also
Representation theory of the symmetric group
Notes
References
William Fulton. Young Tableaux, with Applications to Representation Theory and Geometry. Cambridge University Press, 1997.
Lecture 4 of
Bruce E. Sagan. The Symmetric Group. Springer, 2001.
Representation theory of finite groups
Symmetric functions
Permutations | Young symmetrizer | [
"Physics",
"Mathematics"
] | 746 | [
"Functions and mappings",
"Permutations",
"Algebra",
"Mathematical objects",
"Combinatorics",
"Symmetric functions",
"Mathematical relations",
"Symmetry"
] |
1,585,348 | https://en.wikipedia.org/wiki/RL%20circuit | A resistor–inductor circuit (RL circuit), or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor, either in series driven by a voltage source or in parallel driven by a current source. It is one of the simplest analogue infinite impulse response electronic filters.
Introduction
The fundamental passive linear circuit elements are the resistor (R), capacitor (C) and inductor (L). These circuit elements can be combined to form an electrical circuit in four distinct ways: the RC circuit, the RL circuit, the LC circuit and the RLC circuit, with the abbreviations indicating which components are used. These circuits exhibit important types of behaviour that are fundamental to analogue electronics. In particular, they are able to act as passive filters.
In practice, however, capacitors (and RC circuits) are usually preferred to inductors since they can be more easily manufactured and are generally physically smaller, particularly for higher values of components.
Both RC and RL circuits form a single-pole filter. Depending on whether the reactive element (C or L) is in series with the load, or parallel with the load will dictate whether the filter is low-pass or high-pass.
Frequently RL circuits are used as DC power supplies for RF amplifiers, where the inductor is used to pass DC bias current and block the RF getting back into the power supply.
Complex impedance
The complex impedance $Z_L$ (in ohms) of an inductor with inductance $L$ (in henries) is
$$Z_L = Ls.$$
The complex frequency $s$ is a complex number,
$$s = \sigma + j\omega,$$
where
$j$ represents the imaginary unit: $j^2 = -1$,
$\sigma$ is the exponential decay constant (in nepers per second), and
$\omega$ is the angular frequency (in radians per second).
Eigenfunctions
The complex-valued eigenfunctions of any linear time-invariant (LTI) system are of the following forms:
$$V(t) = A e^{st} = A e^{(\sigma + j\omega)t}, \qquad A = A_0 e^{j\phi}.$$
From Euler's formula, the real-part of these eigenfunctions are exponentially-decaying sinusoids:
$$v(t) = \operatorname{Re}\{V(t)\} = A_0 e^{\sigma t} \cos(\omega t + \phi).$$
Sinusoidal steady state
Sinusoidal steady state is a special case in which the input voltage consists of a pure sinusoid (with no exponential decay). As a result, $\sigma = 0$ and
$$s = j\omega,$$
and the evaluation of $Z_L$ becomes
$$Z_L = j\omega L.$$
Series circuit
By viewing the circuit as a voltage divider, we see that the voltage across the inductor is:
$$V_L(s) = \frac{Ls}{R + Ls} V_{\mathrm{in}}(s),$$
and the voltage across the resistor is:
$$V_R(s) = \frac{R}{R + Ls} V_{\mathrm{in}}(s).$$
Current
The current in the circuit is the same everywhere since the circuit is in series:
$$I(s) = \frac{V_{\mathrm{in}}(s)}{R + Ls}.$$
Transfer functions
The transfer function to the inductor voltage is
$$H_L(s) = \frac{V_L(s)}{V_{\mathrm{in}}(s)} = \frac{Ls}{R + Ls} = G_L e^{j\phi_L}.$$
Similarly, the transfer function to the resistor voltage is
$$H_R(s) = \frac{V_R(s)}{V_{\mathrm{in}}(s)} = \frac{R}{R + Ls} = G_R e^{j\phi_R}.$$
The transfer function, to the current, is
$$H_I(s) = \frac{I(s)}{V_{\mathrm{in}}(s)} = \frac{1}{R + Ls}.$$
Poles and zeros
The transfer functions have a single pole located at
$$s = -\frac{R}{L}.$$
In addition, the transfer function for the inductor has a zero located at the origin.
Gain and phase angle
The gains across the two components are found by taking the magnitudes of the above expressions:
$$G_L = \left|H_L(j\omega)\right| = \frac{\omega L}{\sqrt{R^2 + \omega^2 L^2}}$$
and
$$G_R = \left|H_R(j\omega)\right| = \frac{R}{\sqrt{R^2 + \omega^2 L^2}},$$
and the phase angles are:
$$\phi_L = \arctan\!\left(\frac{R}{\omega L}\right)$$
and
$$\phi_R = -\arctan\!\left(\frac{\omega L}{R}\right).$$
Phasor notation
These expressions together may be substituted into the usual expression for the phasor representing the output:
$$V_L = G_L V_{\mathrm{in}} e^{j\phi_L}, \qquad V_R = G_R V_{\mathrm{in}} e^{j\phi_R}.$$
Impulse response
The impulse response for each voltage is the inverse Laplace transform of the corresponding transfer function. It represents the response of the circuit to an input voltage consisting of an impulse or Dirac delta function.
The impulse response for the inductor voltage is
$$h_L(t) = \delta(t) - \frac{R}{L} e^{-t\frac{R}{L}} u(t) = \delta(t) - \frac{1}{\tau} e^{-\frac{t}{\tau}} u(t),$$
where $u(t)$ is the Heaviside step function and $\tau = \frac{L}{R}$ is the time constant.
Similarly, the impulse response for the resistor voltage is
$$h_R(t) = \frac{R}{L} e^{-t\frac{R}{L}} u(t) = \frac{1}{\tau} e^{-\frac{t}{\tau}} u(t).$$
Zero-input response
The zero-input response (ZIR), also called the natural response, of an RL circuit describes the behavior of the circuit after it has reached constant voltages and currents and is disconnected from any power source. It is called the zero-input response because it requires no input.
The ZIR of an RL circuit is:
$$i(t) = i(0) e^{-\frac{R}{L} t}.$$
Frequency domain considerations
These are frequency domain expressions. Analysis of them will show which frequencies the circuits (or filters) pass and reject. This analysis rests on a consideration of what happens to these gains as the frequency becomes very large and very small.
As $\omega \to \infty$: $G_L \to 1$ and $G_R \to 0$.
As $\omega \to 0$: $G_L \to 0$ and $G_R \to 1$.
This shows that, if the output is taken across the inductor, high frequencies are passed and low frequencies are attenuated (rejected). Thus, the circuit behaves as a high-pass filter. If, though, the output is taken across the resistor, high frequencies are rejected and low frequencies are passed. In this configuration, the circuit behaves as a low-pass filter. Compare this with the behaviour of the resistor output in an RC circuit, where the reverse is the case.
The range of frequencies that the filter passes is called its bandwidth. The point at which the filter attenuates the signal to half its unfiltered power is termed its cutoff frequency. This requires that the gain of the circuit be reduced to
$$G_L = G_R = \frac{1}{\sqrt{2}}.$$
Solving the above equation yields
$$\omega_{\mathrm{c}} = \frac{R}{L} \ \text{rad/s} \qquad \text{or} \qquad f_{\mathrm{c}} = \frac{R}{2\pi L} \ \text{Hz},$$
which is the frequency that the filter will attenuate to half its original power.
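A short numeric sketch of the cutoff relation, with assumed component values:

```python
# Gains of the series RL circuit at the cutoff frequency w_c = R/L.
import math

R, L = 100.0, 0.01          # ohms, henries (assumed values)
w_c = R / L                 # cutoff angular frequency, rad/s

def gain_R(w):              # low-pass gain (output across the resistor)
    return R / math.sqrt(R**2 + (w * L)**2)

def gain_L(w):              # high-pass gain (output across the inductor)
    return w * L / math.sqrt(R**2 + (w * L)**2)

print(f"f_c = {w_c / (2 * math.pi):.1f} Hz")
print(f"gain_R(w_c) = {gain_R(w_c):.4f}  (1/sqrt(2) = {1 / math.sqrt(2):.4f})")
print(f"gain_L(w_c) = {gain_L(w_c):.4f}")
```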
Clearly, the phases also depend on frequency, although this effect is less interesting generally than the gain variations.
As $\omega \to 0$: $\phi_L \to 90^\circ$ and $\phi_R \to 0^\circ$.
As $\omega \to \infty$: $\phi_L \to 0^\circ$ and $\phi_R \to -90^\circ$.
So at DC (0 Hz), the resistor voltage is in phase with the signal voltage while the inductor voltage leads it by 90°. As frequency increases, the resistor voltage comes to have a 90° lag relative to the signal and the inductor voltage comes to be in-phase with the signal.
Time domain considerations
This section relies on knowledge of $e$, the natural logarithmic constant.
The most straightforward way to derive the time domain behaviour is to use the Laplace transforms of the expressions for $V_L$ and $V_R$ given above. This effectively transforms $j\omega \to s$. Assuming a step input (i.e., $V_{\mathrm{in}} = 0$ before $t = 0$ and then $V_{\mathrm{in}} = V$ afterwards):
$$V_{\mathrm{in}}(s) = V \cdot \frac{1}{s}, \qquad V_L(s) = V \cdot \frac{sL}{R + sL} \cdot \frac{1}{s}, \qquad V_R(s) = V \cdot \frac{R}{R + sL} \cdot \frac{1}{s}.$$
Partial fractions expansions and the inverse Laplace transform yield:
$$V_L(t) = V e^{-t\frac{R}{L}}, \qquad V_R(t) = V \left(1 - e^{-t\frac{R}{L}}\right).$$
Thus, the voltage across the inductor tends towards 0 as time passes, while the voltage across the resistor tends towards , as shown in the figures. This is in keeping with the intuitive point that the inductor will only have a voltage across as long as the current in the circuit is changing — as the circuit reaches its steady-state, there is no further current change and ultimately no inductor voltage.
These equations show that a series RL circuit has a time constant, usually denoted $\tau = \frac{L}{R}$, being the time it takes the voltage across the component to either fall (across the inductor) or rise (across the resistor) to within $\frac{1}{e}$ of its final value. That is, $\tau$ is the time it takes $V_L$ to reach $V\left(\frac{1}{e}\right)$ and $V_R$ to reach $V\left(1 - \frac{1}{e}\right)$.
The rate of change is a fractional $\left(1 - \frac{1}{e}\right)$ per $\tau$. Thus, in going from $t = N\tau$ to $t = (N+1)\tau$, the voltage will have moved about 63% of the way from its level at $t = N\tau$ toward its final value. So the voltage across the inductor will have dropped to about 37% after $\tau$, and essentially to zero (0.7%) after about $5\tau$. Kirchhoff's voltage law implies that the voltage across the resistor will rise at the same rate. When the voltage source is then replaced with a short circuit, the voltage across the resistor drops exponentially with $t$ from $V$ towards 0. The resistor will be discharged to about 37% after $\tau$, and essentially fully discharged (0.7%) after about $5\tau$. Note that the current, $i$, in the circuit behaves as the voltage across the resistor does, via Ohm's Law.
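The 63%, 37% and 5τ figures quoted above follow directly from the exponential; a minimal check:

```python
# Fractions of the final value after whole multiples of the time constant,
# for V_R(t) = V (1 - exp(-t / tau)) and V_L(t) = V exp(-t / tau).
import math

for n in [1, 2, 3, 5]:
    rising = 1 - math.exp(-n)    # V_R / V after n time constants
    falling = math.exp(-n)       # V_L / V after n time constants
    print(f"t = {n} tau: V_R at {rising:6.1%}, V_L at {falling:6.1%}")
# t = 1 tau gives about 63.2% / 36.8%; t = 5 tau gives 99.3% / 0.7%.
```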
The delay in the rise or fall time of the circuit is in this case caused by the back-EMF from the inductor which, as the current flowing through it tries to change, prevents the current (and hence the voltage across the resistor) from rising or falling much faster than the time-constant of the circuit. Since all wires have some self-inductance and resistance, all circuits have a time constant. As a result, when the power supply is switched on, the current does not instantaneously reach its steady-state value, . The rise instead takes several time-constants to complete. If this were not the case, and the current were to reach steady-state immediately, extremely strong inductive electric fields would be generated by the sharp change in the magnetic field — this would lead to breakdown of the air in the circuit and electric arcing, probably damaging components (and users).
These results may also be derived by solving the differential equation describing the circuit:
$$V_{\mathrm{in}} = L \frac{di}{dt} + iR \qquad \text{and} \qquad V_L = L \frac{di}{dt}.$$
The first equation is solved by using an integrating factor and yields the current, which must be differentiated to give $V_L$; the second equation is straightforward. The solutions are exactly the same as those obtained via Laplace transforms.
Short circuit equation
For short circuit evaluation, the RL circuit is considered. The more general equation is:
$$v_{\mathrm{in}}(t) = v_L(t) + v_R(t) = L\frac{di}{dt} + Ri(t).$$
With initial condition:
$$i(0) = i_0.$$
Which can be solved by Laplace transform:
$$V_{\mathrm{in}}(s) = sL\, I(s) - L i_0 + R\, I(s).$$
Thus:
$$I(s) = \frac{L i_0 + V_{\mathrm{in}}(s)}{sL + R}.$$
Then the inverse transform returns:
$$i(t) = i_0 e^{-\frac{R}{L}t} + \frac{1}{L}\int_0^t e^{-\frac{R}{L}(t - \tau')}\, v_{\mathrm{in}}(\tau')\, d\tau'.$$
In case the source voltage is a Heaviside step function (DC):
$$v_{\mathrm{in}}(t) = V u(t),$$
this returns:
$$i(t) = i_0 e^{-\frac{R}{L}t} + \frac{V}{R}\left(1 - e^{-\frac{R}{L}t}\right).$$
In case the source voltage is a sinusoidal function (AC):
$$v_{\mathrm{in}}(t) = V \sin(\omega t),$$
this returns:
$$i(t) = i_0 e^{-\frac{R}{L}t} + \frac{V}{R^2 + \omega^2 L^2}\left(R\sin(\omega t) - \omega L\cos(\omega t) + \omega L\, e^{-\frac{R}{L}t}\right).$$
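The AC result can be cross-checked by integrating the circuit equation numerically; the component values below are assumed, and forward Euler with a small step is used for simplicity:

```python
# Integrate L di/dt + R i = V sin(w t), i(0) = 0, and compare with the
# closed-form solution above. Component values are assumed.
import math

R, L, V, w = 10.0, 0.05, 5.0, 2 * math.pi * 50   # ohms, H, volts, rad/s

def closed_form(t):
    k = V / (R**2 + (w * L)**2)
    return k * (R * math.sin(w * t) - w * L * math.cos(w * t)
                + w * L * math.exp(-R * t / L))

dt, i, t = 1e-6, 0.0, 0.0
while t < 0.02:                                   # one 50 Hz period
    i += dt * (V * math.sin(w * t) - R * i) / L   # forward Euler step
    t += dt

print(f"numeric i(t)     = {i:.6f} A")
print(f"closed-form i(t) = {closed_form(t):.6f} A")
```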
Parallel circuit
When both the resistor and the inductor are connected in parallel connection and supplied through a voltage source, this is known as a RL parallel circuit. The parallel RL circuit is generally of less interest than the series circuit unless fed by a current source. This is largely because the output voltage () is equal to the input voltage (); as a result, this circuit does not act as a filter for a voltage input signal.
With complex impedances:
$$I_R = \frac{V_{\mathrm{in}}}{R} \qquad \text{and} \qquad I_L = \frac{V_{\mathrm{in}}}{j\omega L} = -\frac{j V_{\mathrm{in}}}{\omega L}.$$
This shows that the inductor current lags the resistor (and source) current by 90°.
The parallel circuit is seen on the output of many amplifier circuits, and is used to isolate the amplifier from capacitive loading effects at high frequencies. Because of the phase shift introduced by capacitance, some amplifiers become unstable at very high frequencies, and tend to oscillate. This affects sound quality and component life, especially the transistors.
See also
LC circuit
RC circuit
RLC circuit
Electrical network
List of electronics topics
References
Analog circuits
Electronic filter topology | RL circuit | [
"Engineering"
] | 2,053 | [
"Analog circuits",
"Electronic engineering"
] |
1,585,648 | https://en.wikipedia.org/wiki/Metabolic%20engineering | Metabolic engineering is the practice of optimizing genetic and regulatory processes within cells to increase the cell's production of a certain substance. These processes are chemical networks that use a series of biochemical reactions and enzymes that allow cells to convert raw materials into molecules necessary for the cell's survival. Metabolic engineering specifically seeks to mathematically model these networks, calculate a yield of useful products, and pinpoint parts of the network that constrain the production of these products. Genetic engineering techniques can then be used to modify the network in order to relieve these constraints. Once again this modified network can be modeled to calculate the new product yield.
The ultimate goal of metabolic engineering is to be able to use these organisms to produce valuable substances on an industrial scale in a cost-effective manner. Current examples include producing beer, wine, cheese, pharmaceuticals, and other biotechnology products. Another possible area of use is the development of oil crops whose composition has been modified to improve their nutritional value. Some of the common strategies used for metabolic engineering are (1) overexpressing the gene encoding the rate-limiting enzyme of the biosynthetic pathway, (2) blocking the competing metabolic pathways, (3) heterologous gene expression, and (4) enzyme engineering.
Since cells use these metabolic networks for their survival, changes can have drastic effects on the cells' viability. Therefore, trade-offs in metabolic engineering arise between the cells ability to produce the desired substance and its natural survival needs. Therefore, instead of directly deleting and/or overexpressing the genes that encode for metabolic enzymes, the current focus is to target the regulatory networks in a cell to efficiently engineer the metabolism.
History and applications
In the past, to increase the productivity of a desired metabolite, a microorganism was genetically modified by chemically induced mutation, and the mutant strain that overexpressed the desired metabolite was then chosen. However, one of the main problems with this technique was that the metabolic pathway for the production of that metabolite was not analyzed, and as a result, the constraints to production and relevant pathway enzymes to be modified were unknown.
In 1990s, a new technique called metabolic engineering emerged. This technique analyzes the metabolic pathway of a microorganism, and determines the constraints and their effects on the production of desired compounds. It then uses genetic engineering to relieve these constraints. Some examples of successful metabolic engineering are the following: (i) Identification of constraints to lysine production in Corynebacterium glutamicum and insertion of new genes to relieve these constraints to improve production (ii) Engineering of a new fatty acid biosynthesis pathway, called reversed beta oxidation pathway, that is more efficient than the native pathway in producing fatty acids and alcohols which can potentially be catalytically converted to chemicals and fuels (iii) Improved production of DAHP an aromatic metabolite produced by E. coli that is an intermediate in the production of aromatic amino acids. It was determined through metabolic flux analysis that the theoretical maximal yield of DAHP per glucose molecule utilized, was 3/7. This is because some of the carbon from glucose is lost as carbon dioxide, instead of being utilized to produce DAHP. Also, one of the metabolites (PEP, or phosphoenolpyruvate) that are used to produce DAHP, was being converted to pyruvate (PYR) to transport glucose into the cell, and therefore, was no longer available to produce DAHP. In order to relieve the shortage of PEP and increase yield, Patnaik et al. used genetic engineering on E. coli to introduce a reaction that converts PYR back to PEP. Thus, the PEP used to transport glucose into the cell is regenerated, and can be used to make DAHP. This resulted in a new theoretical maximal yield of 6/7 – double that of the native E. coli system.
At the industrial scale, metabolic engineering is becoming more convenient and cost-effective. According to the Biotechnology Industry Organization, "more than 50 biorefinery facilities are being built across North America to apply metabolic engineering to produce biofuels and chemicals from renewable biomass which can help reduce greenhouse gas emissions". Potential biofuels include short-chain alcohols and alkanes (to replace gasoline), fatty acid methyl esters and fatty alcohols (to replace diesel), and fatty acid- and isoprenoid-based biofuels (to replace diesel).
Metabolic engineering continues to evolve in efficiency and processes aided by breakthroughs in the field of synthetic biology and progress in understanding metabolite damage and its repair or preemption. Early metabolic engineering experiments showed that accumulation of reactive intermediates can limit flux in engineered pathways and be deleterious to host cells if matching damage control systems are missing or inadequate. Researchers in synthetic biology optimize genetic pathways, which in turn influence cellular metabolic outputs. Recent decreases in cost of synthesized DNA and developments in genetic circuits help to influence the ability of metabolic engineering to produce desired outputs.
Metabolic flux analysis
An analysis of metabolic flux can be found at Flux balance analysis
Setting up a metabolic pathway for analysis
The first step in the process is to identify a desired goal to achieve through the improvement or modification of an organism's metabolism. Reference books and online databases are used to research reactions and metabolic pathways that are able to produce this product or result. These databases contain copious genomic and chemical information including pathways for metabolism and other cellular processes. Using this research, an organism is chosen that will be used to create the desired product or result. Considerations that are taken into account when making this decision are how close the organism's metabolic pathway is to the desired pathway, the maintenance costs associated with the organism, and how easy it is to modify the pathway of the organism. Escherichia coli (E. coli) is widely used in metabolic engineering to synthesize a wide variety of products such as amino acids because it is relatively easy to maintain and modify. If the organism does not contain the complete pathway for the desired product or result, then genes that produce the missing enzymes must be incorporated into the organism.
Analyzing a metabolic pathway
The completed metabolic pathway is modeled mathematically to find the theoretical yield of the product or the reaction fluxes in the cell. A flux is the rate at which a given reaction in the network occurs. Simple metabolic pathway analysis can be done by hand, but most analyses require the use of software to perform the computations. These programs use complex linear algebra algorithms to solve these models. To solve a network using the equation for determined systems, of the form $G_m \cdot V_m + G_x \cdot V_x = 0$, one must input the necessary information about the relevant reactions and their fluxes. Information about the reactions (such as the reactants and stoichiometry) is contained in the matrices $G_x$ and $G_m$. Matrices $V_m$ and $V_x$ contain the fluxes of the relevant reactions. When solved, the equation yields the values of all the unknown fluxes (contained in $V_x$).
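A minimal sketch of solving such a determined system with linear algebra; the two-metabolite, three-reaction stoichiometry and the measured flux are invented for illustration:

```python
# Toy steady-state flux balance: G_m @ V_m + G_x @ V_x = 0, solved for V_x.
import numpy as np

# Columns are stoichiometries of the measured / unknown reactions.
G_m = np.array([[1.0],          # measured uptake produces metabolite A
                [0.0]])
G_x = np.array([[-1.0,  0.0],   # unknown reaction 1 consumes A ...
                [ 1.0, -1.0]])  # ... and produces B; reaction 2 consumes B
V_m = np.array([2.0])           # measured uptake flux (assumed units)

# At steady state, G_x @ V_x = -G_m @ V_m for a determined system.
V_x = np.linalg.solve(G_x, -G_m @ V_m)
print("unknown fluxes V_x:", V_x)   # both internal fluxes carry the uptake
```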
Determining the optimal genetic manipulations
After solving for the fluxes of reactions in the network, it is necessary to determine which reactions may be altered in order to maximize the yield of the desired product. To determine what specific genetic manipulations to perform, it is necessary to use computational algorithms, such as OptGene or OptFlux. They provide recommendations for which genes should be overexpressed, knocked out, or introduced in a cell to allow increased production of the desired product. For example, if a given reaction has particularly low flux and is limiting the amount of product, the software may recommend that the enzyme catalyzing this reaction should be overexpressed in the cell to increase the reaction flux. The necessary genetic manipulations can be performed using standard molecular biology techniques. Genes may be overexpressed or knocked out from an organism, depending on their effect on the pathway and the ultimate goal.
Experimental measurements
In order to create a solvable model, it is often necessary to have certain fluxes already known or experimentally measured. In addition, in order to verify the effect of genetic manipulations on the metabolic network (to ensure they align with the model), it is necessary to experimentally measure the fluxes in the network. To measure reaction fluxes, carbon flux measurements are made using carbon-13 isotopic labeling. The organism is fed a mixture that contains molecules where specific carbons are engineered to be carbon-13 atoms, instead of carbon-12. After these molecules are used in the network, downstream metabolites also become labeled with carbon-13, as they incorporate those atoms in their structures. The specific labeling pattern of the various metabolites is determined by the reaction fluxes in the network. Labeling patterns may be measured using techniques such as gas chromatography-mass spectrometry (GC-MS) along with computational algorithms to determine reaction fluxes.
See also
Bacterial transformation
Bioreactor
Genetic engineering
Synthetic biological circuit
Synthetic biology
References
External links
Biotechnology Industry Organization(BIO) website:
BIO Website
Biological engineering | Metabolic engineering | [
"Engineering",
"Biology"
] | 1,836 | [
"Biological engineering"
] |
8,912 | https://en.wikipedia.org/wiki/Drake%20equation | The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy.
The equation was formulated in 1961 by Frank Drake, not for purposes of quantifying the number of civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for extraterrestrial intelligence (SETI). The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life. It is more properly thought of as an approximation than as a serious attempt to determine a precise number.
Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the estimated values for several of its factors are highly conjectural, the combined multiplicative effect being that the uncertainty associated with any derived value is so large that the equation cannot be used to draw firm conclusions.
Equation
The Drake equation is:
$$N = R_* \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L$$
where
$N$ = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
$R_*$ = the average rate of star formation in our Galaxy.
$f_p$ = the fraction of those stars that have planets.
$n_e$ = the average number of planets that can potentially support life per star that has planets.
$f_\ell$ = the fraction of planets that could support life that actually develop life at some point.
$f_i$ = the fraction of planets with life that go on to develop intelligent life (civilizations).
$f_c$ = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
$L$ = the length of time for which such civilizations release detectable signals into space.
This form of the equation first appeared in Drake's 1965 paper.
History
In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal Nature with the provocative title "Searching for Interstellar Communications". Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum.
Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10 followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million million has the right combination of chemicals, temperature, water, days and nights to support planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where life has been forged by evolution."
Seven months after Cocconi and Morrison published their article, Drake began searching for extraterrestrial intelligence in an experiment called Project Ozma. It was the first systematic search for signals from communicative extraterrestrial civilizations. Using the dish of the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, Drake monitored two nearby Sun-like stars: Epsilon Eridani and Tau Ceti, slowly scanning frequencies close to the 21 cm wavelength for six hours per day from April to July 1960. The project was well designed, inexpensive, and simple by today's standards. It detected no signals.
Soon thereafter, Drake hosted the first search for extraterrestrial intelligence conference on detecting their radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting.
The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan, and radio-astronomer Otto Struve. These participants called themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall.
Usefulness
The Drake equation results in a summary of the factors affecting the likelihood that we might detect radio-communication from intelligent extraterrestrial life. The last three parameters, $f_i$, $f_c$, and $L$, are not known and are very difficult to estimate, with values ranging over many orders of magnitude (see below). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the contemplation of all the various concepts which scientists must incorporate when considering the question of life elsewhere, and gives the question of life elsewhere a basis for scientific analysis. The equation has helped draw attention to some particular scientific problems related to life in the universe, for example abiogenesis, the development of multi-cellular life, and the development of intelligence itself.
Within the limits of existing human technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved significantly since the early 1960s. SETI efforts since 1961 have conclusively ruled out widespread alien emissions near the 21 cm wavelength of the hydrogen frequency.
Estimates
Original estimates
There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were:
$R_*$ = 1 yr⁻¹ (1 star formed per year, on the average over the life of the galaxy; this was regarded as conservative)
$f_p$ = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets)
$n_e$ = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life)
$f_\ell$ = 1 (100% of these planets will develop life)
$f_i$ = 1 (100% of which will develop intelligent life)
$f_c$ = 0.1 to 0.2 (10–20% of which will be able to communicate)
$L$ = somewhere between 1000 and 100,000,000 years
Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results). Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that $N \approx L$, and there were probably between 1000 and 100,000,000 planets with civilizations in the Milky Way Galaxy.
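These bounds are easy to reproduce by evaluating the equation at the minimum and maximum guesses listed above:

```python
# Evaluate N = R* fp ne fl fi fc L for the 1961 minimum and maximum guesses.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

low = drake(1, 0.2, 1, 1, 1, 0.1, 1_000)
high = drake(1, 0.5, 5, 1, 1, 0.2, 100_000_000)
print(f"N ranges from {low:,.0f} to {high:,.0f}")   # 20 to 50,000,000
```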
Current estimates
This section discusses and attempts to list the best current estimates for the parameters of the Drake equation.
Rate of star creation in this Galaxy, R*
Calculations in 2010, from NASA and the European Space Agency, indicate that the rate of star formation in this Galaxy is about 0.68–1.45 solar masses of material per year. To get the number of stars per year, we divide this by the initial mass function (IMF) for stars, where the average new star's mass is about 0.5 solar masses. This gives a star formation rate of about 1.5–3 stars per year.
Fraction of those stars that have planets, f_p
Analysis of microlensing surveys, in 2012, has found that f_p may approach 1—that is, stars are orbited by planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way star.
Average number of planets that might support life per star that has planets, n_e
In November 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy. 11 billion of these estimated planets may be orbiting sun-like stars. Since there are about 100 billion stars in the galaxy, this implies f_p · n_e is roughly 0.4. The nearest planet in the habitable zone is Proxima Centauri b, which is about 4.2 light-years away.
The consensus at the Green Bank meeting was that n_e had a minimum value between 3 and 5. Dutch science journalist Govert Schilling has opined that this is optimistic. Even if planets are in the habitable zone, the number of planets with the right proportion of elements is difficult to estimate. Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time.
The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets.
On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets. It is now estimated that even tidally locked planets close to red dwarf stars might have habitable zones, although the flaring behavior of these stars might speak against this. The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moons Titan and Enceladus) adds further uncertainty to this figure.
The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for planets, including being in galactic zones with suitably low radiation, high star metallicity, and low enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have a planetary system with large gas giants which provide bombardment protection without a hot Jupiter; and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to generate seasonal variation.
Fraction of the above that actually go on to develop life, f_l
Geological evidence from the Earth suggests that f_l may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, without assuming that the underlying distribution of f_l is the same for all planets in the Milky Way, there are zero degrees of freedom, permitting no valid estimates to be made. If life (or evidence of past life) were to be found on Mars, Europa, Enceladus or Titan that developed independently from life on Earth, it would imply a value for f_l close to 1. While this would raise the number of degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet. It is also possible that life arose more than once, but that other branches were out-competed, or died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms." As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship".
In 2020, a paper by scholars at the University of Nottingham proposed an "Astrobiological Copernican" principle, based on the Principle of Mediocrity, and speculated that "intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution". In the authors' framework, f_l, f_i, and f_c are all set to a probability of 1 (certainty). Their resultant calculation concludes there are more than thirty current technological civilizations in the galaxy (disregarding error bars).
Fraction of the above that develops intelligent life, f_i
This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent and, from this, infer a tiny value for f_i. Likewise, the Rare Earth hypothesis, notwithstanding its low value for n_e above, also holds that a low value for f_i dominates the analysis. Those who favor higher values note the generally increasing complexity of life over time, concluding that the appearance of intelligence is almost inevitable, implying an f_i approaching 1. Skeptics point out that the large spread of values in this factor and others make all estimates unreliable. (See Criticism.)
In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the snowball Earth or research into extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a discovery that life did form on Mars but ceased to exist might raise the estimate of f_l but would indicate that in half the known cases, intelligent life did not develop.
Estimates of f_i have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation.
There has been quantitative work to begin to define f_i. One example is a Bayesian analysis published in 2020. In the conclusion, the author cautions that this study applies to Earth's conditions. In Bayesian terms, the study favors the formation of intelligence on a planet with identical conditions to Earth but does not do so with high confidence.
Planetary scientist Pascal Lee of the SETI Institute proposes that this fraction is very low (0.0002). He based this estimate on how long it took Earth to develop intelligent life (1 million years since Homo erectus evolved, compared to 4.6 billion years since Earth formed).
Fraction of the above revealing their existence via signal release into space, f_c
For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts covering only a tiny fraction of the stars that might look for human presence. (See Arecibo message, for example). There is considerable speculation why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than present day humans. By this standard, the Earth is a communicating civilization.
Another question is what percentage of civilizations in the galaxy are close enough for us to detect, assuming that they send out signals. For example, existing Earth radio telescopes could only detect Earth radio transmissions from roughly a light year away.
Lifetime of such a civilization wherein it communicates its signals into space, L
Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly civilizations. Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of most of these civilizations was followed by later civilizations that carried on the technologies, so it is doubtful that they are separate civilizations in the context of the Drake equation. In the expanded version, including a reappearance number, this lack of specificity in defining single civilizations does not matter for the result, since such a civilization turnover could be described as an increase in the reappearance number rather than an increase in L, stating that a civilization reappears in the form of the succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid.
David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It will then last for an indefinite period of time, making the value for L potentially billions of years. If this is the case, then he proposes that the Milky Way Galaxy may have been steadily accumulating advanced civilizations since it formed. He proposes that the last factor L be replaced with f_IC · T, where f_IC is the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out), and T representing the length of time during which this process has been going on. This has the advantage that T would be a relatively easy-to-discover number, as it would simply be some fraction of the age of the universe.
It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other.
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare. Paleobiologist Olev Vinn suggests that the lifetime of most technological civilizations is brief due to inherited behavior patterns present in all intelligent organisms. These behaviors, incompatible with civilized conditions, inevitably lead to self-destruction soon after the emergence of advanced technologies.
An intelligent civilization might not be organic, as some have suggested that artificial general intelligence may replace humanity.
Range of results
As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions, as the values used in portions of the Drake equation are not well established. In particular, the result can be N ≪ 1, meaning we are likely alone in the galaxy, or N ≫ 1, implying there are many civilizations we might contact. One of the few points of wide agreement is that the presence of humanity implies a probability of intelligence arising of greater than zero.
As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value of f_p · n_e · f_l = 10^−5, Mayr's view on intelligence arising, Drake's view of communication, and Shermer's estimate of lifetime:
R* = 1.5–3 yr−1, f_p · n_e · f_l = 10^−5, f_i = 10^−9, f_c = 0.1–0.2 [Drake, above], and L = 304 years
gives:
N = 9.1 × 10^−13,
i.e., suggesting that we are probably alone in this galaxy, and possibly in the observable universe.
On the other hand, with larger values for each of the parameters above, values of N can be derived that are greater than 1. The following higher values have been proposed for each of the parameters:
R* = 1.5–3 yr−1, f_p = 1, n_e = 0.2, f_l = 0.13, f_i = 1, f_c = 0.2 [Drake, above], and L = 10^9 years
Use of these parameters gives:
N = 15,600,000
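To verify the arithmetic for both cases, a small Python sketch (not from the article; the plugged-in values are the reconstructed ones above, which are themselves uncertain, with R* and f_c picked from the quoted ranges):

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    # Low case: the rare Earth product f_p * n_e * f_l = 1e-5 is folded into f_p.
    print(drake(1.5, 1e-5, 1, 1e-9, 1, 0.2, 304))   # ~9.1e-13
    # High case:
    print(drake(3, 1, 0.2, 0.13, 1, 0.2, 1e9))      # 15,600,000.0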
Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.
Possible former technological civilizations
In 2016, Adam Frank and Woodruff Sullivan modified the Drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be, to give the result that Earth hosts the only technological species that has ever arisen, for two cases: (a) this Galaxy, and (b) the universe as a whole. By asking this different question, one removes the lifetime and simultaneous communication uncertainties. Since the numbers of habitable planets per star can today be reasonably estimated, the only remaining unknown in the Drake equation is the probability that a habitable planet ever develops a technological species over its lifetime. For Earth to have the only technological species that has ever occurred in the universe, they calculate the probability of any given habitable planet ever developing a technological species must be less than 2.5 × 10^−24. Similarly, for Earth to have been the only case of hosting a technological species over the history of this Galaxy, the odds of a habitable zone planet ever hosting a technological species must be less than 1.7 × 10^−11 (about 1 in 60 billion). The figure for the universe implies that it is extremely unlikely that Earth hosts the only technological species that has ever occurred. On the other hand, for this Galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this Galaxy.
Modifications
As many observers have pointed out, the Drake equation is a very simple model that omits potentially relevant parameters, and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms.
Combining the estimates of the original six factors by major researchers via a Monte Carlo procedure leads to a best value for the non-longevity factors of 0.85 yr−1. This result differs insignificantly from the estimate of unity given both by Drake and the Cyclops report.
Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society". Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed.
Colonization It has been proposed to generalize the Drake equation to include additional effects of alien civilizations colonizing other star systems. Each original site expands with an expansion velocity v, and establishes additional sites that survive for a lifetime L. The result is a more complex set of three equations.
Reappearance factor The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the same planet. Thus, if n_r is the average number of times a new civilization reappears on the same planet where a previous civilization once has appeared and ended, then the total number of civilizations on such a planet would be 1 + n_r, which is the actual reappearance factor added to the equation.
The factor n_r depends on what generally is the cause of civilization extinction. If it is generally by temporary uninhabitability, for example a nuclear winter, then n_r may be relatively high. On the other hand, if it is generally by permanent uninhabitability, such as stellar evolution, then n_r may be almost zero. In the case of total life extinction, a similar factor may be applicable for f_l, that is, how many times life may appear on a planet where it has appeared once.
METI factor Alexander Zaitsev said that to be in a communicative phase and emit dedicated messages are not the same. For example, humans, although being in a communicative phase, are not a communicative civilization; we do not practise such activities as the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (messaging to extraterrestrial intelligence) to the classical Drake equation. He defined the factor as "the fraction of communicative civilizations with clear and non-paranoid planetary consciousness", or alternatively expressed, the fraction of communicative civilizations that actually engage in deliberate interstellar transmission.
The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate.
Biogenic gases Astronomer Sara Seager proposed a revised equation that focuses on the search for planets with biosignature gases. These gases are produced by living organisms and can accumulate in a planet's atmosphere to levels that can be detected with remote space telescopes.
The Seager equation looks like this (a numerical sketch follows the parameter list below):
N = N* · F_Q · F_HZ · F_O · F_L · F_S
where:
N = the number of planets with detectable signs of life
N* = the number of stars observed
F_Q = the fraction of stars that are quiet
F_HZ = the fraction of stars with rocky planets in the habitable zone
F_O = the fraction of those planets that can be observed
F_L = the fraction that have life
F_S = the fraction on which life produces a detectable signature gas
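A numerical sketch in Python (illustrative only: the function name and the plugged-in values are placeholders, not Seager's published estimates):

    def seager(N_star, F_Q, F_HZ, F_O, F_L, F_S):
        # N = N* * F_Q * F_HZ * F_O * F_L * F_S
        return N_star * F_Q * F_HZ * F_O * F_L * F_S

    # Hypothetical inputs: 30,000 stars surveyed, 20% quiet, 15% with rocky
    # habitable-zone planets, 0.1% observable, all with life, half detectable.
    print(seager(30_000, 0.2, 0.15, 0.001, 1.0, 0.5))  # -> 0.45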
Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?"
Carl Sagan's version of the Drake equation American astronomer Carl Sagan made some modifications in the Drake equation and presented it in the 1980 program Cosmos: A Personal Voyage. The modified equation is shown below:
N = N* · f_p · n_e · f_l · f_i · f_c · f_L
where
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
and
N* = the number of stars in the Milky Way Galaxy
f_p = the fraction of those stars that have planets.
n_e = the average number of planets that can potentially support life per star that has planets.
f_l = the fraction of planets that could support life that actually develop life at some point.
f_i = the fraction of planets with life that go on to develop intelligent life (civilizations).
f_c = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
f_L = the fraction of a planetary lifetime graced by a technological civilization
Criticism
Criticism of the Drake equation is varied. Firstly, many of the terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around the present day understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.
Others point out that the equation was formulated before our understanding of the universe had matured. Astrophysicist Ethan Siegel said:
One reply to such criticisms is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference.
Fermi paradox
A civilization lasting for tens of millions of years could be able to spread throughout the galaxy, even at the slow speeds foreseeable with present-day technology. However, no confirmed signs of civilizations or intelligent life elsewhere have been found, either in this Galaxy or in the observable universe of 2 trillion galaxies. According to this line of thinking, the tendency to fill (or at least explore) all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?".
A large number of explanations have been proposed to explain this lack of contact; a book published in 2015 elaborated on 75 different explanations. In terms of the Drake Equation, the explanations can be divided into three classes:
Few intelligent civilizations ever arise. This is an argument that at least one of the first few terms, R* · f_p · n_e · f_l · f_i, has a low value. The most common suspect is f_i, but explanations such as the rare Earth hypothesis argue that n_e is the small term.
Intelligent civilizations exist, but we see no evidence, meaning f_c is small. Typical arguments include that civilizations are too far apart, it is too expensive to spread throughout the galaxy, civilizations broadcast signals for only a brief period of time, communication is dangerous, and many others.
The lifetime of intelligent, communicative civilizations is short, meaning the value of L is small. Drake suggested that a large number of extraterrestrial civilizations would form, and he further speculated that the lack of evidence of such civilizations may be because technological civilizations tend to disappear rather quickly. Typical explanations include it is the nature of intelligent life to destroy itself, it is the nature of intelligent life to destroy others, they tend to be destroyed by natural events, and others.
These lines of reasoning lead to the Great Filter hypothesis, which states that since there are no observed extraterrestrial civilizations despite the vast number of stars, at least one step in the process must be acting as a filter to reduce the final value. According to this view, either it is very difficult for intelligent life to arise, or the lifetime of technologically advanced civilizations, or the period of time they reveal their existence must be relatively short.
An analysis by Anders Sandberg, Eric Drexler and Toby Ord suggests "a substantial ex ante (predicted) probability of there being no other intelligent life in our observable universe".
In fiction and popular culture
The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown on Star Trek, the television series he created. However, Roddenberry did not have the equation with him, and he was forced to "invent" it for his original proposal. The invented equation created by Roddenberry is:
Regarding Roddenberry's fictional version of the equation, Drake himself commented that a number raised to the first power is just the number itself.
A commemorative plate on NASA's Europa Clipper mission, planned for launch in October 2024, features a poem by the U.S. Poet Laureate Ada Limón, waveforms of the word 'water' in 103 languages, a schematic of the water hole, the Drake equation, and a portrait of planetary scientist Ron Greeley on it.
The track Abiogenesis on the Carbon Based Lifeforms album World of Sleepers features the Drake equation in a spoken voice-over.
See also
The Search for Life: The Drake Equation, BBC documentary
Notes
References
Further reading
External links
Interactive Drake Equation Calculator
Frank Drake's 2010 article on "The Origin of the Drake Equation"
"Only a matter of time, says Frank Drake". A Q&A with Frank Drake in February 2010
Macromedia Flash page allowing the user to modify Drake's values from PBS's Nova
"The Drake Equation", Astronomy Cast episode #23; includes full transcript
Animated simulation of the Drake equation. ()
"The Alien Equation", BBC Radio program Discovery (22 September 2010)
"Reflections on the Equation" (PDF), by Frank Drake, 2013
1961 introductions
Astrobiology
Astronomical controversies
Astronomical hypotheses
Eponymous equations of physics
Fermi paradox
Interstellar messages
Search for extraterrestrial intelligence | Drake equation | [
"Physics",
"Astronomy",
"Biology"
] | 6,582 | [
"Astronomical hypotheses",
"Origin of life",
"Equations of physics",
"History of astronomy",
"Speculative evolution",
"Eponymous equations of physics",
"Astrobiology",
"Astronomical controversies",
"Fermi paradox",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
9,109 | https://en.wikipedia.org/wiki/Diophantine%20equation | In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents.
Diophantine problems have fewer equations than unknowns and involve finding integers that solve simultaneously all equations. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century.
Examples
In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants:
Linear Diophantine equations
One equation
The simplest linear Diophantine equation takes the form
ax + by = c,
where a, b and c are given integers. The solutions are described by the following theorem:
This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have
a(x + kv) + b(y − ku) = ax + by + k(av − bu) = ax + by + k(udv − vdu) = ax + by,
showing that (x + kv, y − ku) is another solution. Finally, given two solutions (x1, y1) and (x2, y2) such that ax1 + by1 = ax2 + by2 = c,
one deduces that u(x2 − x1) + v(y2 − y1) = 0.
As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, and thus that there exists an integer k such that both x2 − x1 = kv and y2 − y1 = −ku. Therefore, x2 = x1 + kv and y2 = y1 − ku,
which completes the proof.
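To make the theorem concrete, here is a small Python sketch (not part of the article; function names are ad hoc) that finds one solution of ax + by = c with the extended Euclidean algorithm and checks a few members of the resulting solution family:

    def extended_gcd(a, b):
        # Return (g, x, y) with a*x + b*y == g == gcd(a, b) (Bezout's identity).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_linear(a, b, c):
        # One integer solution (x, y) of a*x + b*y == c, or None if none exists.
        g, e, f = extended_gcd(a, b)
        if c % g:
            return None          # solvable iff gcd(a, b) divides c
        h = c // g
        return e * h, f * h

    a, b, c = 258, 147, 369
    x0, y0 = solve_linear(a, b, c)
    g = extended_gcd(a, b)[0]
    u, v = a // g, b // g        # all solutions: (x0 + k*v, y0 - k*u)
    for k in range(-2, 3):
        assert a * (x0 + k * v) + b * (y0 - k * u) == c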
Chinese remainder theorem
The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n_1, …, n_k be pairwise coprime integers greater than one, a_1, …, a_k be arbitrary integers, and N be the product n_1 ⋯ n_k. The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution (x, x_1, …, x_k) such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N:
x = a_1 + n_1 x_1
⋮
x = a_k + n_k x_k
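A hedged Python sketch of this construction (not from the article; it builds the solution incrementally and relies on Python 3.8+'s pow(N, -1, n) for modular inverses):

    def crt(residues, moduli):
        # Solve x = a_i (mod n_i) for pairwise coprime moduli n_i;
        # returns the unique x with 0 <= x < product of the moduli.
        x, N = 0, 1
        for a, n in zip(residues, moduli):
            t = (a - x) * pow(N, -1, n) % n   # correction preserving earlier congruences
            x += t * N
            N *= n
        return x

    print(crt([2, 3, 2], [3, 5, 7]))  # -> 23, Sunzi's classical example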
System of linear Diophantine equations
More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written
A X = C,
where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers.
The computation of the Smith normal form of A provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix
B = [b_{i,j}] = U A V
is such that b_{i,i} is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as
B (V^−1 X) = U C.
Calling y_i the entries of V^−1 X and d_i those of D = U C, this leads to the system
b_{i,i} y_i = d_i for 1 ≤ i ≤ k, and 0 = d_i for k < i ≤ m.
This system is equivalent to the given one in the following sense: A column matrix of integers X is a solution of the given system if and only if X = V Y for some column matrix of integers Y such that B Y = D.
It follows that the system has a solution if and only if b_{i,i} divides d_i for i ≤ k and d_i = 0 for i > k. If this condition is fulfilled, the solutions of the given system are
X = V (d_1/b_{1,1}, …, d_k/b_{k,k}, h_{k+1}, …, h_n)^T,
where h_{k+1}, …, h_n are arbitrary integers.
Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."
Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that include also inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.
Homogeneous equations
A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem
x^d + y^d − z^d = 0.
As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface.
Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent with testing if a rational number is the d-th power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved.
For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem).
For the degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.
Degree two
Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.
For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation
x^2 + y^2 = 3z^2
does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius √3 centered at the origin.
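A short Python check of this argument (illustrative only): it searches a finite box for nontrivial solutions and tabulates the residues of both sides modulo 4:

    # Finite search (not a proof): no nontrivial solution with |x|, |y|, |z| <= 50.
    r = range(-50, 51)
    print([(x, y, z) for x in r for y in r for z in r
           if x * x + y * y == 3 * z * z and (x, y, z) != (0, 0, 0)])  # -> []

    # The modular obstruction: residues of each side modulo 4.
    print({(x * x + y * y) % 4 for x in range(4) for y in range(4)})  # {0, 1, 2}
    print({(3 * z * z) % 4 for z in range(4)})                        # {0, 3}: overlap only at 0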
More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists.
If a non-trivial integer solution is known, one may produce all other solutions in the following way.
Geometric interpretation
Let
Q(x_1, …, x_n) = 0
be a homogeneous Diophantine equation, where Q(x_1, …, x_n) is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all x_i are zero. If (a_1, …, a_n) is a non-trivial integer solution of this equation, then (a_1, …, a_n) are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if (p_1/q, …, p_n/q) are homogeneous coordinates of a rational point of this hypersurface, where q, p_1, …, p_n are integers, then (p_1, …, p_n) is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form
(k a_1/d, …, k a_n/d),
where k is any integer, and d is the greatest common divisor of the a_i.
It follows that solving the Diophantine equation is completely reduced to finding the rational points of the corresponding projective hypersurface.
Parameterization
Let now A = (a_1, …, a_n) be an integer solution of the equation Q(x_1, …, x_n) = 0. As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.
More precisely, one may proceed as follows.
By permuting the indices, one may suppose, without loss of generality, that a_n ≠ 0. Then one may pass to the affine case by considering the affine hypersurface defined by
q(x_1, …, x_{n−1}) = Q(x_1, …, x_{n−1}, 1),
which has the rational point
R = (r_1, …, r_{n−1}), with r_i = a_i/a_n.
If this rational point is a singular point, that is if all partial derivatives are zero at R, all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables
y_i = x_i − r_i
does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.
If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case.
In the general case, consider the parametric equation of a line passing through R:
x_2 = r_2 + t_2(x_1 − r_1)
⋮
x_{n−1} = r_{n−1} + t_{n−1}(x_1 − r_1).
Substituting this in q, one gets a polynomial of degree two in x_1, that is zero for x_1 = r_1. It is thus divisible by x_1 − r_1. The quotient is linear in x_1, and may be solved for expressing x_1 as a quotient of two polynomials of degree at most two in t_2, …, t_{n−1}, with integer coefficients:
x_1 = f_1(t_2, …, t_{n−1}) / f_n(t_2, …, t_{n−1}).
Substituting this in the expressions for x_2, …, x_{n−1}, one gets, for i = 1, …, n − 1,
x_i = f_i(t_2, …, t_{n−1}) / f_n(t_2, …, t_{n−1}),
where f_1, …, f_n are polynomials of degree at most two with integer coefficients.
Then, one can return to the homogeneous case. Let, for i = 1, …, n,
F_i(t_1, …, t_{n−1})
be the homogenization of f_i. These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q:
(x_1 : ⋯ : x_n) = (F_1(t_1, …, t_{n−1}) : ⋯ : F_n(t_1, …, t_{n−1})).
A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t_1, …, t_{n−1}. As F_1, …, F_n are homogeneous polynomials, the point is not changed if all the t_i are multiplied by the same rational number. Thus, one may suppose that t_1, …, t_{n−1} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences (x_1, …, x_n) where, for i = 1, …, n,
x_i = k F_i(t_1, …, t_{n−1}) / d,
where k is an integer, t_1, …, t_{n−1} are coprime integers, and d is the greatest common divisor of the n integers F_i(t_1, …, t_{n−1}).
One could hope that the coprimality of the t_i could imply that d = 1. Unfortunately this is not the case, as shown in the next section.
Example of Pythagorean triples
The equation
x^2 + y^2 = z^2
is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.
For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope:
y = t(x + 1).
Putting this in the circle equation
x^2 + y^2 = 1,
one gets
x^2 − 1 + t^2(x + 1)^2 = 0.
Dividing by x + 1 results in
x − 1 + t^2(x + 1) = 0,
which is easy to solve in x:
x = (1 − t^2) / (1 + t^2).
It follows
y = t(x + 1) = 2t / (1 + t^2).
Homogenizing as described above, one gets all solutions as
x = k(s^2 − t^2)/d, y = 2kst/d, z = k(s^2 + t^2)/d,
where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even.
The primitive triples are the solutions where k = 1 and s > t > 0.
This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y.
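The parameterization above translates directly into code. A Python sketch (not from the article) that enumerates primitive triples by taking k = 1 and coprime s > t > 0 of opposite parity, so that the gcd d of the three numerators is 1:

    from math import gcd

    def primitive_triples(limit):
        # Yield (x, y, z) = (s^2 - t^2, 2st, s^2 + t^2) for coprime s > t > 0
        # of opposite parity, with s up to `limit`.
        for s in range(2, limit + 1):
            for t in range(1, s):
                if gcd(s, t) == 1 and (s - t) % 2 == 1:
                    yield s * s - t * t, 2 * s * t, s * s + t * t

    for x, y, z in primitive_triples(5):
        assert x * x + y * y == z * z
        print(x, y, z)   # (3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), ...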
Diophantine analysis
Typical questions
The questions asked in Diophantine analysis include:
Are there any solutions?
Are there any solutions beyond some that are easily found by inspection?
Are there finitely or infinitely many solutions?
Can all solutions be found in theory?
Can one in practice compute a full list of solutions?
These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.
Typical problem
The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is not any other solution with A and B positive integers less than 10.
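Such digit puzzles are easy to settle by exhaustive search; a minimal Python sketch (illustrative, not from the article):

    # Father's age 10*A + B, son's age 10*B + A, father = 2*son - 1.
    print([(a, b) for a in range(1, 10) for b in range(1, 10)
           if 10 * a + b == 2 * (10 * b + a) - 1])   # -> [(7, 3)]: ages 73 and 37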
Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.
17th and 18th centuries
In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation a^n + b^n = c^n has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.
In 1657, Fermat attempted to solve the Diophantine equation 61x^2 + 1 = y^2 (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method).
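The fundamental solution of such a Pell equation can be computed from the continued-fraction expansion of √61; a Python sketch (not from the article, using the standard continued-fraction recurrences):

    from math import isqrt

    def pell_fundamental(D):
        # Smallest (p, q) with p*p - D*q*q == 1, from the continued fraction
        # of sqrt(D); D must be a positive non-square integer.
        a0 = isqrt(D)
        m, d, a = 0, 1, a0
        p_prev, p = 1, a0          # convergent numerators
        q_prev, q = 0, 1           # convergent denominators
        while p * p - D * q * q != 1:
            m = d * a - m
            d = (D - m * m) // d
            a = (a0 + m) // d
            p, p_prev = a * p + p_prev, p
            q, q_prev = a * q + q_prev, q
        return p, q

    y, x = pell_fundamental(61)    # 61*x^2 + 1 == y^2
    print(x, y)                    # 226153980 1766319049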
Hilbert's tenth problem
In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.
Diophantine geometry
Diophantine geometry is the application of techniques from algebraic geometry to equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed.
Modern research
The oldest general method for solving a Diophantine equation, or for proving that there is no solution, is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations.
The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.
During the 20th century, a new approach has been deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates.
This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.
Infinite Diophantine equations
An example of an infinite Diophantine equation is:
n = a^2 + 2b^2 + 3c^2 + 4d^2 + 5e^2 + ⋯,
which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to:
n = a^2 + 4b^2 + 9c^2 + 16d^2 + 25e^2 + ⋯,
which does not always have a solution for positive n.
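As a finite illustration (under my own conventions, which need not match those of the theta-function literature: the unknowns are taken to be nonnegative integers, and terms with coefficient larger than n are forced to zero, so the sum is effectively finite), a Python sketch counting representations under the first equation:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def count_reps(n, c=1):
        # Count nonnegative solutions of n = c*x_c^2 + (c+1)*x_{c+1}^2 + ...
        if n == 0:
            return 1
        if c > n:
            return 0
        total, x = 0, 0
        while c * x * x <= n:
            total += count_reps(n - c * x * x, c + 1)
            x += 1
        return total

    print([count_reps(n) for n in range(1, 8)])  # e.g. count_reps(4) == 3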
Exponential Diophantine equations
If a Diophantine equation has as an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. Examples include:
the Ramanujan–Nagell equation, 2^n − 7 = x^2
the equation of the Fermat–Catalan conjecture and Beal's conjecture, a^m + b^n = c^k, with inequality restrictions on the exponents
the Erdős–Moser equation, 1^k + 2^k + ⋯ + (m − 1)^k = m^k, where m and k are positive integers
A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad-hoc methods such as Størmer's theorem or even trial and error.
See also
Kuṭṭaka, Aryabhata's algorithm for solving linear Diophantine equations in two unknowns
Notes
References
Further reading
Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhauser, Basel/ Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC. 1997.
Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré" Historia Mathematica 8 (1981), 393–416.
Bashmakova, Izabella G., Slavutin, E. I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka 1984 [in Russian].
Bashmakova, Izabella G. "Diophantine Equations and the Evolution of Algebra", American Mathematical Society Translations 147 (2), 1990, pp. 85–100. Translated by A. Shenitzer and H. Grant.
Bogdan Grechuk (2024). Polynomial Diophantine Equations: A Systematic Approach, Springer.
Rashed, Roshdi, Histoire de l'analyse diophantienne classique : D'Abū Kāmil à Fermat, Berlin, New York : Walter de Gruyter.
External links
Diophantine Equation. From MathWorld at Wolfram Research.
Dario Alpern's Online Calculator. Retrieved 18 March 2009 | Diophantine equation | [
"Mathematics"
] | 4,038 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
9,146 | https://en.wikipedia.org/wiki/Dolly%20%28sheep%29 | Dolly (5 July 1996 – 14 February 2003) was a female Finn-Dorset sheep and the first mammal that was cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned.
The employment of adult somatic cells in lieu of embryonic stem cells for cloning emerged from the foundational work of John Gurdon, who cloned African clawed frogs in 1958 with this approach. The successful cloning of Dolly led to widespread advancements within stem cell research, including the discovery of induced pluripotent stem cells.
Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanized at the age of six years due to a progressive lung disease. No cause which linked the disease to her cloning was found.
Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003.
Genesis
Dolly was cloned by Keith Campbell, Ian Wilmut and colleagues at the Roslin Institute, part of the University of Edinburgh, Scotland, and the biotechnology company PPL Therapeutics, based near Edinburgh. The funding for Dolly's cloning was provided by PPL Therapeutics and the Ministry of Agriculture. She was born on 5 July 1996. She has been called "the world's most famous sheep" by sources including BBC News and Scientific American.
The cell used as the donor for the cloning of Dolly was taken from a mammary gland, and the production of a healthy clone, therefore, proved that a cell taken from a specific part of the body could recreate a whole individual. On Dolly's name, Wilmut stated "Dolly is derived from a mammary gland cell and we couldn't think of a more impressive pair of glands than Dolly Parton's."
Birth
Dolly was born on 5 July 1996 and had three mothers: one provided the egg, another the DNA, and a third carried the cloned embryo to term. She was created using the technique of somatic cell nuclear transfer, where the cell nucleus from an adult cell is transferred into an unfertilized oocyte (developing egg cell) that has had its cell nucleus removed. The hybrid cell is then stimulated to divide by an electric shock, and when it develops into a blastocyst it is implanted in a surrogate mother. Dolly was the first clone produced from a cell taken from an adult mammal. The production of Dolly showed that genes in the nucleus of such a mature differentiated somatic cell are still capable of reverting to an embryonic totipotent state, creating a cell that can then go on to develop into any part of an animal.
Dolly's existence was announced to the public on 22 February 1997. It gained much attention in the media. A commercial with Scottish scientists playing with sheep was aired on TV, and a special report in Time magazine featured Dolly. Science featured Dolly as the breakthrough of the year. Even though Dolly was not the first animal cloned, she received media attention because she was the first cloned from an adult cell.
Life
Dolly lived her entire life at the Roslin Institute in Midlothian. There she was bred with a Welsh Mountain ram and produced six lambs in total. Her first lamb, named Bonnie, was born in April 1998. The next year, Dolly produced twin lambs Sally and Rosie; further, she gave birth to triplets Lucy, Darcy and Cotton in 2000. In late 2001, at the age of four, Dolly developed arthritis and started to have difficulty walking. This was treated with anti-inflammatory drugs.
Death
On 14 February 2003, Dolly was euthanised because she had a progressive lung disease and severe arthritis. A Finn-Dorset such as Dolly has a life expectancy of around 11 to 12 years, but Dolly lived only 6.5 years. A post-mortem examination showed she had a form of lung cancer called ovine pulmonary adenocarcinoma, also known as Jaagsiekte, which is a fairly common disease of sheep and is caused by the retrovirus JSRV. Roslin scientists stated that they did not think there was a connection with Dolly being a clone, and that other sheep in the same flock had died of the same disease. Such lung diseases are a particular danger for sheep kept indoors, and Dolly had to sleep inside for security reasons.
Some in the press speculated that a contributing factor to Dolly's death was that she could have been born with a genetic age of six years, the same age as the sheep from which she was cloned. One basis for this idea was the finding that Dolly's telomeres were short, which is typically a result of the aging process. The Roslin Institute stated that intensive health screening did not reveal any abnormalities in Dolly that could have come from advanced aging.
In 2016, scientists reported no defects in thirteen cloned sheep, including four from the same cell line as Dolly. The first study to review the long-term health outcomes of cloning, the authors found no evidence of late-onset, non-communicable diseases other than some minor examples of osteoarthritis and concluded "We could find no evidence, therefore, of a detrimental long-term effect of cloning by SCNT on the health of aged offspring among our cohort."
After her death Dolly's body was preserved via taxidermy and is currently on display at the National Museum of Scotland in Edinburgh.
Legacy
After cloning was successfully demonstrated through the production of Dolly, many other large mammals were cloned, including pigs, deer, horses and bulls. The attempt to clone argali (mountain sheep) did not produce viable embryos. The attempt to clone a banteng bull was more successful, as were the attempts to clone mouflon (a form of wild sheep), both resulting in viable offspring. The reprogramming process that cells need to go through during cloning is not perfect and embryos produced by nuclear transfer often show abnormal development. Making cloned mammals was highly inefficient; in 1996, Dolly was the only lamb that survived to adulthood from 277 attempts. By 2014, Chinese scientists were reported to have 70–80% success rates cloning pigs, and in 2016, a Korean company, Sooam Biotech, was producing 500 cloned embryos a day. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.
Cloning may have uses in preserving endangered species, and may become a viable tool for reviving extinct species. In January 2009, scientists from the Centre of Food Technology and Research of Aragon in northern Spain announced the cloning of the Pyrenean ibex, a form of wild mountain goat, which was officially declared extinct in 2000. Although the newborn ibex died shortly after birth due to physical defects in its lungs, it is the first time an extinct animal has been cloned, and may open doors for saving endangered and newly extinct species by resurrecting them from frozen tissue.
In July 2016, four identical clones of Dolly (Daisy, Debbie, Dianna, and Denise) were alive and healthy at nine years old.
Scientific American concluded in 2016 that the main legacy of Dolly has not been cloning of animals but in advances into stem cell research. Gene targeting was added in 2000, when researchers cloned female lamb Diana from sheep DNA altered to contain the human gene for alpha 1-antitrypsin. The human gene was specifically activated in the ewe’s mammary gland, so Diana produced milk containing human alpha 1-antitrypsin. After Dolly, researchers realised that ordinary cells could be reprogrammed to induced pluripotent stem cells, which can be grown into any tissue.
The first successful cloning of a primate species was reported in January 2018, using the same method which produced Dolly. Two identical clones of a macaque monkey, Zhong Zhong and Hua Hua, were created by researchers in China and were born in late 2017.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, again using this method, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.
Dolly in popular culture
In 2003, the Belgian artist Dominique Goblet published a short comic strip about Dolly the cloned sheep with the title “2004 Apparition de Dolly dans la campagne anglaise” (“2004: Appearance of Dolly in the English countryside”).
"Dolly The Sheep" was initially released on November 13, 2012, as a flash game developed by the small game development company Pozirk Games, in which Dolly the cloned sheep is being chased by evil scientists. For some time the game was available to play online as well as on mobile devices. As of June 14, 2023, it is only available online for desktop/laptop computers.
See also
In re Roslin Institute (Edinburgh) – US court decision that determined that Dolly could not be patented
List of cloned animals
References
External links
Dolly the Sheep at the National Museum of Scotland, Edinburgh
Cloning Dolly the Sheep Dolly the Sheep and the importance of animal research
Animal cloning and Dolly
Episode where several items appertaining to Dolly, including wool from a shearing and scientific instruments, were appraised.
1996 animal births
2003 animal deaths
1996 in biotechnology
1996 in Scotland
2003 in Scotland
Animal world record holders
Cloned sheep
Cloning
Individual animals in the United Kingdom
Collection of National Museums Scotland
History of Midlothian
Dolly Parton
Individual animals in Scotland
Individual taxidermy exhibits | Dolly (sheep) | [
"Engineering",
"Biology"
] | 2,041 | [
"Cloning",
"Genetic engineering"
] |
9,225 | https://en.wikipedia.org/wiki/Electronic%20paper | Electronic paper or intelligent paper, is a display device that reflects ambient light, mimicking the appearance of ordinary ink on paper – unlike conventional flat-panel displays which need additional energy to emit their own light. This may make them more comfortable to read, and provide a wider viewing angle than most light-emitting displays. The contrast ratio in electronic displays available as of 2008 approaches newspaper, and newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade.
Technologies include Gyricon, electrophoretics, electrowetting, interferometry, and plasmonics.
Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. Applications of e-paper include electronic shelf labels and digital signage, bus station time tables, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines.
Technologies
Gyricon
Electronic paper was first developed in the 1970s by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometers across. Each sphere is a Janus particle composed of negatively charged black plastic on one side and positively charged white plastic on the other (each bead is thus a dipole). The spheres are embedded in a transparent silicone sheet, with each sphere suspended in a bubble of oil so that it can rotate freely. The polarity of the voltage applied to each pair of electrodes then determines whether the white or black side is face-up, thus giving the pixel a white or black appearance.
At the FPD 2008 exhibition, Japanese company Soken demonstrated a wall with electronic wall-paper using this technology. In 2007, the Estonian company Visitret Displays was developing this kind of display using polyvinylidene fluoride (PVDF) as the material for the spheres, dramatically improving the video speed and decreasing the control voltage needed.
Electrophoretic
An electrophoretic display (EPD) forms images by rearranging charged pigment particles with an applied electric field.
In the simplest implementation of an EPD, titanium dioxide (titania) particles approximately one micrometer in diameter are dispersed in a hydrocarbon oil. A dark-colored dye is also added to the oil, along with surfactants and charging agents that cause the particles to take on an electric charge. This mixture is placed between two parallel, conductive plates separated by a gap of 10 to 100 micrometres. When a voltage is applied across the two plates, the particles migrate electrophoretically to the plate that bears the opposite charge from that on the particles. When the particles are located at the front (viewing) side of the display, it appears white, because the light is scattered back to the viewer by the high-index titania particles. When the particles are located at the rear side of the display, it appears dark, because the light is absorbed by the colored dye. If the rear electrode is divided into a number of small picture elements (pixels), then an image can be formed by applying the appropriate voltage to each region of the display to create a pattern of reflecting and absorbing regions.
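The switching speed of such a cell can be estimated from the electrophoretic drift relation v = μE, with field strength E = V/d across the plate gap. The short Python sketch below plugs in purely illustrative values (the mobility, drive voltage and gap are assumptions, not manufacturer data); it lands in the hundreds-of-milliseconds range typical of e-paper.

# Back-of-envelope electrophoretic switching time, v = mu * E.
# All numeric values are illustrative assumptions, not manufacturer data.
gap = 50e-6       # plate separation in metres (article: 10 to 100 micrometres)
voltage = 15.0    # assumed drive voltage in volts
mobility = 1e-9   # assumed particle electrophoretic mobility in m^2/(V*s)

field = voltage / gap           # electric field, V/m
velocity = mobility * field     # drift speed of a pigment particle, m/s
transit = gap / velocity        # time for a particle to cross the cell, s
print(f"E = {field:.1e} V/m, v = {velocity:.1e} m/s, t = {transit * 1e3:.0f} ms")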
EPDs are typically addressed using MOSFET-based thin-film transistor (TFT) technology. TFTs are often used to form a high-density image in an EPD.
A common application for TFT-based EPDs is e-readers. Electrophoretic displays are considered prime examples of the electronic paper category, because of their paper-like appearance and low power consumption. Examples of commercial electrophoretic displays include the high-resolution active matrix displays used in the Amazon Kindle, Barnes & Noble Nook, Sony Reader, Kobo eReader, and iRex iLiad e-readers. These displays are constructed from an electrophoretic imaging film manufactured by E Ink Corporation. A mobile phone that used the technology is the Motorola Fone.
Electrophoretic Display technology has also been developed by SiPix and Bridgestone/Delta. SiPix is now part of E Ink Corporation. The SiPix design uses a flexible 0.15 mm Microcup architecture, instead of E Ink's 0.04 mm diameter microcapsules. Bridgestone Corp.'s Advanced Materials Division cooperated with Delta Optoelectronics Inc. in developing Quick Response Liquid Powder Display technology.
Electrophoretic displays can be manufactured using the Electronics on Plastic by Laser Release (EPLaR) process, developed by Philips Research, to enable existing AM-LCD manufacturing plants to create flexible plastic displays.
Microencapsulated electrophoretic display
In the 1990s another type of electronic ink based on a microencapsulated electrophoretic display was conceived and prototyped by a team of undergraduates at MIT as described in their Nature paper. J.D. Albert, Barrett Comiskey, Joseph Jacobson, Jeremy Rubin and Russ Wilcox co-founded E Ink Corporation in 1997 to commercialize the technology. E Ink subsequently formed a partnership with Philips Components two years later to develop and market the technology. In 2005, Philips sold the electronic paper business as well as its related patents to Prime View International. "It has for many years been an ambition of researchers in display media to create a flexible low-cost system that is the electronic analog of paper. In this context, microparticle-based displays have long intrigued researchers. Switchable contrast in such displays is achieved by the electromigration of highly scattering or absorbing microparticles (in the size range 0.1–5 μm), quite distinct from the molecular-scale properties that govern the behavior of the more familiar liquid-crystal displays. Micro-particle-based displays possess intrinsic bistability, exhibit extremely low power d.c. field addressing and have demonstrated high contrast and reflectivity. These features, combined with a near-lambertian viewing characteristic, result in an 'ink on paper' look. But such displays have to date suffered from short lifetimes and difficulty in manufacture. Here we report the synthesis of an electrophoretic ink based on the microencapsulation of an electrophoretic dispersion. The use of a microencapsulated electrophoretic medium solves the lifetime issues and permits the fabrication of a bistable electronic display solely by means of printing. This system may satisfy the practical requirements of electronic paper."
This used tiny microcapsules filled with electrically charged white particles suspended in a colored oil. In early versions, the underlying circuitry controlled whether the white particles were at the top of the capsule (so it looked white to the viewer) or at the bottom of the capsule (so the viewer saw the color of the oil). This was essentially a reintroduction of the well-known electrophoretic display technology, but microcapsules meant the display could be made on flexible plastic sheets instead of glass.
One early version of the electronic paper consists of a sheet of very small transparent capsules, each about 40 micrometers across. Each capsule contains an oily solution containing black dye (the electronic ink), with numerous white titanium dioxide particles suspended within. The particles are slightly negatively charged, and each one is naturally white.
The screen holds microcapsules in a layer of liquid polymer, sandwiched between two arrays of electrodes, the upper of which is transparent. The two arrays are aligned to divide the sheet into pixels, and each pixel corresponds to a pair of electrodes situated on either side of the sheet. The sheet is laminated with transparent plastic for protection, resulting in an overall thickness of 80 micrometers, or twice that of ordinary paper.
The network of electrodes connects to display circuitry, which turns the electronic ink 'on' and 'off' at specific pixels by applying a voltage to specific electrode pairs. A negative charge to the surface electrode repels the particles to the bottom of local capsules, forcing the black dye to the surface and turning the pixel black. Reversing the voltage has the opposite effect. It forces the particles to the surface, turning the pixel white. A more recent implementation of this concept requires only one layer of electrodes beneath the microcapsules. These are commercially referred to as Active Matrix Electrophoretic Displays (AMEPD).
Reflective LCD
This technology is similar to a conventional LCD, but the backlight panel is replaced by a reflective surface.
A comparable effect can be obtained on backlit LCDs by disabling the backlight control in software or hardware.
Electrowetting
Electrowetting display (EWD) is based on controlling the shape of a confined water/oil interface by an applied voltage. With no voltage applied, the (colored) oil forms a flat film between the water and a hydrophobic (water-repellent) insulating coating of an electrode, resulting in a colored pixel. When a voltage is applied between the electrode and the water, the interfacial tension between the water and the coating changes. As a result, the stacked state is no longer stable, causing the water to move the oil aside. This makes a partly transparent pixel, or, if a reflective white surface is under the switchable element, a white pixel. Because of the small pixel size, the user only experiences the average reflection, which provides a high-brightness, high-contrast switchable element.
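The change of interfacial tension with voltage described above is conventionally captured by the Young-Lippmann relation, cos θ(V) = cos θ0 + CV²/(2γ), where θ0 is the resting contact angle, C the insulator capacitance per unit area and γ the water/oil interfacial tension. A rough Python sketch follows; every constant in it is an assumed, illustrative value, not a measured one.

import math

# Young-Lippmann electrowetting sketch; all constants are assumptions.
theta0 = math.radians(120)  # resting contact angle on the hydrophobic coating
gamma = 0.040               # water/oil interfacial tension, N/m
c_per_area = 6e-5           # insulator capacitance per unit area, F/m^2

def contact_angle(volts):
    cos_t = math.cos(theta0) + c_per_area * volts ** 2 / (2 * gamma)
    return math.degrees(math.acos(max(min(cos_t, 1.0), -1.0)))

for v in (0, 10, 20, 30):
    print(f"{v:2d} V -> contact angle {contact_angle(v):5.1f} deg")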
Displays based on electrowetting provide several attractive features. The switching between white and colored reflection is fast enough to display video content. It is a low-power, low-voltage technology, and displays based on the effect can be made flat and thin. The reflectivity and contrast are better than or equal to other reflective display types and approach the visual qualities of paper. In addition, the technology offers a unique path toward high-brightness full-color displays, leading to displays that are four times brighter than reflective LCDs and twice as bright as other emerging technologies. Instead of using red, green, and blue (RGB) filters or alternating segments of the three primary colors, which effectively result in only one-third of the display reflecting light in the desired color, electrowetting allows for a system in which one sub-pixel can switch two different colors independently.
This results in the availability of two-thirds of the display area to reflect light in any desired color. This is achieved by building up a pixel with a stack of two independently controllable colored oil films plus a color filter.
The colors are cyan, magenta, and yellow, which is a subtractive system, comparable to the principle used in inkjet printing. Compared to LCD, brightness is gained because no polarisers are required.
Electrofluidic
An electrofluidic display is a variation of an electrowetting display that places an aqueous pigment dispersion inside a tiny reservoir. The reservoir occupies less than 5-10% of the viewable pixel area, so the pigment is substantially hidden from view. Voltage is used to electromechanically pull the pigment out of the reservoir and spread it as a film directly behind the viewing substrate. As a result, the display takes on color and brightness similar to that of conventional pigments printed on paper. When the voltage is removed, liquid surface tension causes the pigment dispersion to recoil rapidly into the reservoir. The technology can potentially provide greater than 85% white-state reflectance for electronic paper.
The core technology was invented at the Novel Devices Laboratory at the University of Cincinnati and there are working prototypes developed by collaboration with Sun Chemical, Polymer Vision and Gamma Dynamics.
It has a wide margin in critical aspects such as brightness, color saturation and response time.
Because the optically active layer can be less than 15 micrometres thick, there is strong potential for rollable displays.
Interferometric modulator (Mirasol)
Interferometric modulator technology is used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid-crystal displays (LCDs).
Plasmonic electronic display
Plasmonic nanostructures with conductive polymers have also been suggested as one kind of electronic paper. The material has two parts. The first part is a highly reflective metasurface made of metal-insulator-metal films tens of nanometers thick, including nanoscale holes. The metasurfaces can reflect different colors depending on the thickness of the insulator. The standard RGB color scheme can be used as pixels for full-color displays. The second part is a polymer whose optical absorption can be controlled by an electrochemical potential. After the polymer is grown on the plasmonic metasurfaces, the reflection of the metasurfaces can be modulated by the applied voltage. This technology offers a broad range of colors, high polarization-independent reflectance (>50%), strong contrast (>30%), fast response times (hundreds of ms), and long-term stability. In addition, it has ultralow power consumption (<0.5 mW/cm2) and potential for high resolution (>10,000 dpi). Since the ultrathin metasurfaces are flexible and the polymer is soft, the whole system can be bent. Desired future improvements for this technology include bistability, cheaper materials and implementation with TFT arrays.
Other technologies
Other research efforts into e-paper have involved using organic transistors embedded into flexible substrates, including attempts to build them into conventional paper.
Simple color e-paper consists of a thin colored optical filter added to the monochrome technology described above. The array of pixels is divided into triads, typically consisting of the standard cyan, magenta and yellow, in the same way as CRT monitors (although using subtractive primary colors as opposed to additive primary colors). The display is then controlled like any other electronic color display.
History
E Ink Corporation of E Ink Holdings Inc. released the first colored E Ink displays to be used in a marketed product. The Ectaco jetBook Color was released in 2012 as the first colored electronic ink device, which used E Ink's Triton display technology. E Ink in early 2015 also announced another color electronic ink technology called Prism. This new technology is a color changing film that can be used for e-readers, but Prism is also marketed as a film that can be integrated into architectural design such as "wall, ceiling panel, or entire room instantly." The disadvantage of these current color displays is that they are considerably more expensive than standard E Ink displays. The jetBook Color costs roughly nine times more than other popular e-readers such as the Amazon Kindle. As of January 2015, Prism had not been announced to be used in the plans for any e-reader devices.
Applications
Several companies are simultaneously developing electronic paper and ink. While the technologies used by each company provide many of the same features, each has its own distinct technological advantages. All electronic paper technologies face the following general challenges:
A method for encapsulation
An ink or active material to fill the encapsulation
Electronics to activate the ink
Electronic ink can be applied to flexible or rigid materials. For flexible displays, the base requires a thin, flexible material tough enough to withstand considerable wear, such as extremely thin plastic. The method of how the inks are encapsulated and then applied to the substrate is what distinguishes each company from others. These processes are complex and are carefully guarded industry secrets. Nevertheless, making electronic paper is less complex and costly than LCDs.
There are many approaches to electronic paper, with many companies developing technology in this area. Other technologies being applied to electronic paper include modifications of liquid-crystal displays, electrochromic displays, and the electronic equivalent of an Etch A Sketch at Kyushu University. Advantages of electronic paper include low power usage (power is only drawn when the display is updated), flexibility and better readability than most displays. Electronic ink can be printed on any surface, including walls, billboards, product labels and T-shirts. The ink's flexibility would also make it possible to develop rollable displays for electronic devices.
Wristwatches
In December 2005, Seiko released the first electronic ink based watch, the Spectrum SVRD001 wristwatch, which has a flexible electrophoretic display; in March 2010 Seiko released a second generation of this watch with an active matrix display. The Pebble smart watch (2013) uses a low-power memory LCD manufactured by Sharp for its e-paper display.
In 2019, Fossil launched a hybrid smartwatch called the Hybrid HR, integrating an always on electronic ink display with physical hands and dial to simulate the look of a traditional analog watch.
E-book readers
In 2004, Sony released the Librié in Japan, the first e-book reader with an electronic paper E Ink display. In September 2006, Sony released the PRS-500 Sony Reader e-book reader in the USA. On October 2, 2007, Sony announced the PRS-505, an updated version of the Reader. In November 2008, Sony released the PRS-700BC, which incorporated a backlight and a touchscreen.
Mobile phones
Motorola's low-cost mobile phone, the Motorola F3, uses an alphanumeric black-and-white electrophoretic display.
The Samsung Alias 2 mobile phone incorporates electronic ink from E Ink into the keypad, which allows the keypad to change character sets and orientation while in different display modes.
Smartphones
On December 12, 2012, Yota Devices announced the first "YotaPhone" prototype, a unique double-display smartphone, which was later released in December 2013. It has a 4.3-inch HD LCD on the front and an electronic ink display on the back.
In May and June 2020, Hisense released the Hisense A5C and A5 Pro CC, the first color electronic ink smartphones. Each has a single color display with a togglable front light and runs Android 9 or Android 10.
Computer monitors
Electronic paper is used on computer monitors like the 13.3 inch Dasung Paperlike 3 HD and 25.3 inch Paperlike 253.
Laptop
Some laptops like Lenovo ThinkBook Plus use e-paper as a secondary screen.
Other common laptops use reflective LCD panels with no backlight.
Furthermore, some operating systems (e.g. Xubuntu, Kali Linux) provide a control to dim the backlight of internal LCD monitors to 0%; the liquid crystals keep working, so the display is lit by ambient light as if it were paper.
In late 2007, Amazon began producing and marketing the Amazon Kindle, an e-book reader with an e-paper display. In February 2009, Amazon released the Kindle 2, and in May 2009 the larger Kindle DX was announced. In July 2010 the third-generation Kindle was announced, with notable design changes. The fourth generation of Kindle, called Touch, was announced in September 2011; it was the Kindle's first departure from keyboards and page-turn buttons in favor of touchscreens. In September 2012, Amazon announced the fifth generation of the Kindle, called the Paperwhite, which incorporates an LED frontlight and a higher-contrast display.
In November 2009, Barnes and Noble launched the Barnes & Noble Nook, running an Android operating system. It differs from other e-readers in having a replaceable battery, and a separate touch-screen color LCD below the main electronic paper reading screen.
In 2017, Sony and reMarkable offered e-books tailored for writing with a smart stylus.
In 2020, Onyx released the first frontlit 13.3 inch electronic paper Android tablet, the Boox Max Lumi. At the end of the same year, Bigme released the first 10.3 inch color electronic paper Android tablet, the Bigme B1 Pro. This was also the first large electronic paper tablet to support 4G cellular data.
Newspapers
In February 2006, the Flemish daily De Tijd distributed an electronic version of the paper to select subscribers in a limited marketing study, using a pre-release version of the iRex iLiad. This was the first recorded application of electronic ink to newspaper publishing.
The French daily Les Échos announced the official launch of an electronic version of the paper on a subscription basis in September 2007. Two offers were available, combining a one-year subscription and a reading device. The offer included either a light (176g) reading device (adapted for Les Echos by Ganaxa) or the iRex iLiad. Two different processing platforms were used to deliver readable information of the daily, one based on the newly developed GPP electronic ink platform from Ganaxa, and the other one developed internally by Les Echos.
Displays embedded in smart cards
Flexible display cards enable financial payment cardholders to generate a one-time password to reduce online banking and transaction fraud. Electronic paper offers a flat and thin alternative to existing key fob tokens for data security. The world's first ISO compliant smart card with an embedded display was developed by Innovative Card Technologies and nCryptone in 2005. The cards were manufactured by Nagra ID.
Status displays
Some devices, like USB flash drives, have used electronic paper to display status information, such as available storage space. Once the image on the electronic paper has been set, it requires no power to maintain, so the readout can be seen even when the flash drive is not plugged in.
Electronic shelf labels
E-paper based electronic shelf labels (ESL) are used to digitally display the prices of goods at retail stores. Electronic-paper-based labels are updated via two-way infrared or radio technology and powered by a rechargeable coin cell.
Some variants use ZBD (zenithal bistable display) which is more similar to LCD but does not need power to retain an image.
Public transport timetables
E-paper displays at bus or tram stops can be remotely updated. Compared to LED or liquid-crystal displays (LCDs), they consume less energy, and the text or graphics remain visible during a power failure. Unlike LCDs, they are easily visible in full sunlight.
Digital signage
Because of its energy-saving properties, electronic paper has proved a technology suited to digital signage applications.
Electronic tags
Typically, e-paper electronic tags integrate e-ink technology with wireless interfaces like NFC or UHF. They are most commonly used as employees' ID cards or as production labels to track manufacturing changes and status. E-paper tags are also increasingly being used as shipping labels, especially in the case of reusable boxes.
An interesting feature provided by some e-paper tag manufacturers is a batteryless design: the power needed to update the display's content is provided wirelessly, and the module itself does not contain any battery.
Other
Other proposed applications include clothes, digital photo frames, information boards, and keyboards. Keyboards with dynamically changeable keys are useful for less represented languages, non-standard keyboard layouts such as Dvorak, or for special non-alphabetical applications such as video editing or games.
The reMarkable is a writing tablet for reading and taking notes.
See also
E-book
Embedded controller
Electrofluidic
Flexible display
Flexible electronics
Hardware Attached on Top (HAT)
History of display technology
Raspberry Pi/Arduino
Raw display
Serial Peripheral Interface
References
Further reading
Electric paper, New Scientist, 2003
E-paper may offer video images, New Scientist, 2003
Paper comes alive New Scientist, 2003
Most flexible electronic paper yet revealed, New Scientist, 2004
Roll-up digital displays move closer to market New Scientist, 2005
External links
Wired article on E Ink-Philips partnership, and background
MIT ePaper Project
Fujitsu Develops World's First Film Substrate-based Bendable Color Electronic Paper featuring Image Memory Function
| Electronic paper | [
"Technology",
"Engineering"
] | 4,913 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering",
"Display technology"
] |
9,256 | https://en.wikipedia.org/wiki/Enigma%20machine | The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.
The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plaintext is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.
The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message.
Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.
History
The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.
Several Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.
Breaking Enigma
Hans-Thilo Schmidt was a German who spied for the French, obtaining access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to Poland. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. Rejewski used the French supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.
Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when that year the Germans added two more rotors, ten times as many bomby would have been needed to read the traffic.
On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).
In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.
Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.
During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort.
Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.
The Abwehr used different versions of Enigma machines. In November 1942, during Operation Torch, a machine was captured which had no plugboard and the three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, plus a plate on the left acted as a fourth rotor.
The Abwehr code had been broken on 8 December 1941 by Dilly Knox. Agents sent messages to the Abwehr in a simple code which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled the Double-Cross System to operate. From October 1944, the German Abwehr used the Schlüsselgerät 41 in limited quantities.
Design
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press, and a series of lamps, one for each letter. These design features are the reason that the Enigma machine was originally referred to as the rotor-based cipher machine during its intellectual inception in 1915.
Electrical pathway
An electrical pathway is a route for current to travel. By changing the pathway taken by the current at each key press, the Enigma machine scrambled messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.
Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.
The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.
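The round trip can be illustrated with plain table lookups. The Python snippet below freezes one configuration (the widely documented wirings of rotors I-III and reflector B, every rotor at position A, no plugboard, no stepping) and traces a key press in and back out; the result is reciprocal and never maps a letter to itself, for the reasons discussed under Reflector below. It is a sketch of a single static path, not a full machine.

# One frozen Enigma configuration as a chain of substitutions.
# Wirings are the documented rotor I-III and reflector B tables;
# rotor positions are fixed at 'A' and stepping is ignored here.
A = ord("A")
ROTORS = ["BDFHJLCPRTXVZNYEIWGAKMUSQO",   # right-hand rotor (III)
          "AJDKSIRUXBLHWTMCQGZNPYFVOE",   # middle rotor (II)
          "EKMFLGDQVZNTOWYHXUSPAIBRCJ"]   # left-hand rotor (I)
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # UKW-B

def encipher(ch):
    c = ord(ch) - A
    for wiring in ROTORS:                 # entry wheel towards the reflector
        c = ord(wiring[c]) - A
    c = ord(REFLECTOR[c]) - A
    for wiring in reversed(ROTORS):       # back out along a different path
        c = wiring.index(chr(c + A))
    return chr(c + A)

for ch in "ABC":
    out = encipher(ch)
    assert encipher(out) == ch and out != ch   # reciprocal, never self
    print(ch, "->", out)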
Rotors
The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet — typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.
By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.
Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector.
Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.
The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.
The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
Stepping
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.
Turnover
The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.
The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections. The window letters at which each rotor caused the next wheel to move were as follows:
Rotor I: Q
Rotor II: E
Rotor III: V
Rotor IV: J
Rotor V: Z
Rotors VI, VII and VIII: Z and M
The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.
With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
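A minimal sketch of these stepping rules, tracking only the window letters of rotors I, II and III (middle-rotor turnover letter E, right-rotor turnover letter V), reproduces both the double-step anomaly and the 16,900-step period; the rotor wiring is irrelevant to the motion, so it is omitted.

# Window-letter stepping for rotor order I, II, III.
MIDDLE_NOTCH = "E"   # rotor II
RIGHT_NOTCH = "V"    # rotor III; the left rotor's notch never engages a pawl

def bump(ch):
    return chr((ord(ch) - 65 + 1) % 26 + 65)

def step(left, middle, right):
    if middle == MIDDLE_NOTCH:           # middle pawl engages: double step
        middle, left = bump(middle), bump(left)
    elif right == RIGHT_NOTCH:
        middle = bump(middle)
    right = bump(right)
    return left, middle, right

state = ("A", "D", "U")                  # classic double-step demonstration
for _ in range(4):
    state = step(*state)
    print("".join(state))                # ADV, AEW, BFX, BFY

seen = {}
state, n = ("A", "A", "A"), 0
while state not in seen:                 # measure the cycle length
    seen[state] = n
    state, n = step(*state), n + 1
print(n - seen[state])                   # 16900 = 26 * 25 * 26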
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.
A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches differed for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D it also allowed the internal wiring to be reconfigured.
Entry wheel
The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.
Reflector
With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.
In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels.
In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.
Plugboard
The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.
A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.
Accessories
Other features made various Enigma machines more secure or more convenient.
Schreibmax
Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.
Fernlesegerät
Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.
Uhr
In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.
Mathematical analysis
The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let $P$ denote the plugboard transformation, $U$ denote that of the reflector (so $U = U^{-1}$), and $L$, $M$, $R$ denote those of the left, middle and right rotors respectively. Then the encryption $E$ can be expressed as

$$E = P R M L U L^{-1} M^{-1} R^{-1} P^{-1}.$$

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor $R$ is rotated $i$ positions, the transformation becomes

$$\rho^{i} R \rho^{-i},$$

where $\rho$ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as $j$ and $k$ rotations of $M$ and $L$. The encryption transformation can then be described as

$$E = P \left(\rho^{i} R \rho^{-i}\right) \left(\rho^{j} M \rho^{-j}\right) \left(\rho^{k} L \rho^{-k}\right) U \left(\rho^{k} L^{-1} \rho^{-k}\right) \left(\rho^{j} M^{-1} \rho^{-j}\right) \left(\rho^{i} R^{-1} \rho^{-i}\right) P^{-1}.$$
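A small numeric check of the conjugation step, with rotor I's historical wiring standing in for $R$ (the identity holds for any wiring), composing permutations in the left-to-right order used above:

# Verify that rotating the rotor i positions acts as rho^i R rho^-i.
R = [ord(c) - 65 for c in "EKMFLGDQVZNTOWYHXUSPAIBRCJ"]   # rotor I

def compose(p, q):
    """Permutation applying p first, then q."""
    return [q[p[c]] for c in range(26)]

rho = [(c + 1) % 26 for c in range(26)]        # A->B, B->C, ..., Z->A
rho_inv = [(c - 1) % 26 for c in range(26)]

for i in range(26):
    # The physically rotated rotor shifts where the signal enters and exits:
    rotated = [(R[(c + i) % 26] - i) % 26 for c in range(26)]
    conj = list(range(26))                     # build rho^i R rho^-i
    for _ in range(i):
        conj = compose(conj, rho)
    conj = compose(conj, R)
    for _ in range(i):
        conj = compose(conj, rho_inv)
    assert conj == rotated
print("rho^i R rho^-i matches the rotated rotor for every i")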
Combining three rotors from a set of five, with each of the three rotors set to one of 26 positions, and with the plugboard connecting ten pairs of letters, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion, or about 67 bits); the factors are broken down below, with a short computational check after the list.
Choose 3 rotors from a set of 5 rotors = 5 x 4 x 3 = 60
26 positions per rotor = 26 x 26 x 26 = 17,576
Plugboard = 26! / ( 6! x 10! x 2^10) = 150,738,274,937,250
Multiply each of the above = 158,962,555,217,826,360,000
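The same arithmetic can be reproduced with the Python standard library:

from math import factorial, perm, log2

rotor_choice = perm(5, 3)             # 5 x 4 x 3 = 60 ordered picks
rotor_positions = 26 ** 3             # 17,576
pairs = 10                            # plugboard pairs in normal use
plugboard = factorial(26) // (factorial(26 - 2 * pairs)
                              * factorial(pairs) * 2 ** pairs)
total = rotor_choice * rotor_positions * plugboard
print(f"{total:,}")                   # 158,962,555,217,826,360,000
print(f"about {log2(total):.0f} bits")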
Operation
Basic operation
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.
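A compact simulator makes this procedure concrete. The sketch below assumes the widely documented wirings of Army Enigma I rotors I-III and reflector B, and implements the plugboard, ring settings and double-stepping described earlier; running it demonstrates reciprocity, i.e. a second machine with identical settings turns the ciphertext back into the plaintext. It is an illustration, not a reference implementation.

# A compact illustrative Enigma I: rotors I-III, reflector B, plugboard,
# ring settings and double-stepping. Military entry wheel is the identity.
A = ord("A")

class Rotor:
    def __init__(self, wiring, notch, ring=0, pos=0):
        self.fwd = [ord(c) - A for c in wiring]
        self.bwd = [self.fwd.index(c) for c in range(26)]
        self.notch = ord(notch) - A   # window letter at which the pawl engages
        self.ring = ring              # Ringstellung, 0 = 'A'
        self.pos = pos                # current window letter, 0 = 'A'

    def at_notch(self):
        return self.pos == self.notch

    def step(self):
        self.pos = (self.pos + 1) % 26

    def through(self, c, table):
        shift = self.pos - self.ring
        return (table[(c + shift) % 26] - shift) % 26

class Enigma:
    def __init__(self, rotors, reflector, plugpairs=""):
        self.left, self.middle, self.right = rotors
        self.reflector = [ord(c) - A for c in reflector]
        self.plug = list(range(26))
        for pair in plugpairs.split():          # e.g. "EQ" swaps E and Q
            a, b = ord(pair[0]) - A, ord(pair[1]) - A
            self.plug[a], self.plug[b] = b, a

    def _step(self):
        if self.middle.at_notch():              # double step of the middle rotor
            self.middle.step()
            self.left.step()
        elif self.right.at_notch():
            self.middle.step()
        self.right.step()

    def press(self, ch):
        self._step()                            # rotors move before contact
        c = self.plug[ord(ch) - A]
        for rotor in (self.right, self.middle, self.left):
            c = rotor.through(c, rotor.fwd)
        c = self.reflector[c]
        for rotor in (self.left, self.middle, self.right):
            c = rotor.through(c, rotor.bwd)
        return chr(self.plug[c] + A)

def fresh_machine():
    return Enigma([Rotor("EKMFLGDQVZNTOWYHXUSPAIBRCJ", "Q"),    # I
                   Rotor("AJDKSIRUXBLHWTMCQGZNPYFVOE", "E"),    # II
                   Rotor("BDFHJLCPRTXVZNYEIWGAKMUSQO", "V")],   # III
                  "YRUHQSLDPXNGOKMIEBFZCWVJAT",                 # UKW-B
                  plugpairs="EQ AY")

ciphertext = "".join(fresh_machine().press(c) for c in "ANXANXANX")
recovered = "".join(fresh_machine().press(c) for c in ciphertext)
print(ciphertext, recovered)          # recovered == "ANXANXANX"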
Details
In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.
An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:
Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted.
Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring.
Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together.
In very late versions, the wiring of the reconfigurable reflector.
Starting position of the rotors (Grundstellung) – chosen by the operator, should be different for each message.
For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way: rotor selection and order, ring positions, plugboard connections and starting rotor positions all had to be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:
Wheel order: IV, II, V
Ring settings: 15, 23, 26
Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD
Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW
Indicator groups: lsa zbw vcj rxn
Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3 × 10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.
Indicator
Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth), would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.
One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, this producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message settings, EIN in this example, and then type the plaintext of the message.
At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message.
This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between the first and fourth, second and fifth, and third and sixth characters. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".
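The leak created by the doubled key can be sketched computationally. In the Python sketch below, the six per-position cipher maps E1..E6 are stand-ins: random fixed-point-free involutions rather than the real machine's maps (the reflector guaranteed the real maps were such involutions); everything else follows the procedure described above. Pairing position 1 with position 4 across intercepted indicators exposes the product of two of the day's maps without any knowledge of the keys.

# Sketch of the pre-war indicator weakness; the maps are invented stand-ins.
import random
random.seed(1)

def random_involution():
    letters = list(range(26))
    random.shuffle(letters)
    inv = list(range(26))
    for a, b in zip(letters[::2], letters[1::2]):
        inv[a], inv[b] = b, a
    return inv

E = [random_involution() for _ in range(6)]    # positions 1..6 of the day

# Intercepted indicators: unknown 3-letter message keys enciphered twice.
keys = [[random.randrange(26) for _ in range(3)] for _ in range(60)]
indicators = [[E[i][k[i % 3]] for i in range(6)] for k in keys]

# From ciphertext alone, position 1 paired with position 4 exposes the
# product E4 o E1: c4 = E4(k) = E4(E1(c1)), because E1 is its own inverse.
product = {ind[0]: ind[3] for ind in indicators}
assert all(E[3][E[0][c1]] == c4 for c1, c4 in product.items())
print(f"{len(product)} of 26 values of the product E4*E1 recovered from traffic")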
During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings.
This procedure was used by Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.
Additional details
The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a full stop.
Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ.
The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.
The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters.
The Kriegsmarine used four-character groups and counted those groups.
Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.
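As a rough illustration, the Army/Luftwaffe conventions above can be expressed as a small preprocessing step. This is only a sketch covering the substitutions named in this section; real operators applied many further ad-hoc variations (such as the MINBOOT/MMMBOOT respellings), which no fixed rule captures.

```python
def prepare_army_message(text):
    """Apply the Army/Luftwaffe plaintext conventions described above."""
    text = text.upper()
    text = text.replace("CH", "Q")       # ACHT -> AQT, RICHTUNG -> RIQTUNG
    text = text.replace("?", "FRAGE")    # Wehrmacht question mark
    text = text.replace(",", "ZZ")       # Wehrmacht comma
    text = text.replace(".", "X")        # X as full stop
    text = "".join(c for c in text if c.isalpha())   # drop spaces and the rest
    # At most 250 letters per part; each part would get its own message key.
    parts = [text[i:i + 250] for i in range(0, len(text), 250)]
    # Transmit in five-letter groups.
    return [" ".join(p[i:i + 5] for i in range(0, len(p), 5)) for p in parts]

print(prepare_army_message("Richtung acht?"))   # ['RIQTU NGAQT FRAGE']
```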
Example enciphering process
The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as
LUSHQOXDMZNAIKFREPCYBWVGTJ
and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in
D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ
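This notation translates directly into code. The following sketch (the function names are ours, not standard) stores the configuration shown above as a 26-letter string and reproduces both the substitution and the highlight notation.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
MAPPING = "LUSHQOXDMZNAIKFREPCYBWVGTJ"   # the example configuration above

def encipher(c, mapping=MAPPING):
    """Look up the substitute for c at c's position in the alphabet."""
    return mapping[ALPHABET.index(c)]

def show(c, mapping=MAPPING):
    """Render the substitution in the highlight notation used in the text."""
    i = ALPHABET.index(c)
    return f"{c} > {mapping[:i]}({mapping[i]}){mapping[i + 1:]}"

assert encipher("D") == "H"
print(show("D"))    # D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ
```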
Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message" to
RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ
can be represented as
0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26
0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01
0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02
0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03
0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04
0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05
0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06
0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07
0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08
0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09
0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10
0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11
0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12
0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13
0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14
0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15
0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16
0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17
0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18
0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19
0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20
0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21
0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22
0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23
0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24
0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25
0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26
0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01
0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02
0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03
0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04
0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05
where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.
The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:
G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ
P EFMQAB(G)UINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
1 OFRJVM(A)ZHQNBXPYKCULGSWETDI N 03 VIII
2 (N)UKCHVSMDGTZQFYEWPIALOXRJB U 17 VI
3 XJMIYVCARQOWH(L)NDSUFKGBEPZT D 15 V
4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ C 25 β
R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q c
4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK β
3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY V
2 TZDIPNJESYCUHAVRMXGKB(F)QWOL VI
1 GLQYW(B)TIZDPSFKANJCUXREVMOH VIII
P E(F)MQABGUINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
F < KPTXIG(F)MESAUHYQBOVJCLRZDNW
Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.
This model has 4 rotors (lines 1 through 4) and the reflector (line R) also permutes (garbles) letters.
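The whole-machine mapping is then simply the composition of the stage mappings. The sketch below chains the eleven substitution strings from the expansion above (with the highlight parentheses removed); running it reproduces the overall result G → F. The stage labels are informal, not official designations.

```python
# Each stage of this key press as a 26-letter substitution string,
# in the order the signal traverses the machine (taken from the table above).
STAGES = [
    ("plugboard", "EFMQABGUINKXCJORDPZTHWVLYS"),
    ("rotor VIII", "OFRJVMAZHQNBXPYKCULGSWETDI"),
    ("rotor VI", "NUKCHVSMDGTZQFYEWPIALOXRJB"),
    ("rotor V", "XJMIYVCARQOWHLNDSUFKGBEPZT"),
    ("rotor beta", "QUNGALXEPKZYRDSOFTVCMBIHWJ"),
    ("reflector", "RDOBJNTKVEHMLFCWZAXGYIPSUQ"),
    ("beta, return", "EVTNHQDXWZJFUCPIAMORBSYGLK"),
    ("V, return", "HVGPWSUMDBTNCOKXJIQZRFLAEY"),
    ("VI, return", "TZDIPNJESYCUHAVRMXGKBFQWOL"),
    ("VIII, return", "GLQYWBTIZDPSFKANJCUXREVMOH"),
    ("plugboard", "EFMQABGUINKXCJORDPZTHWVLYS"),
]

def encipher(c):
    """Pass one character through every stage in turn."""
    for name, mapping in STAGES:
        c = mapping[ord(c) - ord("A")]
    return c

assert encipher("G") == "F"   # matches the worked example above
```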
Models
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.
An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.
Commercial Enigma
On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.
Enigma Handelsmaschine (1923)
Chiffriermaschinen AG began advertising a rotor machine, the Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about 50 kg.
Schreibende Enigma (1924)
This was also a model with a typewriter. There were a number of problems associated with the printer, and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.
Glühlampenmaschine, Enigma A (1924)
The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version.
The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models: before the next key press, the operator had to push a button to advance the right rotor one step.
Enigma B (1924)
Enigma model B was introduced late in 1924, and was of a similar construction. While bearing the Enigma name, both models A and B were quite unlike later versions: they differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective; hence the machine was about one-eighth the price of its predecessor.
Enigma C (1926)
Model C was the third model of the so-called "glowlamp Enigmas" (after A and B) and it again lacked a typewriter.
Enigma D (1927)
The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, the United Kingdom, Japan, Italy, Spain, the United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. The Enigma D also pioneered the "QWERTZ" keyboard layout that later became standard on German keyboards; it is very similar to the American QWERTY layout.
"Navy Cipher D"
Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.
Enigma H (1929)
There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.
Enigma K
The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.
Military Enigma
The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber.
Funkschlüssel C
The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.
The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.
Enigma G (1928–1930)
By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G.
The Abwehr used the Enigma G. This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.
Wehrmacht Enigma I (1930–1938)
Enigma machine G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II.
The major difference between Enigma I (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.
Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28×34×15 cm and weighed around 12 kg.
In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.
M3 (1934)
By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel M or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.
Two extra rotors (1938)
In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.
M4 (1942)
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.
Surviving machines
The effort to break the Enigma was not disclosed until 1973. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.
The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. The Deutsches Spionagemuseum in Berlin also showcases two military variants. Enigma machines are also exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Józef Piłsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.
In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored, and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England.
In Canada, a Swiss Army issue Enigma-K, is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.
Occasionally, Enigma machines are sold at auction; prices in recent years have ranged from US$40,000 up to a record US$547,500 in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.
A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors.
In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.
In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War, because, though the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.
The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.
On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea), believed to be from a scuttled U-boat. It will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein.
An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023.
Derivatives
The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor machine, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents and even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, cryptologist William Friedman designed the M-325 machine, starting in 1936, which is logically similar.
Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.
A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.
See also
Alastair Denniston
Arlington Hall
Arne Beurling
Beaumanor Hall, a stately home used during the Second World War for military intelligence
Cryptanalysis of the Enigma
Erhard Maertens—investigated Enigma security
Erich Fellgiebel
ECM Mark II—cipher machine used by the Americans in the Second World War
Fritz Thiele
Gisbert Hasenjaeger—responsible for Enigma security
United States Naval Computing Machine Laboratory
Typex—cipher machine used by the British in the Second World War, based on the principles of the commercial Enigma machine
Further reading
Heath, Nick. "Hacking the Nazis: The secret story of the women who broke Hitler's codes", TechRepublic, 27 March 2015
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part I", Cryptologia 25(2), April 2001, pp. 101–141.
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part II", Cryptologia 25(3), July 2001, pp. 177–212.
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part III", Cryptologia 25(4), October 2001, pp. 296–310.
Perera, Tom. The Story of the ENIGMA: History, Technology and Deciphering, 2nd Edition, CD-ROM, 2004, Artifax Books, sample pages
Ratcliffe, Rebecca. "Searching for Security: The German Investigations into Enigma's Security", Intelligence and National Security 14(1), 1999 (Special Issue), pp. 146–167.
Rejewski, Marian. "How Polish Mathematicians Deciphered the Enigma" , Annals of the History of Computing 3, 1981. This article is regarded by Andrew Hodges, Alan Turing's biographer, as "the definitive account" (see Hodges' Alan Turing: The Enigma, Walker and Company, 2000 paperback edition, p. 548, footnote 4.5).
Ulbricht, Heinz. Enigma Uhr, Cryptologia, 23(3), April 1999, pp. 194–205.
Untold Story of Enigma Code-Breaker — The Ministry of Defence (U.K.)
External links
Gordon Corera, Poland's overlooked Enigma codebreakers, BBC News Magazine, 4 July 2014
Long-running list of places with Enigma machines on display
Bletchley Park National Code Centre Home of the British codebreakers during the Second World War
Enigma machines on the Crypto Museum Web site
Pictures of a four-rotor naval enigma, including Flash (SWF) views of the machine
Enigma Pictures and Demonstration by NSA Employee at RSA
Kenngruppenheft
Process of building an Enigma M4 replica
Breaking German Navy Ciphers
Broken stream ciphers
Cryptographic hardware
Encryption devices
Military communications of Germany
Military equipment introduced in the 1920s
Products introduced in 1918
Rotor machines
Signals intelligence of World War II
World War II military equipment of Germany | Enigma machine | [
"Physics",
"Technology"
] | 12,736 | [
"Physical systems",
"Machines",
"Rotor machines"
] |
9,257 | https://en.wikipedia.org/wiki/Enzyme | Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties.
Enzymes are known to catalyze more than 5,000 biochemical reaction types.
Other biocatalysts are catalytic RNA molecules, also called ribozymes. They are sometimes described as a type of enzyme rather than being like an enzyme, but even in the decades since ribozymes' discovery in 1980–1982, the word enzyme alone often means the protein type specifically (as is used in this article).
An enzyme's specificity comes from its unique three-dimensional structure.
Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.
Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.
Etymology and history
By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified.
French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells."
In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from the Greek ἔνζυμον, "in leaven", to describe this process. The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms.
Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or with the type of reaction (e.g., DNA polymerase forms DNA polymers).
The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry.
The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail.
Classification and nomenclature
Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity.
Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes.
The International Union of Biochemistry and Molecular Biology has developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity.
The top-level classification is:
EC 1, Oxidoreductases: catalyze oxidation/reduction reactions
EC 2, Transferases: transfer a functional group (e.g. a methyl or phosphate group)
EC 3, Hydrolases: catalyze the hydrolysis of various bonds
EC 4, Lyases: cleave various bonds by means other than hydrolysis and oxidation
EC 5, Isomerases: catalyze isomerization changes within a single molecule
EC 6, Ligases: join two molecules with covalent bonds.
EC 7, Translocases: catalyze the movement of ions or molecules across membranes, or their separation within membranes.
These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1).
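The hierarchy is easy to work with programmatically. The following is a small illustrative sketch; the function and dictionary names are ours, not part of any official nomenclature software.

```python
EC_CLASSES = {
    1: "Oxidoreductases", 2: "Transferases", 3: "Hydrolases", 4: "Lyases",
    5: "Isomerases", 6: "Ligases", 7: "Translocases",
}

def describe_ec(ec_number):
    """Split an EC number into its four increasingly specific levels."""
    c1, c2, c3, c4 = (int(n) for n in ec_number.split("."))
    return {
        "class": f"EC {c1} ({EC_CLASSES[c1]})",
        "subclass": f"EC {c1}.{c2}",
        "sub-subclass": f"EC {c1}.{c2}.{c3}",
        "serial number": f"EC {c1}.{c2}.{c3}.{c4}",
    }

# Hexokinase: a transferase (EC 2) moving a phosphorus-containing group
# (EC 2.7) to an alcohol group as acceptor (EC 2.7.1).
print(describe_ec("2.7.1.1"))
```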
Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam.
Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria where they can replace endogenous genes of the same function, leading to non-homologous gene displacement.
Structure
Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate.
Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site.
In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity.
A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome which is a complex of protein and catalytic RNA components.
Mechanism
Substrate binding
Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to what substrates they bind and then the chemical reaction catalysed. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific.
Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.
Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function.
"Lock and key" model
To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve.
Induced fit model
In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined.
Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.
Catalysis
Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, the Gibbs free energy of activation):
By stabilizing the transition state:
Creating an environment with a charge distribution complementary to that of the transition state to lower its energy
By providing an alternative reaction pathway:
Temporarily reacting with the substrate, forming a covalent intermediate to provide a lower energy transition state
By destabilizing the substrate ground state:
Distorting bound substrate(s) into their transition state form to reduce the energy required to reach the transition state
By orienting the substrates into a productive arrangement to reduce the reaction entropy change (the contribution of this mechanism to catalysis is relatively small)
Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilize charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate.
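The effect of lowering the activation energy can be made quantitative with transition state theory. The sketch below uses the Eyring equation, k = (kB·T/h)·exp(−ΔG‡/RT); the barrier heights are illustrative numbers chosen for the example, not measured values for any particular enzyme.

```python
import math

R = 8.314            # gas constant, J/(mol K)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s
T = 298.0            # temperature, K

def eyring_rate(dg_activation):
    """Transition-state-theory rate constant for a barrier given in J/mol."""
    return (KB * T / H) * math.exp(-dg_activation / (R * T))

# Illustrative only: an uncatalyzed barrier of 100 kJ/mol versus an
# enzyme-stabilized barrier of 70 kJ/mol.
speedup = eyring_rate(70e3) / eyring_rate(100e3)
print(f"Lowering the barrier by 30 kJ/mol gives a ~{speedup:.0e}-fold rate increase")
# prints roughly 2e+05, i.e. exp(30000 / (R * T))
```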
Dynamics
Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory.
Substrate presentation
Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Or within the membrane, an enzyme can be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane.
Allosteric modulation
Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.
Cofactors
Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase).
An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions.
Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.
Coenzymes
Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include:
the hydride ion (H−), carried by NAD+ or NADP+
the phosphate group, carried by adenosine triphosphate
the acetyl group, carried by coenzyme A
formyl, methenyl or methyl groups, carried by folic acid and
the methyl group, carried by S-adenosylmethionine
Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH.
Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day.
Thermodynamics
As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants: it hydrates carbon dioxide (CO2 + H2O → H2CO3) in tissues, where the CO2 concentration is high, and dehydrates carbonic acid (H2CO3 → CO2 + H2O) in the lungs, where the CO2 concentration is low.
The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally the enzyme-product complex (EP) dissociates to release the products.
Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions.
Kinetics
Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today.
Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen; plotting rate against substrate concentration gives a saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme.
Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second.
The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10^8 to 10^9 M^-1 s^-1. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect. Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10^5 M^-1 s^-1 and 10 s^-1, respectively.
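For concreteness, the Michaelis–Menten rate law and the specificity constant can be evaluated directly. The kinetic parameters below are invented for illustration, not measured values for any real enzyme.

```python
def michaelis_menten(s, vmax, km):
    """Initial rate v for substrate concentration s (same units as Km)."""
    return vmax * s / (km + s)

# Invented example parameters: kcat = 100 /s, Km = 50 uM, 1 nM enzyme.
kcat, km, e_total = 100.0, 50e-6, 1e-9
vmax = kcat * e_total                        # M/s
for s in (5e-6, 50e-6, 500e-6):
    v = michaelis_menten(s, vmax, km)
    print(f"[S] = {s:.0e} M -> v = {v:.2e} M/s ({100 * v / vmax:.0f}% of Vmax)")
# At [S] = Km the rate is exactly 50% of Vmax.

print(f"kcat/Km = {kcat / km:.0e} M^-1 s^-1")  # 2e+06, well below the diffusion limit
```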
Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects.
Inhibition
Enzyme reaction rates can be decreased by various types of enzyme inhibitors.
Types of inhibition
Competitive
A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate; methotrexate is structurally very similar to dihydrofolate. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site.
Non-competitive
A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration.
Uncompetitive
An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare.
Mixed
A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation.
Irreversible
An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner.
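The reversible inhibition modes above correspond to standard modifications of the Michaelis–Menten rate law. The following is a sketch with invented numbers; the factor α = 1 + [I]/Ki is the usual textbook form.

```python
def inhibited_rate(s, vmax, km, i=0.0, ki=1.0, mode=None):
    """Michaelis-Menten rate under one classical reversible inhibition mode."""
    a = 1.0 + i / ki                      # alpha = 1 + [I]/Ki
    if mode == "competitive":             # apparent Km rises; Vmax unchanged
        return vmax * s / (km * a + s)
    if mode == "noncompetitive":          # Vmax falls; Km unchanged
        return (vmax / a) * s / (km + s)
    if mode == "uncompetitive":           # both Vmax and apparent Km fall
        return (vmax / a) * s / (km / a + s)
    return vmax * s / (km + s)            # no inhibitor

for mode in (None, "competitive", "noncompetitive", "uncompetitive"):
    v = inhibited_rate(s=100.0, vmax=1.0, km=10.0, i=5.0, ki=2.5, mode=mode)
    print(f"{str(mode):>15}: v = {v:.3f}")
```

Note how, at this high substrate concentration, competitive inhibition is largely overcome (v stays near Vmax), while the other two modes are not; this mirrors the qualitative descriptions above.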
Functions of inhibitors
In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism.
Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.
Factors affecting enzyme activity
As enzymes are proteins, their actions are sensitive to changes in many physicochemical factors such as pH, temperature, and substrate concentration.
Each enzyme also has a characteristic pH optimum; for example, pepsin, which works in the acidic stomach, is most active around pH 2, whereas trypsin, which works in the small intestine, is most active around pH 8.
Biological function
Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase.
An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber.
Metabolism
Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme.
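Conceptually, a pathway behaves like function composition: each enzyme's product is the next enzyme's substrate. The following is a deliberately toy sketch in which the "enzymes" are plain string edits, not chemistry.

```python
from functools import reduce

# Toy "enzymes": each takes the previous product as its substrate.
def kinase(s):      return s + "-P"                       # add a phosphate tag
def isomerase(s):   return s.replace("glucose", "fructose")
def phosphatase(s): return s.removesuffix("-P")           # remove the tag again

pathway = [kinase, isomerase, phosphatase]
product = reduce(lambda substrate, enzyme: enzyme(substrate), pathway, "glucose")
print(product)   # fructose
```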
Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would neither progress through the same steps nor be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions.
Control of activity
There are five main ways that enzyme activity is controlled in the cell.
Regulation
Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. Negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with effective allocations of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms.
Post-translational modification
Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the stomach where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme.
Quantity
Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression.
Subcellular distribution
Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments.
Organ specialization
In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production.
Involvement in disease
Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase.
One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired.
Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance.
Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light.
Evolution
Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities as their sequences evolve. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins, while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not, so this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in the substrate binding pocket. This is frequently seen in the main enzyme classes such as kinases.
Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below).
Industrial applications
Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature.
See also
Industrial enzymes
List of enzymes
Molecular machine
Enzyme databases
BRENDA
ExPASy
IntEnz
KEGG
MetaCyc
References
Further reading
External links
Biomolecules
Catalysis
Metabolism
Process chemicals | Enzyme | [
"Chemistry",
"Biology"
] | 7,692 | [
"Catalysis",
"Natural products",
"Biochemistry",
"Organic compounds",
"Cellular processes",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Chemical kinetics",
"Metabolism",
"Process chemicals"
] |
9,260 | https://en.wikipedia.org/wiki/Equivalence%20class | In mathematics, when the elements of some set have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set into equivalence classes. These equivalence classes are constructed so that elements a and b belong to the same equivalence class if, and only if, they are equivalent.
Formally, given a set S and an equivalence relation ~ on S, the equivalence class of an element a in S is denoted [a] or, equivalently, [a]~ to emphasize its equivalence relation ~. The definition of equivalence relations implies that the equivalence classes form a partition of S, meaning that every element of the set belongs to exactly one equivalence class.
The set of the equivalence classes is sometimes called the quotient set or the quotient space of S by ~, and is denoted by S/~.
When the set has some structure (such as a group operation or a topology) and the equivalence relation is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories.
Definition and notation
An equivalence relation ~ on a set X is a binary relation on X satisfying the three properties:
a ~ a for all a ∈ X (reflexivity),
a ~ b implies b ~ a for all a, b ∈ X (symmetry),
if a ~ b and b ~ c then a ~ c, for all a, b, c ∈ X (transitivity).
The equivalence class of an element a is defined as [a] = {x ∈ X : a ~ x}.
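The definition translates directly into code. Below is a minimal sketch in Python (an illustration added by this edit, not part of the original article; the names equivalence_classes and same_parity are invented for the example). It partitions a finite set into classes by comparing each element against one representative per class, which transitivity makes sufficient:

    def equivalence_classes(X, related):
        # Partition the finite iterable X using the predicate `related`,
        # assumed to be reflexive, symmetric and transitive.
        classes = []
        for x in X:
            for cls in classes:
                if related(x, cls[0]):   # one representative suffices, by transitivity
                    cls.append(x)
                    break
            else:
                classes.append([x])      # x starts a new class
        return classes

    same_parity = lambda a, b: (a - b) % 2 == 0
    print(equivalence_classes(range(6), same_parity))  # [[0, 2, 4], [1, 3, 5]]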
The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets.
The set of all equivalence classes in X with respect to an equivalence relation R is denoted X/R, and is called X modulo R (or the quotient set of X by R). The surjective map x ↦ [x] from X onto X/R, which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection.
Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from X/R to X. Since its composition with the canonical surjection is the identity of X/R, such an injection is called a section, when using the terminology of category theory.
Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says congruent—if m divides a − b; this is denoted a ≡ b (mod m). Each class contains a unique non-negative integer smaller than m, and these integers are the canonical representatives.
The use of representatives for representing classes makes it possible to avoid considering classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m, and produces the remainder of the Euclidean division of a by m.
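As a small illustration (an assumption of this edit, not part of the original article), Python's % operator with a positive modulus returns exactly this canonical representative:

    m = 5
    print(12 % m, -3 % m)        # 2 2 -- both integers lie in the class of 2 modulo 5
    print((12 - (-3)) % m == 0)  # True: 5 divides 12 - (-3) = 15, so 12 ≡ -3 (mod 5)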
Properties
Every element x of X is a member of the equivalence class [x]. Every two equivalence classes [x] and [y] are either equal or disjoint. Therefore, the set of all equivalence classes of X forms a partition of X: every element of X belongs to one and only one equivalence class. Conversely, every partition of X comes from an equivalence relation in this way, according to which x ~ y if and only if x and y belong to the same set of the partition.
It follows from the properties in the previous section that if ~ is an equivalence relation on a set X, and x and y are two elements of X, the following statements are equivalent:
x ~ y,
[x] = [y],
[x] ∩ [y] ≠ ∅.
Examples
Let X be the set of all rectangles in a plane, and ~ the equivalence relation "has the same area as"; then for each positive real number A there will be an equivalence class of all the rectangles that have area A.
Consider the modulo 2 equivalence relation on the set of integers Z, such that x ~ y if and only if their difference x − y is an even number. This relation gives rise to exactly two equivalence classes: one class consists of all even numbers, and the other class consists of all odd numbers. Using square brackets around one member of the class to denote an equivalence class under this relation, [7], [9], and [1] all represent the same element of Z/~.
Let X be the set of ordered pairs of integers (a, b) with b non-zero, and define an equivalence relation ~ on X such that (a, b) ~ (c, d) if and only if ad = bc; then the equivalence class of the pair (a, b) can be identified with the rational number a/b, and this equivalence relation and its equivalence classes can be used to give a formal definition of the set of rational numbers (a normalization sketch follows this list). The same construction can be generalized to the field of fractions of any integral domain.
If X consists of all the lines in, say, the Euclidean plane, and L ~ M means that L and M are parallel lines, then the set of lines that are parallel to each other form an equivalence class, as long as a line is considered parallel to itself. In this situation, each equivalence class determines a point at infinity.
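For the rational-number construction above, a canonical representative of each class can be obtained by dividing out the greatest common divisor. A brief Python sketch (illustrative; the function name canonical is an assumption of this edit):

    from math import gcd

    def canonical(a, b):
        # Reduce the pair (a, b), b != 0, to a canonical representative of its
        # class under (a, b) ~ (c, d) iff a*d == b*c.
        g = gcd(a, b)
        if b < 0:        # normalize the sign into the numerator
            g = -g
        return (a // g, b // g)

    print(canonical(2, 4), canonical(-3, -6), canonical(1, 2))
    # (1, 2) (1, 2) (1, 2) -- one representative for the whole class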
Graphical representation
An undirected graph may be associated to any symmetric relation on a set X, where the vertices are the elements of X, and two vertices s and t are joined if and only if s ~ t. Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques.
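Because the classes of an equivalence relation are exactly the connected components (cliques) of its cluster graph, they can be recovered by any graph traversal. A short breadth-first sketch in Python (illustrative; assumes the relation is supplied as an edge list):

    from collections import deque

    def components(vertices, edges):
        adj = {v: set() for v in vertices}
        for u, v in edges:          # build the undirected adjacency lists
            adj[u].add(v)
            adj[v].add(u)
        seen, comps = set(), []
        for s in vertices:
            if s in seen:
                continue
            comp, queue = [], deque([s])
            seen.add(s)
            while queue:            # breadth-first search from s
                u = queue.popleft()
                comp.append(u)
                for w in adj[u] - seen:
                    seen.add(w)
                    queue.append(w)
            comps.append(comp)
        return comps

    print(components([1, 2, 3, 4], [(1, 2), (3, 4)]))  # [[1, 2], [3, 4]]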
Invariants
If ~ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ~ y, P(x) is true if P(y) is true, then the property P is said to be an invariant of ~, or well-defined under the relation ~.
A frequent particular case occurs when f is a function from X to another set Y; if f(x1) = f(x2) whenever x1 ~ x2, then f is said to be class invariant under ~, or simply invariant under ~. This occurs, for example, in the character theory of finite groups. Some authors use "compatible with ~" or just "respects ~" instead of "invariant under ~".
Any function f : X → Y itself defines an equivalence relation on X, according to which x1 ~ x2 if and only if f(x1) = f(x2). The equivalence class of x is the set of all elements in X which get mapped to f(x); that is, the class [x] is the inverse image of f(x). This equivalence relation is known as the kernel of f.
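Grouping elements by their image under f realizes the kernel directly, each group being the inverse image of one value. A brief Python sketch (illustrative; the name kernel_classes is invented for the example):

    from collections import defaultdict

    def kernel_classes(X, f):
        classes = defaultdict(list)  # each key f(x) collects its inverse image
        for x in X:
            classes[f(x)].append(x)
        return list(classes.values())

    print(kernel_classes(range(7), lambda x: x % 3))
    # [[0, 3, 6], [1, 4], [2, 5]]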
More generally, a function may map equivalent arguments (under an equivalence relation ~A on X) to equivalent values (under an equivalence relation ~B on Y). Such a function is a morphism of sets equipped with an equivalence relation.
Quotient space in topology
In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes.
In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action.
The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation.
A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously.
Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set X, either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on X, or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above.
See also
Equivalence partitioning, a method for devising test sets in software testing based on dividing the possible program inputs into equivalence classes according to the behavior of the program on those inputs
Homogeneous space, the quotient space of Lie groups
Notes
References
Further reading
External links
Algebra
Binary relations
Equivalence (mathematics)
Set theory | Equivalence class | [
"Mathematics"
] | 1,735 | [
"Set theory",
"Mathematical logic",
"Binary relations",
"Mathematical relations",
"Algebra"
] |
9,263 | https://en.wikipedia.org/wiki/Ether | In organic chemistry, ethers are a class of compounds that contain an ether group—a single oxygen atom bonded to two separate carbon atoms, each part of an organyl group (e.g., alkyl or aryl). They have the general formula R–O–R′, where R and R′ represent the organyl groups. Ethers can further be classified into two varieties: if the organyl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3–CH2–O–CH2–CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin.
Structure and bonding
Ethers feature bent linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp3.
Oxygen is more electronegative than carbon; thus, the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than the alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however.
Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, dipropyl ether etc. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane.
Vinyl- and acetylenic ethers
Vinyl- and acetylenic ethers are far less common than alkyl or aryl ethers. Vinyl ethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare. Di-tert-butoxyacetylene is the most common example of this rare class of compounds.
Nomenclature
In the IUPAC Nomenclature system, ethers are named using the general formula "alkoxyalkane", for example CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more-complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a "methoxy-" group. The simpler alkyl radical is written in front, so in CH3–O–CH2CH3 the methoxy group (CH3O–) is cited before the ethane backbone (–CH2CH3), giving methoxyethane.
Trivial name
IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with none or few other functional groups) are a composite of the two substituents followed by "ether". For example, ethyl methyl ether (CH3OC2H5) and diphenyl ether (C6H5OC6H5). As for other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers R–CH(–OR)–O–R) are another class of ethers with characteristic properties.
Polyethers
Polyethers are generally polymers containing ether linkages in their main chain. The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term "oxide" or other terms are used for high molar mass polymer when end-groups no longer affect polymer properties.
Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.
The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO).
Related compounds
Many classes of compounds with C–O–C linkages are not considered ethers: Esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), carboxylic acid anhydrides (RC(=O)–O–C(=O)R′).
There are compounds which, instead of C in the linkage, contain heavier group 14 chemical elements (e.g., Si, Ge, Sn, Pb). Such compounds are considered ethers as well. Examples of such ethers are silyl enol ethers (containing the C=C–O–Si linkage), disiloxane (the other name of this compound is disilyl ether, containing the Si–O–Si linkage) and stannoxanes (containing the Sn–O–Sn linkage).
Physical properties
Ethers have boiling points similar to those of the analogous alkanes. Simple ethers are generally colorless.
Reactions
The C-O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes.
Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below.
Cleavage
Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides:
ROCH3 + HBr → CH3Br + ROH
These reactions proceed via onium intermediates, i.e. [RO(H)CH3]+Br−.
Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base.
Despite these difficulties the chemical paper pulping processes are based on cleavage of ether bonds in the lignin.
Peroxide formation
When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxide in old samples of ethers may be detected by shaking them with freshly prepared solution of a ferrous sulfate followed by addition of KSCN. Appearance of blood red color indicates presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes.
Lewis bases
Ethers serve as Lewis bases. For instance, diethyl ether forms a complex with boron trifluoride, i.e. boron trifluoride diethyl etherate (BF3·OEt2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many metal halides.
Alpha-halogenation
This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers.
Synthesis
Dehydration of alcohols
The dehydration of alcohols affords ethers:
2 R–OH → R–O–R + H2O at high temperature
This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C). The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method. Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol:
R–CH2–CH2(OH) → R–CH=CH2 + H2O
The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers.
Electrophilic addition of alcohols to alkenes
Alcohols add to electrophilically activated alkenes. The method is atom-economical:
R2C=CR2 + R–OH → R2CH–C(–O–R)–R2
Acid catalysis is required for this reaction. Commercially important ethers prepared in this way are derived from isobutene or isoamylene, which protonate to give relatively stable carbocations. Using ethanol and methanol with these two alkenes, four fuel-grade ethers are produced: methyl tert-butyl ether (MTBE), methyl tert-amyl ether (TAME), ethyl tert-butyl ether (ETBE), and ethyl tert-amyl ether (TAEE).
Solid acid catalysts are typically used to promote this reaction.
Epoxides
Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes:
By the oxidation of alkenes with a peroxyacid such as m-CPBA.
By base-mediated intramolecular nucleophilic substitution of a halohydrin.
Many ethers, including ethoxylates and crown ethers, are produced from epoxides.
Williamson and Ullmann ether syntheses
Nucleophilic displacement of alkyl halides by alkoxides
R–ONa + R′–X → R–O–R′ + NaX
This reaction, the Williamson ether synthesis, involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Although popular in textbooks, the method is usually impractical on scale because it cogenerates significant waste.
Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene, see Ullmann condensation below). Likewise, this method only gives the best yields for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups.
In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. The alkyl halide R–X does not react directly with the neutral alcohol; however, phenols can be used in place of the alcohol. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion will then substitute the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism.
C6H5OH + OH− → C6H5–O− + H2O
C6H5–O− + R–X → C6H5OR
The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper.
Important ethers
See also
Ester
Ether lipid
Ether addiction
Ether (song)
History of general anesthesia
Inhalant
Chemical paper pulping processes: Kraft process (and Soda pulping), Organosolv pulping process and the Sulfite process
References
Functional groups
Impression material | Ether | [
"Chemistry"
] | 2,697 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
9,311 | https://en.wikipedia.org/wiki/Endocrinology | Endocrinology (from endocrine + -ology) is a branch of biology and medicine dealing with the endocrine system, its diseases, and its specific secretions known as hormones. It is also concerned with the integration of developmental events such as proliferation, growth, and differentiation, and with the psychological or behavioral activities of metabolism, growth and development, tissue function, sleep, digestion, respiration, excretion, mood, stress, lactation, movement, reproduction, and sensory perception caused by hormones. Specializations include behavioral endocrinology and comparative endocrinology.
The endocrine system consists of several glands, all in different parts of the body, that secrete hormones directly into the blood rather than into a duct system. Therefore, endocrine glands are regarded as ductless glands. Hormones have many different functions and modes of action; one hormone may have several effects on different target organs, and, conversely, one target organ may be affected by more than one hormone.
The endocrine system
Endocrinology is the study of the endocrine system in the human body. This is a system of glands which secrete hormones. Hormones are chemicals that affect the actions of different organ systems in the body. Examples include thyroid hormone, growth hormone, and insulin. The endocrine system involves a number of feedback mechanisms, so that often one hormone (such as thyroid stimulating hormone) will control the action or release of another secondary hormone (such as thyroid hormone). If there is too much of the secondary hormone, it may provide negative feedback to the primary hormone, maintaining homeostasis.
In the original 1902 definition by Bayliss and Starling (see below), they specified that, to be classified as a hormone, a chemical must be produced by an organ, be released (in small amounts) into the blood, and be transported by the blood to a distant organ to exert its specific function. This definition holds for most "classical" hormones, but there are also paracrine mechanisms (chemical communication between cells within a tissue or organ), autocrine signals (a chemical that acts on the same cell), and intracrine signals (a chemical that acts within the same cell). A neuroendocrine signal is a "classical" hormone that is released into the blood by a neurosecretory neuron (see article on neuroendocrinology).
Hormones
Griffin and Ojeda identify three different classes of hormones based on their chemical composition:
Amines
Amines, such as norepinephrine, epinephrine, and dopamine (catecholamines), are derived from single amino acids, in this case tyrosine. Thyroid hormones such as 3,5,3'-triiodothyronine (T3) and 3,5,3',5'-tetraiodothyronine (thyroxine, T4) make up a subset of this class because they derive from the combination of two iodinated tyrosine amino acid residues.
Peptide and protein
Peptide hormones and protein hormones consist of three (in the case of thyrotropin-releasing hormone) to more than 200 (in the case of follicle-stimulating hormone) amino acid residues and can have a molecular mass as large as 31,000 grams per mole. All hormones secreted by the pituitary gland are peptide hormones, as are leptin from adipocytes, ghrelin from the stomach, and insulin from the pancreas.
Steroid
Steroid hormones are converted from their parent compound, cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. Some forms of vitamin D, such as calcitriol, are steroid-like and bind to homologous receptors, but lack the characteristic fused ring structure of true steroids.
As a profession
Although every organ system secretes and responds to hormones (including the brain, lungs, heart, intestine, skin, and the kidneys), the clinical specialty of endocrinology focuses primarily on the endocrine organs, meaning the organs whose primary function is hormone secretion. These organs include the pituitary, thyroid, adrenals, ovaries, testes, and pancreas.
An endocrinologist is a physician who specializes in treating disorders of the endocrine system, such as diabetes, hyperthyroidism, and many others (see list of diseases).
Work
The medical specialty of endocrinology involves the diagnostic evaluation of a wide variety of symptoms and variations and the long-term management of disorders of deficiency or excess of one or more hormones.
The diagnosis and treatment of endocrine diseases are guided by laboratory tests to a greater extent than for most specialties. Many diseases are investigated through excitation/stimulation or inhibition/suppression testing. This might involve injection with a stimulating agent to test the function of an endocrine organ. Blood is then sampled to assess the changes of the relevant hormones or metabolites. An endocrinologist needs extensive knowledge of clinical chemistry and biochemistry to understand the uses and limitations of the investigations.
A second important aspect of the practice of endocrinology is distinguishing human variation from disease. Atypical patterns of physical development and abnormal test results must be assessed as indicative of disease or not. Diagnostic imaging of endocrine organs may reveal incidental findings called incidentalomas, which may or may not represent disease.
Endocrinology involves caring for the person as well as the disease. Most endocrine disorders are chronic diseases that need lifelong care. Some of the most common endocrine diseases include diabetes mellitus, hypothyroidism and the metabolic syndrome. Care of diabetes, obesity and other chronic diseases necessitates understanding the patient at the personal and social level as well as the molecular, and the physician–patient relationship can be an important therapeutic process.
Apart from treating patients, many endocrinologists are involved in clinical science and medical research, teaching, and hospital management.
Training
Endocrinologists are specialists of internal medicine or pediatrics. Reproductive endocrinologists deal primarily with problems of fertility and menstrual function—often training first in obstetrics. Most qualify as an internist, pediatrician, or gynecologist for a few years before specializing, depending on the local training system. In the U.S. and Canada, training for board certification in internal medicine, pediatrics, or gynecology after medical school is called residency. Further formal training to subspecialize in adult, pediatric, or reproductive endocrinology is called a fellowship. Typical training for a North American endocrinologist involves 4 years of college, 4 years of medical school, 3 years of residency, and 2 years of fellowship. In the US, adult endocrinologists are board certified by the American Board of Internal Medicine (ABIM) or the American Osteopathic Board of Internal Medicine (AOBIM) in Endocrinology, Diabetes and Metabolism.
Diseases treated by endocrinologists
Diabetes mellitus: This is a chronic condition that affects how your body regulates blood sugar. There are two main types: type 1 diabetes, which is an autoimmune disease that occurs when the body attacks the cells that produce insulin, and type 2 diabetes, which is a condition in which the body either doesn't produce enough insulin or doesn't use it effectively.
Thyroid disorders: These are conditions that affect the thyroid gland, a butterfly-shaped gland located in the front of your neck. The thyroid gland produces hormones that regulate your metabolism, heart rate, and body temperature. Common thyroid disorders include hyperthyroidism (overactive thyroid) and hypothyroidism (underactive thyroid).
Adrenal disorders: The adrenal glands are located on top of your kidneys. They produce hormones that help regulate blood pressure, blood sugar, and the body's response to stress. Common adrenal disorders include Cushing syndrome (excess cortisol production) and Addison's disease (adrenal insufficiency).
Pituitary disorders: The pituitary gland is a pea-sized gland located at the base of the brain. It produces hormones that control many other hormone-producing glands in the body. Common pituitary disorders include acromegaly (excess growth hormone production) and Cushing's disease (excess ACTH production).
Metabolic disorders: These are conditions that affect how your body processes food into energy. Common metabolic disorders include obesity, high cholesterol, and gout.
Calcium and bone disorders: Endocrinologists also treat conditions that affect calcium levels in the blood, such as hyperparathyroidism (too much parathyroid hormone) and osteoporosis (weak bones).
Sexual and reproductive disorders: Endocrinologists can also help diagnose and treat hormonal problems that affect sexual development and function, such as polycystic ovary syndrome (PCOS) and erectile dysfunction.
Endocrine cancers: These are cancers that develop in the endocrine glands. Endocrinologists can help diagnose and treat these cancers.
Diseases and medicine
Diseases
See main article at Endocrine diseases
Endocrinology also involves the study of the diseases of the endocrine system. These diseases may relate to too little or too much secretion of a hormone, too little or too much action of a hormone, or problems with receiving the hormone.
Societies and Organizations
Because endocrinology encompasses so many conditions and diseases, there are many organizations that provide education to patients and the public. The Hormone Foundation is the public education affiliate of The Endocrine Society and provides information on all endocrine-related conditions. Other educational organizations that focus on one or more endocrine-related conditions include the American Diabetes Association, Human Growth Foundation, American Menopause Foundation, Inc., and American Thyroid Association.
In North America the principal professional organizations of endocrinologists include The Endocrine Society, the American Association of Clinical Endocrinologists, the American Diabetes Association, the Lawson Wilkins Pediatric Endocrine Society, and the American Thyroid Association.
In Europe, the European Society of Endocrinology (ESE) and the European Society for Paediatric Endocrinology (ESPE) are the main organisations representing professionals in the fields of adult and paediatric endocrinology, respectively.
In the United Kingdom, the Society for Endocrinology and the British Society for Paediatric Endocrinology and Diabetes are the main professional organisations.
The European Society for Paediatric Endocrinology is the largest international professional association dedicated solely to paediatric endocrinology. There are numerous similar associations around the world.
History
The earliest study of endocrinology began in China. The Chinese were isolating sex and pituitary hormones from human urine and using them for medicinal purposes by 200 BC. They used many complex methods, such as sublimation of steroid hormones. Another method specified by Chinese texts—the earliest dating to 1110—specified the use of saponin (from the beans of Gleditsia sinensis) to extract hormones, but gypsum (containing calcium sulfate) was also known to have been used.
Although most of the relevant tissues and endocrine glands had been identified by early anatomists, a more humoral approach to understanding biological function and disease was favoured by the ancient Greek and Roman thinkers such as Aristotle, Hippocrates, Lucretius, Celsus, and Galen, according to Freeman et al., and these theories held sway until the advent of germ theory, physiology, and organ basis of pathology in the 19th century.
In 1849, Arnold Berthold noted that castrated cockerels did not develop combs and wattles or exhibit overtly male behaviour. He found that replacement of testes back into the abdominal cavity of the same bird or another castrated bird resulted in normal behavioural and morphological development, and he concluded (erroneously) that the testes secreted a substance that "conditioned" the blood that, in turn, acted on the body of the cockerel. In fact, one of two other things could have been true: that the testes modified or activated a constituent of the blood or that the testes removed an inhibitory factor from the blood. It was not proven that the testes released a substance that engenders male characteristics until it was shown that the extract of testes could replace their function in castrated animals. Pure, crystalline testosterone was isolated in 1935.
Graves' disease was named after Irish doctor Robert James Graves, who described a case of goiter with exophthalmos in 1835. The German Karl Adolph von Basedow also independently reported the same constellation of symptoms in 1840, while earlier reports of the disease were also published by the Italians Giuseppe Flajani and Antonio Giuseppe Testa, in 1802 and 1810 respectively, and by the English physician Caleb Hillier Parry (a friend of Edward Jenner) in the late 18th century. Thomas Addison was first to describe Addison's disease in 1849.
In 1902 William Bayliss and Ernest Starling performed an experiment in which they observed that acid instilled into the duodenum caused the pancreas to begin secretion, even after they had removed all nervous connections between the two. The same response could be produced by injecting extract of jejunum mucosa into the jugular vein, showing that some factor in the mucosa was responsible. They named this substance "secretin" and coined the term hormone for chemicals that act in this way.
Joseph von Mering and Oskar Minkowski made the observation in 1889 that removing the pancreas surgically led to an increase in blood sugar, followed by a coma and eventual death—symptoms of diabetes mellitus. In 1922, Banting and Best realized that homogenizing the pancreas and injecting the derived extract reversed this condition.
Neurohormones were first identified by Otto Loewi in 1921. He incubated a frog's heart (innervated with its vagus nerve attached) in a saline bath, and left it in the solution for some time. The solution was then used to bathe a non-innervated second heart. If the vagus nerve on the first heart was stimulated, negative inotropic (beat amplitude) and chronotropic (beat rate) activity were seen in both hearts. This did not occur in either heart if the vagus nerve was not stimulated. The vagus nerve was adding something to the saline solution. The effect could be blocked using atropine, a known inhibitor of vagal stimulation of the heart. Clearly, something was being secreted by the vagus nerve and affecting the heart. The "vagusstuff" (as Loewi called it) was later identified as acetylcholine, and the corresponding stimulatory substance released by sympathetic nerves as norepinephrine. Loewi won the Nobel Prize for his discovery.
Recent work in endocrinology focuses on the molecular mechanisms responsible for triggering the effects of hormones. The first example of such work being done was in 1962 by Earl Sutherland. Sutherland investigated whether hormones enter cells to evoke action, or stayed outside of cells. He studied norepinephrine, which acts on the liver to convert glycogen into glucose via the activation of the phosphorylase enzyme. He homogenized the liver into a membrane fraction and soluble fraction (phosphorylase is soluble), added norepinephrine to the membrane fraction, extracted its soluble products, and added them to the first soluble fraction. Phosphorylase activated, indicating that norepinephrine's target receptor was on the cell membrane, not located intracellularly. He later identified the compound as cyclic AMP (cAMP) and with his discovery created the concept of second-messenger-mediated pathways. He, like Loewi, won the Nobel Prize for his groundbreaking work in endocrinology.
See also
Comparative endocrinology
Endocrine disease
Hormone
Hormone replacement therapy
Neuroendocrinology
Pediatric endocrinology
Reproductive endocrinology and infertility
Wildlife endocrinology
List of instruments used in endocrinology
References
Endocrine system
Hormones | Endocrinology | [
"Biology"
] | 3,413 | [
"Organ systems",
"Endocrine system"
] |
9,531 | https://en.wikipedia.org/wiki/Electrical%20engineering | Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems that use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.
Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science.
Electrical engineers typically hold a degree in electrical engineering, electronic or electrical and electronic engineering. Practicing engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE).
Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software.
History
Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762 Swedish professor Johan Wilcke invented a device later named electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery.
19th century
In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in A Treatise on Electricity and Magnetism.
In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction.
In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative though it was greatly influenced by and based upon two discoveries made in Europe in 1800—Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlyle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers) where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy.
Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation.
During these years, the study of electricity was largely considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at the Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts.
In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at University of Missouri established the electrical engineering department in 1886. Afterwards, universities and institutes of technology gradually started to offer electrical engineering programs to their students all over the world.
During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts—direct current (DC)—to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse backed AC system and a Thomas Edison backed DC power system, with AC being adopted as the overall standard.
Early 20th century
During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of about 3,400 km (2,100 miles).
Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.
In 1920, Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.
In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives.
In 1948, Claude Shannon published "A Mathematical Theory of Communication" which mathematically describes the passage of information with uncertainty (electrical noise).
Solid-state electronics
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices.
The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959.
The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world.
The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking.
The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution.
Subfields
One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today, electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes, certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right.
Power and energy
Power and energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems.
Telecommunications
Telecommunications engineering focuses on the transmission of information across a communication channel such as a coax cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.
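As a concrete illustration of analog modulation (a sketch added by this edit, not drawn from the article), amplitude modulation forms s(t) = (1 + m·x(t))·cos(2πfc·t). The NumPy lines below build such a signal; the sample, carrier, and message frequencies are assumed example values:

    import numpy as np

    fs, fc, fm = 48_000, 5_000, 200      # sample, carrier and message frequencies in Hz (assumed)
    t = np.arange(0, 0.01, 1 / fs)       # 10 ms of time samples
    x = np.sin(2 * np.pi * fm * t)       # the information-bearing baseband signal
    m = 0.5                              # modulation index, kept below 1 to avoid overmodulation
    s = (1 + m * x) * np.cos(2 * np.pi * fc * t)   # the amplitude-modulated carrier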
Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static.
Control engineering
Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation.
Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
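A toy feedback loop makes the cruise-control example concrete. The Python sketch below is illustrative only; the first-order vehicle model, gain, and drag coefficient are assumptions of this edit:

    def simulate(setpoint=30.0, kp=0.5, drag=0.1, steps=50, dt=0.1):
        speed = 0.0
        for _ in range(steps):
            error = setpoint - speed              # measured feedback
            power = kp * error                    # proportional controller output
            speed += (power - drag * speed) * dt  # crude vehicle dynamics
        return speed

    print(round(simulate(), 2))  # about 23.87 m/s after 5 s of simulated time

The loop settles near 23.9 rather than the 30.0 setpoint: pure proportional control leaves a steady-state error, which is one reason practical controllers add integral action.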
Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.
Electronics
Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit.
Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.
Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by hand. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.
Microelectronics and nanoelectronics
Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level.
Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.
Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.
Signal processing
Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.
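As a small digital example, the Python sketch below applies a five-tap moving-average FIR filter, one of the simplest digital signal processing operations, to a noisy sampled signal. The signal and noise values are illustrative assumptions.

```python
import numpy as np

# A moving-average FIR filter: each output sample is the mean of the
# five surrounding input samples, attenuating high-frequency noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
noisy = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)

taps = np.ones(5) / 5                             # equal-weight coefficients
smoothed = np.convolve(noisy, taps, mode="same")  # discrete convolution
```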
Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems.
DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing.
Instrumentation
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier–Seebeck effect to measure the temperature difference between two points.
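For small temperature differences, a thermocouple's output voltage is approximately proportional to the difference, V ≈ S·ΔT, where S is the Seebeck coefficient of the junction pair. The Python sketch below inverts this relation; the coefficient used is a typical order of magnitude for a type-K thermocouple, stated here as an assumption.

```python
# Invert V = S * dT to estimate a temperature difference from a reading.
S = 41e-6                   # Seebeck coefficient in V/K (assumed, ~type K)
measured_voltage = 1.64e-3  # measured output in volts (illustrative)

delta_t = measured_voltage / S
print(f"Approximate temperature difference: {delta_t:.1f} K")  # ~40 K
```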
Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control.
Computers
Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering.
Photonics and optics
Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials).
Related disciplines
Mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles.
Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems.
The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.
In aerospace engineering and robotics, recent examples include electric propulsion and ion propulsion systems.
Education
Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study.
At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.
Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree.
Professional practice
In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law.
Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer.
In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force.
Tools and work
From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunications systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery.
Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.
Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunications systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.
A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology engineers have their own test sets, often specific to a particular data format, and the same is true of television broadcasting.
For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.
The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine laboratory environment of a fabrication plant, on board a naval ship, in the offices of a consulting firm, or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers.
Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.
See also
Barnacle (slang)
Comparison of EDA software
Electrical Technologist
Electronic design automation
Glossary of electrical and electronics engineering
Index of electrical engineering articles
Information engineering
International Electrotechnical Commission (IEC)
List of electrical engineers
List of engineering branches
List of mechanical, electrical and electronic equipment manufacturing companies by revenue
List of Russian electrical engineers
Occupations in electrical/electronics engineering
Outline of electrical engineering
Timeline of electrical and electronic engineering
Notes
References
Bibliography
Martini, L., "BSCCO-2233 multilayered conductors", in Superconducting Materials for High Energy Colliders, pp. 173–181, World Scientific, 2001.
Schmidt, Rüdiger, "The LHC accelerator and its challenges", in Kramer, M.; Soler, F.J.P. (eds), Large Hadron Collider Phenomenology, pp. 217–250, CRC Press, 2004.
Further reading
External links
International Electrotechnical Commission (IEC)
MIT OpenCourseWare in-depth look at Electrical Engineering – online courses with video lectures.
IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences.
Electronic engineering
Computer engineering
Electrical and computer engineering
Engineering disciplines | Electrical engineering | [
"Technology",
"Engineering"
] | 6,637 | [
"Computer engineering",
"Electronic engineering",
"Electrical and computer engineering",
"nan",
"Electrical engineering"
] |
9,532 | https://en.wikipedia.org/wiki/Electromagnetism | In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
History
Ancient world
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
19th century
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel (see the expression after this list).
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
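For reference, the inverse-square dependence in the first point is expressed by Coulomb's law, here in its standard SI form, where $q_1$ and $q_2$ are the charges, $r$ their separation, and $\varepsilon_0$ the vacuum permittivity:

$$F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}$$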
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor if current flowed across the needle or not. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
A fundamental force
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of electrons' movement is a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
Classical electrodynamics
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments were conducted on 10 May 1752 by Thomas-François Dalibard of France, who used an iron rod instead of a kite and successfully extracted electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
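In their standard differential SI form, Maxwell's equations and the Lorentz force law read:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

$$\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})$$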
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
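The constant speed follows directly from Maxwell's equations, which give

$$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.00 \times 10^8 \ \text{m/s},$$

with no reference to the motion of the source or observer.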
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
Extension to nonlinear phenomena
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
Quantities and units
Here is a list of common units related to electromagnetism:
ampere (electric current, SI unit)
coulomb (electric charge)
farad (capacitance)
henry (inductance)
ohm (resistance)
siemens (conductance)
tesla (magnetic flux density)
volt (electric potential)
watt (power)
weber (magnetic flux)
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
Applications
The study of electromagnetism informs electric circuits, magnetic circuits, and semiconductor devices' construction.
See also
Abraham–Lorentz force
Aeromagnetic surveys
Computational electromagnetics
Double-slit experiment
Electrodynamic droplet deformation
Electromagnet
Electromagnetic induction
Electromagnetic wave equation
Electromagnetic scattering
Electromechanics
Geophysics
Introduction to electromagnetism
Magnetostatics
Magnetoquasistatic field
Optics
Relativistic electromagnetism
Wheeler–Feynman absorber theory
References
Further reading
Web sources
Textbooks
General coverage
External links
Magnetic Field Strength Converter
Electromagnetic Force – from Eric Weisstein's World of Physics
Fundamental interactions | Electromagnetism | [
"Physics",
"Mathematics"
] | 3,282 | [
"Physical phenomena",
"Force",
"Electromagnetism",
"Physical quantities",
"Fundamental interactions",
"Particle physics",
"Electrodynamics",
"Dynamical systems"
] |
9,540 | https://en.wikipedia.org/wiki/Electricity%20generation | Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage, using for example, the pumped-storage method.
Consumable electricity is not freely available in nature, so it must be "produced", transforming other forms of energy to electricity. Production is carried out in power stations, also called "power plants". Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles generated by the fusion reaction (see magnetohydrodynamics).
Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power.
History
The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss.
Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph.
Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after electric lights would be used in public buildings, in businesses, and to power public transport, such as trams and trains.
The first power plants used water power or coal. Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources.
In the 1880s the popularity of electricity grew massively with the introduction of the incandescent light bulb. Although there were 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison's and Swan's inventions became by far the most successful and popular of all. During the early years of the 19th century, rapid progress was made in electrical science, and by the later 19th century advances in electrical technology and engineering had made electricity part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. Seeing the potential for profit in this growing demand, many entrepreneurs began investing in electrical systems, eventually creating the first public electric utilities. This process in history is often described as electrification.
The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved so did the productivity and efficiency of its generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation but also the economics of generation as well. This conversion of heat energy into mechanical work was similar to that of steam engines, however at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation as they would become vital to the entire power system that we now use today.
Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, the coordination of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and North America in the 1920s in large cities and urban areas. It was not until the 1930s that rural areas saw the large-scale establishment of electrification.
Methods of generation
Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics.
Generators
Electric generators transform kinetic energy into electricity. This is the most used form for generating electricity based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material, e.g. copper wire. Almost all commercial electrical generation uses electromagnetic induction, in which mechanical energy forces a generator to rotate.
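A minimal numeric sketch of Faraday's law for a rotating loop follows; the field strength, loop area, and rotation rate are illustrative assumptions. The flux through a flat loop rotating in a uniform field is Φ(t) = B·A·cos(ωt), so the induced EMF is −dΦ/dt = B·A·ω·sin(ωt).

```python
import math

# EMF of a single flat loop rotating in a uniform magnetic field.
B, A = 0.5, 0.02      # field in tesla, loop area in m^2 (assumed)
w = 2 * math.pi * 50  # angular speed in rad/s (50 Hz rotation, assumed)

def emf(t):
    return B * A * w * math.sin(w * t)  # -d(phi)/dt for phi = B*A*cos(w*t)

print(f"Peak EMF: {B * A * w:.2f} V")   # about 3.14 V for these values
```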
Electrochemistry
Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems. Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge.
Photovoltaic effect
The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems.
Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year led by increases in Germany, Japan, United States, China, and India.
Economics
The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in widespread residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand.
All power grids have varying loads on them. The daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal.
Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high.
Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. It may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle.
Generating equipment
Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover such as an engine or the turbines described above, drives a rotating magnetic field past stationary coils of wire thereby turning mechanical energy into electricity. The only commercial scale forms of electricity production that do not employ a generator are photovoltaic solar and fuel cells.
Turbines
Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines.
The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine, invented by Sir Charles Parsons in 1884, currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include:
Steam
Water is boiled by coal burned in a thermal power plant. About 41% of all electricity is generated this way.
Nuclear fission heat created in a nuclear reactor creates steam. Less than 15% of electricity is generated this way.
Renewable energy. The steam is generated by biomass, solar thermal energy, or geothermal power.
Natural gas: turbines are driven directly by gases produced by combustion. Combined cycle plants are driven by both steam and natural gas: they generate power by burning natural gas in a gas turbine and use the residual heat to generate steam (see the efficiency sketch after this list). At least 20% of the world's electricity is generated by natural gas.
Water: energy is captured by a water turbine from the movement of water – from falling water, the rise and fall of tides, or ocean thermal currents (see ocean thermal energy conversion). Currently, hydroelectric plants provide approximately 16% of the world's electricity.
Wind: the windmill was a very early wind turbine. In 2018 around 5% of the world's electricity was produced from wind.
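A simple way to see why combined-cycle plants are efficient: if the gas turbine converts a fraction $\eta_g$ of the fuel's heat and the steam cycle recovers a fraction $\eta_s$ of the remainder, the combined efficiency is $\eta_{cc} = \eta_g + (1 - \eta_g)\,\eta_s$. With illustrative values $\eta_g = 0.38$ and $\eta_s = 0.35$ (assumptions, not figures from the text):

$$\eta_{cc} = 0.38 + (1 - 0.38) \times 0.35 \approx 0.60,$$

noticeably higher than either cycle alone.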
Turbines can also use heat-transfer fluids other than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, which are currently in development, can further increase efficiency by optimizing their critical pressure and temperature points.
Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages.
World production
Total world generation in 2021 was 28,003 TWh, including coal (36%), gas (23%), hydro (15%), nuclear (10%), wind (6.6%), solar (3.7%), oil and other fossil fuels (3.1%), biomass (2.4%) and geothermal and other renewables (0.33%).
Production by country
China produced a third of the world's electricity in 2021, largely from coal. The United States produces half as much as China but uses far more natural gas and nuclear.
Environmental concerns
Environmental concerns associated with electricity generation vary with each country's mix of generation sources. In France only 10% of electricity is generated from fossil fuels; the US is higher at 70% and China is at 80%. The cleanliness of electricity depends on its source. Methane leaks (from natural gas used to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US.
According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also create district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output.
A fundamental issue regarding centralised generation and the current electrical generation methods in use today is the significant negative environmental effects that many of the generation processes have. Fuels such as coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere, greatly increasing global greenhouse gas levels. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources.
Per unit of electricity generated coal and gas-fired power life-cycle greenhouse gas emissions are almost always at least ten times that of other generation methods.
Centralised and distributed generation
Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept being that multi-megawatt or gigawatt scale large stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run by fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used.
Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity for smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years, distributed generation has seen a rise in popularity due to its propensity to use renewable generation methods such as rooftop solar.
Technologies
Centralised energy sources are large power plants that produce huge amounts of electricity for a large number of consumers. Most power plants used in centralised generation are thermal power plants, meaning that they use a fuel to heat steam, producing a pressurised gas which in turn spins a turbine and generates electricity. This is the traditional way of producing energy. The process relies on several forms of technology to produce widespread electricity: coal, natural gas, and nuclear forms of thermal generation. More recently, solar and wind generation have also reached large scale.
Solar
Wind
Coal
Natural gas
Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen which in turn combusts and expands through the turbine to force a generator to spin.
Natural gas power plants are more efficient than coal-fired generation; however, they still contribute to climate change, though less severely than coal. Not only do they produce carbon dioxide from the combustion of natural gas, but the extraction of the gas also releases a significant amount of methane into the atmosphere.
Nuclear
Nuclear power plants create electricity through steam turbines where the heat input is from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when nuclear atoms are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process.
Normal emissions due to nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem.
Electricity generation capacity by country
The table lists 45 countries with their total electricity capacities. The data is from 2022.
According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatts (TW), more than four times the total global electricity capacity in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the global average per-capita electricity capacity in 1981.
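As a rough consistency check on these figures, dividing the total capacity by the per-capita figure recovers approximately the world population:

$$\frac{8.9 \times 10^{12}\ \text{W}}{1{,}120\ \text{W/person}} \approx 7.9 \times 10^{9}\ \text{people}.$$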
Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. All developed countries have an average per-capita electricity capacity above the global average per-capita electricity capacity, with the United Kingdom having the lowest average per-capita electricity capacity of all other developed countries.
See also
Glossary of power generation
Cogeneration: the use of a heat engine or power station to generate electricity and useful heat at the same time.
Cost of electricity by source
Diesel generator
Engine-generator
Generation expansion planning
Steam–electric power station
World energy supply and consumption
Notes
References
Power engineering
Fossil fuel power stations
Infrastructure | Electricity generation | [
"Engineering"
] | 3,677 | [
"Energy engineering",
"Construction",
"Power engineering",
"Electrical engineering",
"Infrastructure"
] |
9,541 | https://en.wikipedia.org/wiki/Design%20of%20experiments | The design of experiments, also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
History
Statistical experiments, following Charles S. Peirce
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.
Optimal designs for regression models
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.
Fisher's principles
A methodology for designing experiments was proposed by Ronald Fisher in his innovative works The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was placed first in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
Comparison
In some fields of study it is not possible to have independent measurements traceable to a metrology standard. Comparisons between treatments are then much more valuable and usually preferable, often made against a scientific control or traditional treatment that acts as a baseline.
Randomization
Random assignment is the process of assigning individuals at random to treatment groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, in which effects due to factors other than the treatment appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
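As an illustration of simple random assignment, the sketch below shuffles unit identifiers and deals them into groups of near-equal size. The unit IDs, group labels, and seed are illustrative assumptions, not from any particular study:

```python
import random

# A sketch of simple random assignment; unit IDs, group labels, and the
# seed are illustrative assumptions.
def randomize(units, treatments, seed=42):
    rng = random.Random(seed)            # fixed seed -> reproducible allocation
    shuffled = list(units)
    rng.shuffle(shuffled)
    # Deal shuffled units round-robin so group sizes differ by at most one.
    return {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}

groups = randomize(range(1, 21), ["treatment", "control"])
print(groups)   # each unit had an equal chance of landing in either group
```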
Statistical replication
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides different information from the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
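The orthogonality of a contrast set can be checked numerically: represent each contrast as a vector and confirm that the pairwise dot products vanish. A minimal sketch for T = 4 treatments, using an assumed Helmert-style contrast set:

```python
import numpy as np

# Helmert-style orthogonal contrasts for T = 4 assumed treatments:
# T - 1 = 3 comparison vectors, each summing to zero.
contrasts = np.array([
    [1, -1,  0,  0],    # treatment 1 vs treatment 2
    [1,  1, -2,  0],    # treatments 1,2 vs treatment 3
    [1,  1,  1, -3],    # treatments 1,2,3 vs treatment 4
], dtype=float)

# Off-diagonal zeros in the Gram matrix confirm pairwise orthogonality.
print(contrasts @ contrasts.T)
```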
Multifactorial experiments
Multifactorial experiments are used instead of the one-factor-at-a-time method; they are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components according to what factors the experiment must estimate or test.
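A full factorial design simply enumerates every combination of factor levels. The sketch below generates the run list for an assumed 2×2×2 experiment; the factor names and levels are illustrative:

```python
from itertools import product

# A 2x2x2 full factorial design; factor names and levels are illustrative.
factors = {
    "temperature": [20, 40],
    "pressure": [1, 2],
    "catalyst": ["A", "B"],
}
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)    # 8 runs covering every combination of factor levels
```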
Example
This example of the design of experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.
Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, ..., θ8.
We consider two different experiments:
Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8.
Do the eight weighings according to the following schedule—a weighing matrix:
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is
(Y1 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7 + Y8)/8.
Similar estimates can be found for the weights of the other items:
The question of design of experiments is: which experiment is better?
The variance of the estimate X1 of θ1 is σ2 if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously with that same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that are correlated with each other.
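The variance comparison can be checked by simulation. Since the original weighing matrix is not reproduced above, the sketch below uses a Sylvester-construction Hadamard matrix, one valid ±1 schedule of the kind described (+1 for the left pan, -1 for the right); σ, the true weights, and the trial count are assumed values:

```python
import numpy as np

# Simulation of the two experiments (assumed sigma, weights, trial count).
# H is a Sylvester-construction Hadamard matrix: one valid +-1 weighing
# schedule (+1 = item in the left pan, -1 = item in the right pan).
rng = np.random.default_rng(0)
H = np.array([[1]])
for _ in range(3):                      # double 1x1 -> 2x2 -> 4x4 -> 8x8
    H = np.block([[H, H], [H, -H]])

theta = rng.uniform(1.0, 2.0, size=8)   # true weights (arbitrary values)
sigma, trials = 0.1, 200_000

# Experiment 1: weigh item 1 alone; the estimate is the raw measurement.
x1 = theta[0] + sigma * rng.standard_normal(trials)

# Experiment 2: eight combined weighings per trial; since H^T H = 8 I,
# theta_1 is estimated as (first column of H) . Y / 8.
Y = H @ theta + sigma * rng.standard_normal((trials, 8))
est1 = (Y @ H[:, 0]) / 8

print(np.var(x1), np.var(est1))         # ~sigma^2 versus ~sigma^2 / 8
```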
Many problems of the design of experiments involve combinatorial designs, as in this example and others.
Avoiding false positives
False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.
Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. In a double-blind design, participants are randomly assigned to experimental groups, and neither the participants nor the researchers interacting with them know which group each participant belongs to. Therefore, the researchers cannot influence the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.
P-hacking can be prevented by preregistering studies, in which researchers must submit their data-analysis plan to the journal in which they wish to publish before they start data collection, so that no data-driven manipulation is possible.
Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, with group labels scrambled so that the analyst cannot tell which condition participants belong to before outliers are potentially removed.
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
Discussion topics when setting up an experimental design
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
How many factors does the design have, and are the levels of these factors fixed or random?
Are control conditions needed, and what should they be?
Manipulation checks: did the manipulation really work?
What are the background variables?
What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
What is the relevance of interactions between factors?
What is the influence of delayed effects of substantive factors on outcomes?
How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
What about using a proxy pretest?
Are there confounding variables?
Should the client/patient, researcher or even the analyst of the data be blind to conditions?
What is the feasibility of subsequent application of different conditions to the same units?
How many of each control and noise factors should be taken into account?
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, in which the intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group except the intervention. Thus, when everything else except for one intervention is held constant, researchers can conclude with some confidence that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
Causal attributions
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher; that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal attributions when their design does not allow for them. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is, a third variable. The same goes for studies with correlational design.
Statistical control
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Experimental designs after Fisher
Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, the latter also known as system identification.
Human participant constraints
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
See also
Adversarial collaboration
Bayesian experimental design
Block design
Box–Behnken design
Central composite design
Clinical trial
Clinical study design
Computer experiment
Control variable
Controlling for a variable
Experimetrics (econometrics-related experiments)
Factor analysis
Fractional factorial design
Glossary of experimental design
Grey box model
Industrial engineering
Instrument effect
Law of large numbers
Manipulation checks
Multifactor design of experiments software
One-factor-at-a-time method
Optimal design
Plackett–Burman design
Probabilistic design
Protocol (natural sciences)
Quasi-experimental design
Randomized block design
Randomized controlled trial
Research design
Robust parameter design
Sample size determination
Supersaturated design
Royal Commission on Animal Magnetism
Survey sampling
System identification
Taguchi methods
References
Sources
Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers:
(1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint.
(1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint.
(1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217.Internet Archive Eprint.
(1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint.
(1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company, )
External links
A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Experiments
Industrial engineering
Metascience
Quantitative research
Statistical process control
Statistical theory
Systems engineering
Mathematics in medicine | Design of experiments | [
"Mathematics",
"Engineering"
] | 4,271 | [
"Systems engineering",
"Statistical process control",
"Applied mathematics",
"Industrial engineering",
"Engineering statistics",
"Mathematics in medicine"
] |
9,613 | https://en.wikipedia.org/wiki/Euler%27s%20formula | Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has
e^(ix) = cos x + i sin x,
where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case.
Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics".
When x = π, Euler's formula may be rewritten as e^(iπ) + 1 = 0 or e^(iπ) = −1, which is known as Euler's identity.
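The formula is easy to verify numerically with Python's standard library; the sample points below are arbitrary:

```python
import cmath
import math

# Numerical check of Euler's formula at a few arbitrary points.
for x in (0.0, 1.0, math.pi / 2, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    print(f"x={x:.4f}  exp(ix)={lhs:.6f}  cos+isin={rhs:.6f}  "
          f"diff={abs(lhs - rhs):.2e}")   # diff is at floating-point noise level
```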
History
In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of sqrt(−1)) as:
ix = ln(cos x + i sin x).
Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2πi.
Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum.
Johann Bernoulli had found that
1/(1 + x^2) = (1/2) (1/(1 − ix) + 1/(1 + ix)).
And since
∫ dx/(1 + ax) = (1/a) ln(1 + ax) + C,
the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral.
Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values.
The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel.
Definitions of complex exponentiation
The exponential function e^x for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of e^z for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of e^x to the complex plane.
Differential equation definition
The exponential function e^z is the unique differentiable function of a complex variable z for which the derivative equals the function, d(e^z)/dz = e^z, and e^0 = 1.
Power series definition
For complex z,
e^z = 1 + z/1! + z^2/2! + z^3/3! + ⋯ = Σ (n = 0 to ∞) z^n/n!.
Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines e^z for all complex z.
Limit definition
For complex z,
e^z = lim (n → ∞) (1 + z/n)^n.
Here, n is restricted to positive integers, so there is no question about what the power with exponent n means.
Proofs
Various proofs of the formula are possible.
Using differentiation
This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted).
Consider the function
f(θ) = (cos θ + i sin θ)/e^(iθ) = e^(−iθ)(cos θ + i sin θ)
for real θ. Differentiating gives, by the product rule,
f′(θ) = e^(−iθ)(−sin θ + i cos θ) − i e^(−iθ)(cos θ + i sin θ) = 0.
Thus, f is a constant. Since f(0) = 1, then f(θ) = 1 for all real θ, and thus
e^(iθ) = cos θ + i sin θ.
Using power series
Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i: i^0 = 1, i^1 = i, i^2 = −1, i^3 = −i, and so on, cycling with period four.
Using now the power-series definition from above, we see that for real values of x,
e^(ix) = Σ (n = 0 to ∞) (ix)^n/n! = (1 − x^2/2! + x^4/4! − ⋯) + i(x − x^3/3! + x^5/5! − ⋯) = cos x + i sin x,
where in the last step we recognize the two terms as the Maclaurin series for cos x and sin x. The rearrangement of terms is justified because each series is absolutely convergent.
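The power-series argument can be checked numerically by truncating the series and comparing the accumulated real and imaginary parts against cos and sin. A minimal sketch, with the evaluation point and truncation order chosen arbitrarily:

```python
import math

# Truncated Maclaurin series for exp(ix) at x = 1, split into real and
# imaginary parts; both should approach cos(1) and sin(1).
x, re, im = 1.0, 0.0, 0.0
for n in range(20):
    term = x**n / math.factorial(n)
    re += term * (1, 0, -1, 0)[n % 4]    # real parts of i^n cycle 1, 0, -1, 0
    im += term * (0, 1, 0, -1)[n % 4]    # imaginary parts cycle 0, 1, 0, -1
print(re - math.cos(x), im - math.sin(x))   # both differences are ~0
```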
Using polar coordinates
Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x,
e^(ix) = r(cos θ + i sin θ).
No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of e^(ix) is i e^(ix). Therefore, differentiating both sides gives
i e^(ix) = (cos θ + i sin θ) dr/dx + r(−sin θ + i cos θ) dθ/dx.
Substituting r(cos θ + i sin θ) for e^(ix) and equating real and imaginary parts in this formula gives dr/dx = 0 and dθ/dx = 1. Thus, r is a constant, and θ is x + C for some constant C. The initial values r(0) = 1 and θ(0) = 0 come from e^(i0) = 1, giving r = 1 and θ = x. This proves the formula
e^(ix) = cos x + i sin x.
Applications
Applications in complex number theory
Interpretation of the formula
This formula can be interpreted as saying that the function e^(ix) is a unit complex number, i.e., it traces out the unit circle in the complex plane as x ranges through the real numbers. Here x is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians.
The original proof is based on the Taylor series expansions of the exponential function e^z (where z is a complex number) and of cos x and sin x for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers x.
A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy, and its complex conjugate z* = x − iy, can be written as
z = x + iy = r(cos φ + i sin φ) = r e^(iφ),
z* = x − iy = r(cos φ − i sin φ) = r e^(−iφ),
where
x = Re z is the real part,
y = Im z is the imaginary part,
r = |z| = sqrt(x^2 + y^2) is the magnitude of z, and
φ = arg z = atan2(y, x) is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. Many texts write φ = tan^(−1)(y/x) instead of φ = atan2(y, x), but the first equation needs adjustment when x ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors (x, y) and (−x, −y) differ by π radians, but have the identical value of tan φ = y/x.
Use of the formula to define the logarithm of complex numbers
Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation):
a = e^(ln a),
and that
e^a e^b = e^(a + b),
both valid for any complex numbers a and b. Therefore, one can write:
z = |z| e^(iφ) = e^(ln |z|) e^(iφ) = e^(ln |z| + iφ)
for any z ≠ 0. Taking the logarithm of both sides shows that
ln z = ln |z| + iφ,
and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued.
Finally, the other exponential law
(e^a)^k = e^(ak),
which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula.
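The multi-valuedness can be illustrated with Python's cmath module: the principal value ln |z| + iφ can be shifted by any integer multiple of 2πi and still exponentiate back to z. The sample point z is arbitrary:

```python
import cmath
import math

# cmath.log returns the principal value ln|z| + i*arg(z), arg in (-pi, pi];
# shifting by 2*pi*i*k gives the other branches. The point z is arbitrary.
z = -1 + 1j
principal = cmath.log(z)
for k in (-1, 0, 1):
    branch = principal + 2j * math.pi * k
    print(k, branch, cmath.exp(branch))   # every branch exponentiates back to z
```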
Relationship to trigonometry
Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function:
cos x = Re(e^(ix)) = (e^(ix) + e^(−ix))/2,
sin x = Im(e^(ix)) = (e^(ix) − e^(−ix))/(2i).
The two equations above can be derived by adding or subtracting Euler's formulas:
e^(ix) = cos x + i sin x,
e^(−ix) = cos x − i sin x,
and solving for either cosine or sine.
These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have:
cos iy = (e^(−y) + e^(y))/2 = cosh y,
sin iy = (e^(−y) − e^(y))/(2i) = i sinh y.
In addition, Euler's formula gives the hyperbolic relations cosh ix = cos x and sinh ix = i sin x.
Complex exponentials can simplify trigonometry, because they are mathematically easier to manipulate than their sine and cosine components. One technique is simply to convert sines and cosines into equivalent expressions in terms of exponentials sometimes called complex sinusoids. After the manipulations, the simplified result is still real-valued. For example:
Another technique is to represent sines and cosines in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example:
This formula is used for recursive generation of cos nx for integer values of n and arbitrary x (in radians).
Considering cos x as a parameter in the equation above yields the recursive formula for Chebyshev polynomials of the first kind.
Topological interpretation
In the language of topology, Euler's formula states that the imaginary exponential function x ↦ e^(ix) is a (surjective) morphism of topological groups from the real line ℝ to the unit circle S^1. In fact, this exhibits ℝ as a covering space of S^1. Similarly, Euler's identity says that the kernel of this map is τℤ, where τ = 2π.
Other applications
In differential equations, the function e^(ix) is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation.
In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor.
In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies:
e^(rx) = cos x + r sin x,
and the element e^(rx) is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space.
Other special cases
The special cases that evaluate to units illustrate rotation around the complex unit circle: e^(i0) = 1, e^(iπ/2) = i, e^(iπ) = −1, and e^(i3π/2) = −i.
The special case at x = τ (where τ = 2π, one turn) yields e^(iτ) = 1 + 0. This is also argued to link five fundamental constants with three basic arithmetic operations, but, unlike Euler's identity, without rearranging the addends from the general case:
e^(iτ) = cos τ + i sin τ = 1 + 0.
An interpretation of the simplified form e^(iτ) = 1 is that rotating by a full turn is an identity function.
See also
Complex number
Euler's identity
Integration using Euler's formula
History of Lorentz transformations
List of topics named after Leonhard Euler
References
Further reading
External links
Elements of Algebra
Theorems in complex analysis
Articles containing proofs
Mathematical analysis
E (mathematical constant)
Trigonometry
Leonhard Euler | Euler's formula | [
"Mathematics"
] | 2,139 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in complex analysis",
"E (mathematical constant)",
"Articles containing proofs"
] |
9,616 | https://en.wikipedia.org/wiki/Evolutionarily%20stable%20strategy | An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy (or set of strategies) which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.
In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change).
History
Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it.
Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author.
The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour.
Uses of ESS:
The ESS was a major element used to analyze evolution in Richard Dawkins' bestselling 1976 book The Selfish Gene.
The ESS was first used in the social sciences by Robert Axelrod in his 1984 book The Evolution of Cooperation. Since then, it has been widely used in the social sciences, including anthropology, economics, philosophy, and political science.
In the social sciences, the primary interest is not in an ESS as the end of biological evolution, but as an end point in cultural evolution or individual learning.
In evolutionary psychology, ESS is used primarily as a model for human biological evolution.
Motivation
The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies.
Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives.
Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.
Nash equilibrium
An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if for both players, for any strategy T:
E(S,S) ≥ E(T,S)
In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better).
A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS.
Maynard Smith and Price specify two conditions for a strategy S to be an ESS. For all T≠S, either
E(S,S) > E(T,S), or
E(S,S) = E(T,S) and E(S,T) > E(T,T)
The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T.
There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that for all T≠S
E(S,S) ≥ E(T,S), and
E(S,T) > E(T,T)
In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second.
In words, this definition says: the payoff for a player when both play strategy S is at least as high as the payoff the player would get by switching to another strategy T while the opponent keeps S; and the player's payoff when only the opponent switches to T is strictly higher than the payoff when both players switch to T.
This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.
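Maynard Smith's two conditions translate directly into a check over a payoff table. The sketch below tests each pure strategy of a symmetric two-strategy game; the payoff values are illustrative (roughly a Hawk–Dove game with V = 2, C = 4), not taken from the text:

```python
# Maynard Smith's ESS test for a symmetric two-strategy game. The payoffs
# E(S, T) are illustrative (roughly Hawk-Dove with V = 2, C = 4).
E = {("H", "H"): -1, ("H", "D"): 2,
     ("D", "H"):  0, ("D", "D"): 1}

def is_ess(s, strategies=("H", "D")):
    for t in strategies:
        if t == s:
            continue
        if E[s, s] > E[t, s]:
            continue        # strict Nash equilibrium against t
        if E[s, s] == E[t, s] and E[s, t] > E[t, t]:
            continue        # Maynard Smith's second condition
        return False
    return True

for s in ("H", "D"):
    print(s, is_ess(s))     # neither pure strategy is an ESS here
```

Against this matrix the check reports that neither pure strategy is an ESS, consistent with the game of chicken example discussed below, where only a mixed strategy is evolutionarily stable.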
Examples of differences between Nash equilibria and ESSes
In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.
Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B).
Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than D does against itself. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS.
Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the Game of chicken. There are two pure strategy Nash equilibria in this game (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay are ESSes. There is a third Nash equilibrium, a mixed strategy which is an ESS for this game (see Hawk-dove game and Best response for explanation).
This last example points to an important difference between Nash equilibria and ESS. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESS are defined in terms of strategies themselves. The equilibria defined by ESS must always be symmetric, and thus have fewer equilibrium points.
Vs. evolutionarily stable state
In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations.
In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade. Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory.
In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical system, or evolutionary game theory. This is now called convergent stability.
B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS.
Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic.
Stochastic ESS
In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist. In an infinite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p, as illustrated by the evolution of bet-hedging.
Prisoner's dilemma
A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.
Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and Defect with Defect.
If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect, so the mutant's share of the population will be kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them. If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate, and in favour of Tit-for-Tat. This is because cooperating pays less than defecting when the opponent defects.
This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.
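The pairwise payoffs driving this discussion can be reproduced with a small simulation of the iterated game. A minimal sketch, assuming the standard payoff values T = 5, R = 3, P = 1, S = 0 and an arbitrary round count:

```python
# Iterated Prisoner's dilemma with assumed standard payoffs
# (T=5, R=3, P=1, S=0) and an arbitrary round count.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the opponent's history
        pa, pb = PAYOFF[a, b]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained cooperation
print(play(tit_for_tat, always_defect))   # (199, 204): one betrayal, then mutual defection
```

In a population of Tit-for-Tat players, each Tit-for-Tat pairing earns 600 while an Always Defect mutant earns only 204 per pairing, which matches the invasion argument above.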
Human behavior
The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies.
Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.
See also
Antipredator adaptation
Behavioral ecology
Evolutionary psychology
Fitness landscape
Hawk–dove game
Koinophilia
Sociobiology
War of attrition (game)
References
Further reading
Classic reference textbook.
. An 88-page mathematical introduction; see Section 3.8. Free online at many universities.
Parker, G. A. (1984) Evolutionary stable strategies. In Behavioural Ecology: an Evolutionary Approach (2nd ed) Krebs, J. R. & Davies N.B., eds. pp 30–61. Blackwell, Oxford.
. A comprehensive reference from a computational perspective; see Section 7.7. Downloadable free online.
Maynard Smith, John. (1982) Evolution and the Theory of Games. . Classic reference.
External links
Evolutionarily Stable Strategies at Animal Behavior: An Online Textbook by Michael D. Breed.
Game Theory and Evolutionarily Stable Strategies, Kenneth N. Prestwich's site at College of the Holy Cross.
Evolutionarily stable strategies knol Archived: https://web.archive.org/web/20091005015811/http://knol.google.com/k/klaus-rohde/evolutionarily-stable-strategies-and/xk923bc3gp4/50#
Game theory equilibrium concepts
Evolutionary game theory | Evolutionarily stable strategy | [
"Mathematics"
] | 3,274 | [
"Game theory",
"Game theory equilibrium concepts",
"Evolutionary game theory"
] |
9,630 | https://en.wikipedia.org/wiki/Ecology | Ecology () is the natural science of the relationships among living organisms and their environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology () was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization, spanning micro-level phenomena (e.g., cells) to planetary-scale phenomena (e.g., the biosphere). Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explains the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open about broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

$$\frac{dN}{dT} = bN - dN = (b - d)N = rN$$

where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
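To make the model concrete, here is a minimal Python sketch of the exponential model (not taken from any particular ecological software; the starting population and the birth and death rates are hypothetical values chosen only for illustration):

```python
# Minimal sketch of the exponential (Malthusian) model dN/dT = bN - dN = rN,
# iterated in discrete yearly steps. All parameter values are hypothetical.

def exponential_growth(n0, b, d, years):
    """Project population size under constant per capita birth (b) and death (d) rates."""
    r = b - d                 # per capita rate of population change
    n = n0
    trajectory = [n]
    for _ in range(years):
        n += r * n            # discrete-time approximation of dN/dT = rN
        trajectory.append(n)
    return trajectory

# Example: 100 individuals, 30% per capita births and 20% deaths per year.
print(exponential_growth(100, b=0.30, d=0.20, years=10))
```

Because r is constant, each year multiplies the population by the same factor (1 + r), producing exponential growth when r > 0 and exponential decline when r < 0.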
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

$$\frac{dN(t)}{dt} = rN(t) - \alpha N(t)^2$$

where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change, commonly known as the intrinsic rate of growth, and $\alpha$ is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size ($dN(t)/dt$) will grow to approach equilibrium, where ($dN(t)/dt = 0$), when the rates of increase and crowding are balanced, $r/\alpha$. A common, analogous model fixes the equilibrium, $r/\alpha$, as K, which is known as the "carrying capacity."
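A numerical sketch of the Verhulst equation (again with hypothetical parameter values) shows the behaviour the formula describes: growth is nearly exponential while N is small and levels off as N approaches the equilibrium r/α:

```python
# Minimal sketch of the logistic equation dN/dt = rN - (alpha)N^2,
# integrated with simple Euler steps. Parameter values are hypothetical.

def logistic_growth(n0, r, alpha, steps, dt=0.1):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += (r * n - alpha * n * n) * dt   # the crowding term slows growth as N rises
        trajectory.append(n)
    return trajectory

r, alpha = 0.5, 0.005        # equilibrium (carrying capacity) K = r / alpha = 100
pop = logistic_growth(n0=2, r=r, alpha=alpha, steps=300)
print(round(pop[-1], 2))     # approaches K = 100
```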
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
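As an illustration of such model selection, the Akaike information criterion trades goodness of fit against the number of fitted parameters, AIC = 2k − 2 ln(L). The sketch below is a minimal example with hypothetical log-likelihood values, not output from a real analysis:

```python
# Sketch of AIC-based comparison of competing population models.
# The log-likelihood values below are hypothetical placeholders.

def aic(log_likelihood, k):
    """Akaike information criterion for a model with k fitted parameters."""
    return 2 * k - 2 * log_likelihood

models = {
    "exponential (k=2)": aic(log_likelihood=-104.3, k=2),
    "logistic (k=3)": aic(log_likelihood=-98.7, k=3),
}
best = min(models, key=models.get)   # lower AIC wins the fit/complexity trade-off
print(models, "-> preferred:", best)
```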
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
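The classic patch-occupancy formulation of these ideas is the Levins model, dp/dt = cp(1 − p) − ep, where p is the fraction of occupied patches, c is the colonization rate, and e is the local extinction rate. The sketch below iterates it numerically with hypothetical rates; it is a conceptual illustration, not a calibrated model:

```python
# Sketch of the Levins metapopulation model: dp/dt = c*p*(1 - p) - e*p.
# Colonization (c) and extinction (e) rates are hypothetical.

def levins(p0, c, e, steps, dt=0.1):
    p = p0
    for _ in range(steps):
        p += (c * p * (1 - p) - e * p) * dt
    return p

# The occupied fraction approaches the equilibrium 1 - e/c (here 0.75)
# as long as colonization outpaces extinction (c > e).
print(round(levins(p0=0.05, c=0.4, e=0.1, steps=2000), 3))
```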
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called the food chain. The interlocking food chains within an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigation, e.g., into the gut contents of organisms, which can be difficult to decipher; alternatively, stable isotopes can be used to trace the flow of nutrients and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life.
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
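One way to see why omnivory blurs discrete levels is to compute trophic position as one plus the mean trophic level of a consumer's prey, a convention used in food-web analysis. In the deliberately tiny, hypothetical web below, the omnivore lands at a fractional level:

```python
# Sketch: trophic position = 1 + mean trophic level of prey.
# The mini food web is hypothetical; real webs are far larger and more tangled.

food_web = {
    "grass": [],                                  # autotroph: level 1 by definition
    "grasshopper": ["grass"],
    "sparrow": ["grasshopper"],
    "omnivorous bird": ["grass", "grasshopper"],  # eats plant and animal tissue
}

def trophic_level(species, web):
    prey = web[species]
    if not prey:
        return 1.0
    return 1.0 + sum(trophic_level(p, web) for p in prey) / len(prey)

for s in food_web:
    print(s, round(trophic_level(s, food_web), 2))
# grass 1.0, grasshopper 2.0, sparrow 3.0, omnivorous bird 2.5 (fractional)
```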
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics and other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature, as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity also stems from the interplay among levels of biological organization, as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, a point otherwise captured in the maxim attributed to Aristotle that 'the whole is greater than the sum of its parts'.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to its functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoiding, fleeing, or defending themselves. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
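The continuum can be caricatured with the logistic model's per-capita growth term r(1 − N/K): in the hypothetical parameterization below, the r-strategist outgrows the K-strategist in an empty habitat, while the K-strategist keeps growing at densities where the r-strategist is already in decline. The parameter values are illustrative only:

```python
# Sketch contrasting hypothetical r- and K-selected strategies using the
# per-capita logistic growth rate g(N) = r * (1 - N / K). Values are illustrative.

def per_capita_growth(n, r, k):
    return r * (1 - n / k)

r_strategist = dict(r=1.0, k=500)    # high fecundity, saturates quickly
k_strategist = dict(r=0.2, k=2000)   # low fecundity, efficient at high density

for density in (10, 400, 1500):
    g_r = per_capita_growth(density, **r_strategist)
    g_k = per_capita_growth(density, **k_strategist)
    print(f"N={density}: r-strategist {g_r:+.3f}, K-strategist {g_k:+.3f}")
# At N=10 the r-strategist grows roughly five times faster; at N=1500 its
# growth is negative while the K-strategist's is still positive.
```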
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the founding of the journal Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity and benefit to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of its wetland environments by implementing soil amendments that improve groundwater storage and flow, and by trimming or removing vegetation that could cause harm to water quality. Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. After the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hν → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressure place physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and feed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles.
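As a back-of-envelope illustration of that 100-year factor, a methane flux can be expressed in CO2-equivalent terms by multiplying by the potency ratio of 23 cited above; the emission quantity in this sketch is hypothetical:

```python
# Sketch: converting a methane emission to CO2-equivalents using the
# 100-year factor of 23 cited in the text. The tonnage is hypothetical.

GWP_CH4_100YR = 23   # CH4 absorbs ~23x more long-wave radiation than CO2 (100-yr basis)

def co2_equivalent(ch4_tonnes, gwp=GWP_CH4_100YR):
    return ch4_tonnes * gwp

print(co2_equivalent(1000))   # 1,000 t CH4 ~ 23,000 t CO2-equivalent
```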
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel and Eugenius Warming, two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" (German: Ökologie) was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classic book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development
Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology
Notes
References
External links
The Nature Education Knowledge Project: Ecology
Biogeochemistry
Emergence
"Chemistry",
"Biology",
"Environmental_science"
] | 13,719 | [
"Ecology terminology",
"Environmental chemistry",
"Ecology",
"Chemical oceanography",
"Biogeochemistry"
] |
Ecosystem
An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors such as climate, parent material which forms the soil and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and the atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend, and of which they may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
Definition
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows.
"Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.
Origin and development of the term
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope".
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.
Processes
External and internal factors
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside.
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function.
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.
Primary production
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
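As a minimal numeric sketch of the GPP–NPP relationship just described (the figures below are invented round numbers, not measurements from any real ecosystem):

```python
# A minimal carbon-budget sketch: net primary production (NPP) is gross
# primary production (GPP) minus plant (autotrophic) respiration.
# Values are illustrative, not measurements.
gpp = 1000.0               # gross primary production, g C / m^2 / yr
plant_respiration = 500.0  # roughly half of GPP, as the text notes

npp = gpp - plant_respiration
print(f"NPP = {npp} g C / m^2 / yr")  # NPP = 500.0 g C / m^2 / yr
```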
Energy flow
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
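The same bookkeeping extends one level up. A hedged Python sketch, again with invented round numbers, of how ecosystem respiration combines autotrophic and heterotrophic terms to give net ecosystem production:

```python
# Illustrative only: ecosystem respiration sums autotrophic (plant) and
# heterotrophic (animal + decomposer) respiration; net ecosystem production
# (NEP) is GPP minus that total. Numbers are invented for the example.
gpp = 1000.0                     # g C / m^2 / yr
autotrophic_respiration = 500.0
heterotrophic_respiration = 450.0

ecosystem_respiration = autotrophic_respiration + heterotrophic_respiration
nep = gpp - ecosystem_respiration
print(f"NEP = {nep} g C / m^2 / yr")  # 50.0: net carbon accumulation absent disturbance
```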
Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and on earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains, and these webs present a number of common, non-random properties in the topology of their network.
Decomposition
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.
Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.
The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.
Decomposition rates
Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.
Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.
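One common, simplified way to capture the temperature dependence described in this section is a first-order litter-decay model whose rate constant scales with a Q10 factor (the rate multiplies by Q10 for every 10 °C of warming). The sketch below is a toy model under assumed parameters; the base rate k_ref and Q10 = 2 are illustrative choices, not values from the text:

```python
# Toy first-order decay of litter mass with Q10 temperature sensitivity.
# k_ref and q10 below are assumed, illustrative parameters.
import math

def litter_remaining(m0, k_ref, temp_c, t_years, t_ref_c=20.0, q10=2.0):
    """Mass left after t_years of first-order decay, rate adjusted by Q10."""
    k = k_ref * q10 ** ((temp_c - t_ref_c) / 10.0)
    return m0 * math.exp(-k * t_years)

print(litter_remaining(100.0, k_ref=0.3, temp_c=20.0, t_years=1))  # ~74.1 g left
print(litter_remaining(100.0, k_ref=0.3, temp_c=30.0, t_years=1))  # ~54.9 g: faster decay
```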
Dynamics and resilience
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leave behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. A less severe disturbance like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbance result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder-than-usual winter, and a pest outbreak all are short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation, which ceased in 1850 when large areas reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.
Nutrient cycling
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.
Macronutrients, which are required by all plants in large quantities, include the primary nutrients nitrogen, phosphorus, and potassium (the most limiting, as they are used in the largest amounts) and the secondary major nutrients calcium, magnesium, and sulfur (less often limiting). Micronutrients, required by all plants in small quantities, include boron, chloride, copper, iron, manganese, molybdenum, and zinc. Finally, there are also beneficial nutrients, which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, and vanadium.
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.
Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.
Function and biodiversity
Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case for example for exotic species.
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.
An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.
Study approaches
Ecosystem ecology
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.
Classifications
Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.
Human interactions with ecosystems
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.
Ecosystem goods and services
Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.
Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.
Degradation and decline
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
Management
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).
Restoration and sustainable development
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.
See also
Complex system
Earth science
Ecoregion
Ecological resilience
Ecosystem-based adaptation
Artificialization
Types
The following articles are types of ecosystems for particular types of regions or zones:
Aquatic ecosystem
Freshwater ecosystem
Lake ecosystem (lentic ecosystem)
River ecosystem (lotic ecosystem)
Marine ecosystem
Large marine ecosystem
Tropical salt pond ecosystem
Terrestrial ecosystem
Boreal ecosystem
Groundwater-dependent ecosystems
Montane ecosystem
Urban ecosystem
Ecosystems grouped by condition
Agroecosystem
Closed ecosystem
Depauperate ecosystem
Novel ecosystem
Reference ecosystem
Instances
Ecosystem instances in specific regions of the world:
Greater Yellowstone Ecosystem
Leuser Ecosystem
Longleaf pine Ecosystem
Tarangire Ecosystem
References
External links
"Biology"
] | 5,994 | [
"Symbiosis",
"Ecosystems"
] |
Engine
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy.
Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form; thus heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing.
Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion.
Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine).
Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.
Emission/Byproducts
All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (as in all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen, resulting in small emissions of NOx. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, CO2, a greenhouse gas, is emitted. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine, not a heat engine.
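The stoichiometry behind the hydrocarbon statement can be made concrete. The sketch below treats gasoline as pure octane (C8H18), a simplification made here purely for illustration:

```python
# Stoichiometry of hydrocarbon combustion: every carbon atom in the fuel
# leaves as CO2. Treating gasoline as octane (C8H18) is a simplification.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol
m_octane = 8 * M_C + 18 * M_H           # ~114.2 g/mol
m_co2 = M_C + 2 * M_O                   # ~44.0 g/mol

co2_per_kg_fuel = 8 * m_co2 / m_octane  # 8 CO2 molecules per octane molecule
print(f"{co2_per_kg_fuel:.2f} kg CO2 per kg of fuel")  # ~3.08
```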
Terminology
The word engine derives from Old French engin, from the Latin ingenium, the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the Industrial Revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses.
In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets.
When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb movēre, 'to move' or 'to set in motion'. Thus a motor is a device that imparts motion.
Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though rocket motors consume fuel.
A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam).
History
Antiquity
Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times.
According to Strabo, a water-powered mill was built in Kaberia in the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed wheels made of wood and metal, to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism, used complex trains of gears and dials to act as calendars or predict astronomical events. In a 4th-century AD poem, Ausonius mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind- and steam-powered machines in the 1st century AD, including the Aeolipile and the vending machine; often these machines were associated with worship, such as animated altars and automated temple doors.
Medieval
Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour.
In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629.
In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. The innovation later spread throughout Europe.
Industrial Revolution
The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation.
Internal combustion piston engines were tested in France in 1807 by de Rivaz and, independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine.
The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir.
In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft.
Automobiles
The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the thermally more-efficient Diesel engine is used for trucks and buses. However, in recent years, turbocharged Diesel engines have become increasingly popular in automobiles, especially outside of the United States, even for quite small cars.
Horizontally-opposed pistons
In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as “flat” or “boxer” engines due to their shape and low profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles. Opposed four- and six-cylinder engines continue to be used as a power source in small, propeller-driven aircraft.
Advancement
The continued use of internal combustion engines in automobiles is partly due to the improvement of engine control systems, such as on-board computers providing engine management processes, and electronically controlled fuel injection. Forced air induction by turbocharging and supercharging has increased the power output of smaller displacement engines that are lighter in weight and more fuel-efficient at normal cruise power. Similar changes have been applied to smaller Diesel engines, giving them almost the same performance characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine-propelled cars in Europe. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines.
Increasing power
In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S. models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements.
Combustion efficiency
Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around .
Engine configuration
Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. Four-cylinder designs with power ratings from 19 to 120 hp (14 to 90 kW) predominated in a majority of the models. Several three-cylinder, two-stroke-cycle models were built, while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other to create the W shape, sharing the same crankshaft.
The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW, and can use up to 250 tonnes of fuel per day.
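A quick plausibility check on these figures: for any rotating engine, power, speed, and torque are tied together by P = 2πnT. A short Python sketch using the numbers quoted above:

```python
# Back-of-the-envelope torque for the RTA96-C from power and shaft speed,
# using P = 2 * pi * n * T with the figures quoted in the text.
import math

power_w = 80e6                      # 80 MW
speed_rpm = 102.0
speed_rev_per_s = speed_rpm / 60.0  # = 1.7 Hz, as stated above

torque_nm = power_w / (2 * math.pi * speed_rev_per_s)
print(f"torque ~ {torque_nm / 1e6:.1f} MN*m")  # roughly 7.5 MN*m
```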
Types
An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs.
Heat engine
Combustion engine
Combustion engines are heat engines driven by the heat of a combustion process.
Internal combustion engine
The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons or turbine blades or a nozzle, and by moving it over a distance, generates mechanical work.
External combustion engine
An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion of an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine).
"Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; but are not then strictly classed as external combustion engines, but as external thermal engines.
The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas.
Air-breathing combustion engines
Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines.
A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly.
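Specific impulse makes that comparison concrete: it relates thrust to the weight-flow of propellant the vehicle must carry, F = ṁ·g0·Isp. The flow rates in the sketch below are invented, round figures chosen only to contrast an air-breather (which counts only fuel) with a rocket (which counts fuel plus oxidizer):

```python
# Illustrative comparison of specific impulse, Isp = F / (mdot * g0).
# The thrust and flow rates below are assumed, round numbers.
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n: float, propellant_flow_kg_s: float) -> float:
    """Seconds of Isp for a given thrust and carried-propellant flow."""
    return thrust_n / (propellant_flow_kg_s * G0)

# Same 50 kN of thrust; the jet carries only ~1.4 kg/s of fuel, while the
# rocket carries ~16 kg/s of fuel plus oxidizer.
print(specific_impulse(50e3, 1.4))   # ~3600 s (air-breather-like)
print(specific_impulse(50e3, 16.0))  # ~320 s (rocket-like)
```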
Examples
Typical air-breathing engines include:
Reciprocating engine
Steam engine
Gas turbine
Airbreathing jet engine
Turbo-propeller engine
Pulse detonation engine
Pulse jet
Ramjet
Scramjet
Liquid air cycle engine/Reaction Engines SABRE.
Environmental effects
The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution-producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and a large battery bank; these are starting to become a popular option because of growing environmental awareness.
Air quality
Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, and traces of other compounds such as fuel additives and lubricants, as well as halogen and metallic compounds and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them. Also, the resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world are contributing to the global greenhouse effect – a primary concern regarding global warming.
Non-combusting heat engines
Some engines convert heat from noncombustive processes into mechanical work, for example a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, or a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine.
Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called "TA engines") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices.
Stirling engines can be another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha-type Stirling engine, in which gas flows, via a regenerator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the regenerator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder.
Non-thermal chemically powered motor
Non-thermal motors are usually powered by a chemical reaction, but are not heat engines. Examples include:
Molecular motor – motors found in living things
Synthetic molecular motor.
Electric motor
An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical.
Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application.
The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks.
To reduce the electric energy consumption from motors and their associated carbon footprints, various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher-efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency.
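A rough sense of the scale of these savings can be had from a back-of-the-envelope calculation. The sketch below is illustrative only; the motor load, duty hours, and efficiency figures are assumptions rather than data for any standard motor:

```python
# Illustrative only: all figures below are assumed, not taken from any standard.
def annual_input_kwh(shaft_kw, hours_per_year, efficiency):
    """Electrical energy drawn per year to deliver a given mechanical output."""
    return shaft_kw * hours_per_year / efficiency

shaft_kw = 75.0     # assumed mechanical load on the motor
hours = 6000        # assumed duty hours per year
old = annual_input_kwh(shaft_kw, hours, efficiency=0.90)
new = annual_input_kwh(shaft_kw, hours, efficiency=0.93)
print(f"Saved per year: {old - new:,.0f} kWh")  # roughly 16,000 kWh
```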
By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor.
Physically powered motor
Some motors are powered by potential or kinetic energy, for example some funiculars, gravity plane and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands.
Historic military siege engines, including large catapults, trebuchets, and (to some extent) battering rams, were powered by potential energy.
Pneumatic motor
A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or a piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. Pneumatic motors have found widespread success in the hand-held tool industry and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option in the transportation industry.
Hydraulic motor
A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery.
Hybrid
Some motor units can have multiple sources of energy. For example, a plug-in hybrid electric vehicle's electric motor could source electricity either from a battery or from fossil fuel input via an internal combustion engine and a generator.
Performance
The following are used in the assessment of the performance of an engine.
Speed
Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is typically measured in revolutions per minute (rpm).
Thrust
Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it.
Torque
Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its perpendicular distance from the shaft axis.
Power
Power is the measure of how fast work is done.
Efficiency
Efficiency is the proportion of useful energy output to total energy input.
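Taken together, these definitions translate into simple formulas. The following sketch is illustrative only; the force, lever arm, speed, and fuel-input figures are assumed values, not data for any particular engine:

```python
import math

# Illustrative values; not measurements of any particular engine.
force_n = 400.0    # force acting at the crank pin
radius_m = 0.05    # perpendicular distance of the force from the shaft axis
rpm = 3000.0       # crankshaft speed

torque_nm = force_n * radius_m          # torque = force x distance
omega = rpm * 2 * math.pi / 60          # angular speed in rad/s from rev/min
power_w = torque_nm * omega             # power = torque x angular speed

fuel_power_in_w = 20_000.0              # assumed rate of fuel energy input
efficiency = power_w / fuel_power_in_w  # useful output / total input

print(f"torque = {torque_nm:.1f} N*m, power = {power_w:.0f} W, "
      f"efficiency = {efficiency:.1%}")
```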
Sound levels
Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air.
Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets.
Engines by use
Particularly notable kinds of engines include:
Aircraft engine
Automobile engine
Model engine
Motorcycle engine
Marine propulsion engines such as Outboard motor
Non-road engine is the term used to define engines that are not used by vehicles on roadways.
Railway locomotive engine
Spacecraft propulsion engines such as Rocket engine
Traction engine
See also
Aircraft engine
Automobile engine replacement
Electric motor
Engine cooling
Engine swap
Gasoline engine
HCCI engine
Hesselman engine
Hot bulb engine
IRIS engine
Micromotor
Flagella – biological motor used by some microorganisms
Nanomotor
Molecular motor
Synthetic molecular motor
Adiabatic quantum motor
Multifuel
Reaction engine
Solid-state engine
Timeline of heat engine technology
Timeline of motor and engine technology
References
Citations
Sources
External links
Detailed Engine Animations
Working 4-Stroke Engine – Animation
Animated illustrations of various engines
5 Ways to Redesign the Internal Combustion Engine
Article on Small SI Engines.
Article on Compact Diesel Engines.
Types Of Engines
Motors (1915) by James Slough Zerbe. | Engine | [
"Physics",
"Technology"
] | 5,424 | [
"Physical systems",
"Machines",
"Engine technology",
"Engines"
] |
9,644 | https://en.wikipedia.org/wiki/European%20Environment%20Agency | The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment.
Its goal is to help those involved in developing, implementing and evaluating environmental policy, and to inform the general public.
Organization
The EEA was established by the European Economic Community (EEC) Regulation 1210/1990 (amended by EEC Regulation 933/1999 and EC Regulation 401/2009) and became operational in 1994, headquartered in Copenhagen, Denmark.
The agency is governed by a management board composed of representatives of the governments of its 32 member states, a European Commission representative and two scientists appointed by the European Parliament, assisted by its Scientific Committee.
The current Executive Director of the agency is Leena Ylä-Mononen, who has been appointed for a five-year term, starting on 1 June 2023. Ms Ylä-Mononen is the successor of professor Hans Bruyninckx.
Member countries
The member states of the European Union are members; however, other states may become members by means of agreements concluded between them and the EU.
It was the first EU body to open its membership to the 13 candidate countries (pre-2004 enlargement).
The EEA has 32 member countries and six cooperating countries. The members are the 27 European Union member states together with Iceland, Liechtenstein, Norway, Switzerland and Turkey.
Since Brexit in 2020, the UK is not a member of the EU anymore and therefore not a member state of the EEA.
The six Western Balkan countries are cooperating countries: Albania, Bosnia and Herzegovina, Montenegro, North Macedonia, Serbia as well as Kosovo under the UN Security Council Resolution 1244/99. These cooperation activities are integrated into Eionet and are supported by the EU under the "Instrument for Pre-Accession Assistance".
The EEA is an active member of the EPA Network.
Reports, data and knowledge
The European Environment Agency (EEA) produces assessments based on quality-assured data on a wide range of issues from biodiversity, air quality, transport to climate change. These assessments are closely linked to the European Union's environment policies and legislation and help monitor progress in some areas and indicate areas where additional efforts are needed.
As required by its founding regulation, the EEA publishes its flagship report, the State and Outlook of Europe's Environment (SOER), an integrated assessment analysing trends, progress towards targets, and the outlook for the mid to long term. The agency also publishes an annual report on air quality in Europe's most polluted regions, detailing fine particulate matter (PM2.5).
The EEA shares this information, including the datasets used in its assessments, through its main website and a number of thematic information platforms such as the Biodiversity Information System for Europe (BISE), the Water Information System for Europe (WISE) and Climate-ADAPT. The Climate-ADAPT knowledge platform presents information and data on expected climatic changes, the vulnerability of regions and sectors, adaptation case studies, adaptation options, adaptation planning tools, and EU policy.
European Nature Information System
The European Nature Information System (EUNIS) provides access to the publicly available data in the EUNIS database for species, habitat types and protected sites across Europe. It is part of the European Biodiversity data centre (BDC), and is maintained by the EEA.
The database contains data
on species, habitat types and designated sites from the framework of Natura 2000,
from material compiled by the European Topic Centre on Biological Diversity
mentioned in relevant international conventions and in the IUCN Red Lists,
collected in the framework of the EEA's reporting activities.
European environment information and observation network
The European Environment Information and Observation Network (Eionet) is a collaboration network between EEA member countries and non-member, cooperating nations. Cooperation is facilitated through different national environmental agencies, ministries, or offices. Eionet encourages the sharing of data and highlights specific topics for discussion and cooperation among participating countries.
Eionet currently covers seven European Topic Centres (ETCs):
ETC on Biodiversity and Ecosystems (ETC BE)
ETC on Climate Change Adaptation and LULUCF (ETC CA)
ETC on Climate Change Mitigation (ETC CM)
ETC on Data Integration and Digitalisation (ETC DI)
ETC on Human Health and the Environment (ETC HE)
ETC on Circular Economy and Resource Use (ETC CE)
ETC on Sustainability Transitions (ETC ST)
The European Environment Agency (EEA) implements the "Shared Environmental Information System" principles and best practices via projects such as the ENI SEIS II East and ENI SEIS II South projects, which support environmental protection within the six Eastern Partnership (ENP) countries and contribute to reducing marine pollution in the Mediterranean through shared availability of, and access to, relevant environmental information.
Budget management and discharge
As for every EU body and institution, the EEA's budget is subject to a discharge process, consisting of external examination of its budget execution and financial management, to ensure sound financial management of its budget. Since its establishment, the EEA has been granted discharge for its budget without exception. The EEA provides full access to its administrative and budgetary documents in its public documents register.
The discharge process for the 2010 budget required additional clarifications. In February 2012, the European Parliament's Committee on Budgetary Control published a draft report identifying areas of concern in the use of funds for the 2010 budget, such as a 26% budget increase from 2009 to 2010, to €50,600,000. The report also questioned whether the principles of maximum competition and value for money had been honored in hiring, and raised the possibility of fictitious employees.
The EEA's Executive Director refuted allegations of irregularities in a public hearing. On 27 March 2012 Members of the European Parliament (MEPs) voted on the report and commended the cooperation between the Agency and NGOs working in the environmental area. On 23 October 2012, the European Parliament voted and granted the discharge to the European Environment Agency for its 2010 budget.
Executive directors
International cooperation
In addition to its 32 members and six Balkan cooperating countries, the EEA also cooperates and fosters partnerships with its neighbours and other countries and regions, mostly in the context of the European Neighbourhood Policy:
Eastern Partnership member states: Belarus, Ukraine, Moldova, Armenia, Azerbaijan, Georgia
Union for the Mediterranean member states: Algeria, Egypt, Israel, Jordan, Lebanon, Libya, Morocco, Palestinian Authority, Syria, Tunisia
Other ENPI states: Russia
Central Asian states: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan
Additionally the EEA cooperates with multiple international organizations and the corresponding agencies of the following countries:
United States (Environmental Protection Agency)
Canada (Environment Canada)
Official languages
The 26 official languages used by the EEA are: Bulgarian, Czech, Croatian, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Icelandic, Italian, Lithuanian, Latvian, Maltese, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovene, Swedish and Turkish.
See also
Agencies of the European Union
Citizen Science, cleanup projects that people can take part in.
EU environmental policy
List of atmospheric dispersion models
List of environmental organizations
Confederation of European Environmental Engineering Societies
Coordination of Information on the Environment
European Agency for Safety and Health at Work
Environment Agency
References
External links
European Topic Centre on Land Use and Spatial Information (ETC LUSI)
European Topic Centre on Air and Climate Change(ETC/ACC)
European Topic Centre on Biological Diversity(ETC/BD)
Model Documentation System (MDS)
The European Environment Agency's near real-time ozone map (ozoneweb)
The European Climate Adaptation Platform Climate-ADAPT
EUNIS homepage
1990 in the European Economic Community
Agencies of the European Union
Atmospheric dispersion modeling
Environmental agencies in the European Union
Government agencies established in 1990
Organizations based in Copenhagen
1994 establishments in Denmark | European Environment Agency | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,623 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
9,649 | https://en.wikipedia.org/wiki/Energy | Energy is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy.
Forms
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
History
The word energy derives from the Ancient Greek ἐνέργεια (energeia, 'activity, operation'), which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
Units of measure
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
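These conversions can be collected in a small lookup table. A minimal sketch follows, using commonly tabulated factors; the thermochemical calorie and the International Table BTU are assumed as the relevant definitions:

```python
# Joules per unit, using common definitions of each unit.
JOULES_PER = {
    "erg":           1e-7,             # CGS unit, exact
    "foot_pound":    1.3558179483,     # ft*lbf
    "calorie":       4.184,            # thermochemical calorie, exact
    "kilocalorie":   4184.0,           # food Calorie
    "BTU":           1055.06,          # International Table BTU, approximate
    "watt_hour":     3600.0,           # exact: 1 W sustained for 3600 s
    "kilowatt_hour": 3.6e6,            # exact
    "electronvolt":  1.602176634e-19,  # exact in the 2019 SI
}

def to_joules(value, unit):
    """Convert a quantity in the given unit to joules."""
    return value * JOULES_PER[unit]

print(to_joules(1, "kilowatt_hour"))  # 3600000.0 J
```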
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string was equal to the internal energy gained by the water through friction with the paddle.
Scientific use
Classical mechanics
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance:
W = ∫C F · ds
This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Chemistry
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT); that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of the reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
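This exponential dependence is easy to make concrete. In the sketch below the activation energy is an assumed illustrative value (roughly 0.5 eV per molecule); the result reproduces the familiar rule of thumb that a 10 K rise nearly doubles the rate for barriers of this size:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
E_a = 8.0e-20        # assumed activation energy per molecule, about 0.5 eV

def boltzmann_factor(E, T):
    """Boltzmann population factor exp(-E / kT) from the Arrhenius picture."""
    return math.exp(-E / (k_B * T))

rate_300 = boltzmann_factor(E_a, 300.0)
rate_310 = boltzmann_factor(E_a, 310.0)
print(f"speed-up from 300 K to 310 K: {rate_310 / rate_300:.2f}x")  # ~1.9x
```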
Biology
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
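The human-equivalent figure is simple arithmetic on the 80-watt baseline; a minimal sketch:

```python
BASAL_HUMAN_W = 80.0  # average basal metabolic rate used as the standard

def human_equivalents(power_w):
    """Express a power draw in multiples of the 80 W human baseline."""
    return power_w / BASAL_HUMAN_W

for device, watts in [("light bulb", 100), ("fit cyclist, 1 h", 300)]:
    print(f"{device}: {human_equivalents(watts):.2f} H-e")  # 1.25 and 3.75
```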
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + (81 1/2) O2 → 57CO2 + 55H2O
and some of the energy is used to convert ADP into ATP:
ADP + HPO4^2− → ATP + H2O
The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
Earth sciences
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
Cosmology
In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
Quantum mechanics
In quantum mechanics, energy is defined in terms of the energy operator
(Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is the Planck constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
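A quick numerical check of Planck's relation, using an assumed frequency typical of green light:

```python
h = 6.62607015e-34   # Planck constant, J*s (exact SI value)
nu = 5.5e14          # assumed frequency of green light, Hz

E_photon = h * nu    # Planck's relation: E = h * nu
print(f"E = {E_photon:.3e} J "
      f"= {E_photon / 1.602176634e-19:.2f} eV")  # about 2.3 eV
```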
Relativity
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
E0 = m0c2
where
m0 is the rest mass of the body,
c is the speed of light in vacuum,
E0 is the rest energy.
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
Transformation
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
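Carnot's theorem makes the limit explicit: no cyclic engine operating between a hot reservoir at temperature Th and a cold reservoir at Tc can exceed an efficiency of 1 − Tc/Th. A minimal sketch, with reservoir temperatures assumed purely for illustration:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on the efficiency of a heat engine between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# Assumed temperatures: steam-plant-like hot side, ambient cold side.
print(f"{carnot_efficiency(850.0, 300.0):.1%}")  # about 64.7%
```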
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
Ep,initial + Ek,initial = Ep,final + Ek,final
The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = ½mv2 (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
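A short numerical check of this bookkeeping for a bob released from an assumed height (values chosen for illustration only):

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
m = 0.5    # kg, assumed bob mass
h = 0.2    # m, assumed release height above the lowest point

pe_top = m * g * h                      # all energy is potential at the top
v_bottom = math.sqrt(2 * g * h)         # speed gained by falling through h
ke_bottom = 0.5 * m * v_bottom**2       # all energy is kinetic at the bottom

assert math.isclose(pe_top, ke_bottom)  # Ep,initial + Ek,initial = Ep,final + Ek,final
print(pe_top, ke_bottom)                # 0.981 J each
```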
Conservation of energy and mass in transformation
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905) quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c2 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9 × 10^16 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
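The 1 kg figure can be checked directly from E = mc2; a minimal sketch (the megaton-of-TNT conversion uses the conventional 4.184 × 10^15 J definition):

```python
c = 299_792_458.0             # speed of light in vacuum, m/s (exact)
J_PER_MEGATON_TNT = 4.184e15  # conventional definition of a megaton of TNT

def rest_energy_joules(mass_kg):
    """Rest energy from mass-energy equivalence: E = m c^2."""
    return mass_kg * c**2

E = rest_energy_joules(1.0)
print(f"{E:.3e} J = {E / J_PER_MEGATON_TNT:.1f} Mt TNT")  # ~8.988e16 J, ~21.5 Mt
```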
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
Reversible and non-reversible transformations
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
Conservation of energy
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
ΔE Δt ≥ ħ/2
which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
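For a sense of scale, a minimal sketch evaluating the smallest ΔE compatible with the relation, for an assumed time interval of one femtosecond:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def min_energy_uncertainty(dt_seconds):
    """Smallest Delta-E compatible with Delta-E * Delta-t >= hbar / 2."""
    return hbar / (2.0 * dt_seconds)

dE = min_energy_uncertainty(1e-15)  # over one femtosecond
print(f"{dE:.2e} J = {dE / 1.602176634e-19:.2f} eV")  # about 0.33 eV
```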
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
Energy transfer
Closed systems
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:
ΔE = W + Q
where ΔE is the amount of energy transferred, W represents the work done on or by the system, and Q represents the heat flow into or out of the system. As a simplification, the heat term, Q, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,
ΔE = W
This simplified equation is the one used to define the joule, for example.
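The sign bookkeeping of the first law is easy to express in code. This is a toy illustration with arbitrary numbers, not a thermodynamic solver:

```python
def energy_change(work_on_system_j, heat_into_system_j=0.0):
    """First law for a closed system: Delta-E = W + Q."""
    return work_on_system_j + heat_into_system_j

# Adiabatic compression: 150 J of work done on a gas, no heat exchanged.
print(energy_change(150.0))         # Delta-E = 150 J
# Same work, but 40 J leaks out as heat (Q is negative when heat leaves).
print(energy_change(150.0, -40.0))  # Delta-E = 110 J
```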
Open systems
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E, one may write
ΔE = W + Q + E
Thermodynamics
Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
First law of thermodynamics
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
dU = T dS − P dV
where the first term on the right, T dS, is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term, −P dV, is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, and effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
dU = δQ + δW
where δQ is the heat supplied to the system and δW is the work applied to the system.
Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
See also
Combustion
Efficient energy use
Energy democracy
Energy crisis
Energy recovery
Energy recycling
Index of energy articles
Index of wave articles
List of low-energy building techniques
Orders of magnitude (energy)
Power station
Sustainable energy
Transfer energy
Waste-to-energy
Waste-to-energy plant
Zero-energy building
Notes
References
Further reading
The Biosphere (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1970. This book, originally a 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources, population trends, and environmental degradation.
Energy and Power (A Scientific American Book), San Francisco, California, W. H. Freeman and Company, 1971.
Santos, Gildo M. "Energy in Brazil: a historical overview," The Journal of Energy History (2018), online.
Journals
The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018–
External links
Differences between Heat and Thermal energy – BioCab
Main topic articles
Nature
Universe
Scalar physical quantities | Energy | [
"Physics",
"Mathematics"
] | 7,696 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
9,653 | https://en.wikipedia.org/wiki/Expected%20value | In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Because it is obtained through arithmetic, the expected value may not even be among the values the random variable can actually take; it is not necessarily a value one would "expect" to get in reality.
The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.
The expected value of a random variable $X$ is often denoted by $\mathrm{E}(X)$, $\mathrm{E}[X]$, or $\mathrm{E}X$, with $\mathrm{E}$ also often stylized as $\mathbb{E}$ or $E$.
History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.
He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.
In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise "De ratiociniis in ludo aleæ" (see Huygens (1657)) on probability theory in 1657, just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability.
In the foreword to his treatise, Huygens wrote:
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.
Etymology
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:
More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:
Notations
The use of the letter $\mathrm{E}$ to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, $\mathrm{E}$ stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique.
When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as (upright), (italic), or (in blackboard bold), while a variety of bracket notations (such as , , and ) are all used.
Another popular notation is $\mu_X$. The notations $\langle X \rangle$, $\langle X \rangle_{\mathrm{av}}$, and $\overline{X}$ are commonly used in physics, and $\mathrm{M}(X)$ is used in Russian-language literature.
Definition
As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language.
Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector $X = (X_1, \ldots, X_n)$. It is defined component by component, as $\mathrm{E}[X]_i = \mathrm{E}[X_i]$. Similarly, one may define the expected value of a random matrix $X$ with components $X_{ij}$ by $\mathrm{E}[X]_{ij} = \mathrm{E}[X_{ij}]$.
Random variables with finitely many outcomes
Consider a random variable $X$ with a finite list $x_1, \ldots, x_k$ of possible outcomes, each of which (respectively) has probability $p_1, \ldots, p_k$ of occurring. The expectation of $X$ is defined as
$$\mathrm{E}[X] = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k.$$
Since the probabilities must satisfy $p_1 + \cdots + p_k = 1$, it is natural to interpret $\mathrm{E}[X]$ as a weighted average of the $x_i$ values, with weights given by their probabilities $p_i$.
In the special case that all possible outcomes are equiprobable (that is, $p_1 = \cdots = p_k = \tfrac{1}{k}$), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.
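A minimal Python sketch of this finite-outcome definition (the helper name expected_value is illustrative, not from any library):

```python
def expected_value(outcomes, probabilities):
    """Weighted average of outcomes, with weights given by probabilities."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probabilities))

# Fair six-sided die: E[X] = (1 + 2 + ... + 6) / 6 = 3.5
print(expected_value([1, 2, 3, 4, 5, 6], [1/6] * 6))
```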
Examples
Let $X$ represent the outcome of a roll of a fair six-sided die. More specifically, $X$ will be the number of pips showing on the top face of the die after the toss. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of $\tfrac{1}{6}$. The expectation of $X$ is
$$\mathrm{E}[X] = 1 \cdot \tfrac{1}{6} + 2 \cdot \tfrac{1}{6} + 3 \cdot \tfrac{1}{6} + 4 \cdot \tfrac{1}{6} + 5 \cdot \tfrac{1}{6} + 6 \cdot \tfrac{1}{6} = 3.5.$$
If one rolls the die $n$ times and computes the average (arithmetic mean) of the results, then as $n$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
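A short simulation of this convergence (a sketch; the seed and sample sizes are arbitrary choices):

```python
import random

# Simulate n die rolls and watch the sample average approach 3.5.
random.seed(0)
for n in (10, 1_000, 100_000):
    avg = sum(random.randint(1, 6) for _ in range(n)) / n
    print(n, avg)
```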
The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable $X$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability $\tfrac{1}{38}$ in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
$$\mathrm{E}[\text{gain from \$1 bet}] = -\$1 \cdot \tfrac{37}{38} + \$35 \cdot \tfrac{1}{38} = -\$\tfrac{1}{19}.$$
That is, the expected value to be won from a $1 bet is −$1/19, or about −$0.053. Thus, in 190 bets, the net loss will probably be about $10.
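The same computation, done numerically as a sketch:

```python
# American roulette straight-up bet: win $35 with probability 1/38,
# lose the $1 stake with probability 37/38.
ev = 35 * (1 / 38) + (-1) * (37 / 38)
print(ev)          # ≈ -0.0526, i.e. -1/19 dollars per $1 bet
print(190 * ev)    # ≈ -10 dollars expected loss over 190 bets
```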
Random variables with countably infinitely many outcomes
Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that
$$\mathrm{E}[X] = \sum_{i=1}^{\infty} x_i\, p_i,$$
where $x_1, x_2, \ldots$ are the possible outcomes of the random variable $X$ and $p_1, p_2, \ldots$ are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.
However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely.
For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation.
Examples
Suppose $x_i = i$ and $p_i = \tfrac{c}{i \cdot 2^i}$ for $i = 1, 2, 3, \ldots$, where $c = \tfrac{1}{\ln 2}$ is the scaling factor which makes the probabilities sum to 1. Then we have
$$\mathrm{E}[X] = 1 \cdot \frac{c}{2} + 2 \cdot \frac{c}{8} + 3 \cdot \frac{c}{24} + \cdots = \frac{c}{2} + \frac{c}{4} + \frac{c}{8} + \cdots = c = \frac{1}{\ln 2}.$$
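A quick numerical check of this series, assuming the distribution as reconstructed above (truncating at 59 terms is arbitrary; the tail is negligible):

```python
import math

c = 1 / math.log(2)                          # normalizing constant 1/ln 2
p = [c / (i * 2**i) for i in range(1, 60)]   # probabilities p_i
print(sum(p))                                # ~1.0: probabilities sum to 1
print(sum(i * pi for i, pi in enumerate(p, start=1)))  # ~1/ln 2 ≈ 1.4427
```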
Random variables with density
Now consider a random variable $X$ which has a probability density function given by a function $f$ on the real number line. This means that the probability of $X$ taking on a value in any given open interval is given by the integral of $f$ over that interval. The expectation of $X$ is then given by the integral
$$\mathrm{E}[X] = \int_{-\infty}^{\infty} x f(x)\, dx.$$
A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors.
Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of $X$ is given by the Cauchy distribution $\operatorname{Cauchy}(0, \pi)$, so that $f(x) = (x^2 + \pi^2)^{-1}$. It is straightforward to compute in this case that
$$\int_{-a}^{b} x f(x)\, dx = \frac{1}{2} \ln \frac{b^2 + \pi^2}{a^2 + \pi^2}.$$
The limit of this expression as $a \to \infty$ and $b \to \infty$ does not exist: if the limits are taken so that $a = b$, then the limit is zero, while if the constraint $2a = b$ is taken, then the limit is $\ln 2$.
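A numerical illustration of this order-of-limits sensitivity, using the closed-form truncated integral reconstructed above (a sketch, not a definitive computation):

```python
import math

# Truncated integral of x * f(x) for f(x) = 1 / (x^2 + pi^2), in closed form:
def truncated(a, b):
    return 0.5 * math.log((b**2 + math.pi**2) / (a**2 + math.pi**2))

for a in (10.0, 1e3, 1e6):
    print(truncated(a, a), truncated(a, 2 * a))  # -> 0 and -> ln 2 ≈ 0.693
```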
To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with $\mathrm{E}[X]$ left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of $\mathrm{E}[X]$ for more general random variables $X$.
Arbitrary real-valued random variables
All definitions of the expected value may be expressed in the language of measure theory. In general, if $X$ is a real-valued random variable defined on a probability space $(\Omega, \Sigma, \mathrm{P})$, then the expected value of $X$, denoted by $\mathrm{E}[X]$, is defined as the Lebesgue integral
$$\mathrm{E}[X] = \int_{\Omega} X\, d\mathrm{P}.$$
Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of is defined via weighted averages of approximations of which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable is said to be absolutely continuous if any of the following conditions are satisfied:
there is a nonnegative measurable function $f$ on the real line such that $\mathrm{P}(X \in A) = \int_A f(x)\, dx$ for any Borel set $A$, in which the integral is Lebesgue.
the cumulative distribution function of $X$ is absolutely continuous.
for any Borel set $A$ of real numbers with Lebesgue measure equal to zero, the probability of $X$ being valued in $A$ is also equal to zero
for any positive number $\varepsilon$ there is a positive number $\delta$ such that: if $A$ is a Borel set with Lebesgue measure less than $\delta$, then the probability of $X$ being valued in $A$ is less than $\varepsilon$.
These conditions are all equivalent, although this is nontrivial to establish. In this definition, $f$ is called the probability density function of $X$ (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that
$$\mathrm{E}[X] = \int_{-\infty}^{\infty} x f(x)\, dx$$
for any absolutely continuous random variable $X$. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable.
The expected value of any real-valued random variable $X$ can also be defined on the graph of its cumulative distribution function $F$ by a nearby equality of areas. In fact, $\mathrm{E}[X] = \mu$ with a real number $\mu$ if and only if the two surfaces in the $x$-$y$-plane, described by
$$x \le \mu,\ 0 \le y \le F(x) \qquad \text{or} \qquad x \ge \mu,\ F(x) \le y \le 1,$$
respectively, have the same finite area, i.e. if
$$\int_{-\infty}^{\mu} F(x)\, dx = \int_{\mu}^{\infty} \big(1 - F(x)\big)\, dx$$
and both improper Riemann integrals converge. Finally, this is equivalent to the representation
$$\mathrm{E}[X] = \int_{0}^{\infty} \big(1 - F(x)\big)\, dx - \int_{-\infty}^{0} F(x)\, dx,$$
also with convergent integrals.
Infinite expected values
Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of $\pm\infty$. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes $x_i = 2^i$, with associated probabilities $p_i = 2^{-i}$, for $i$ ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has
$$\mathrm{E}[X] = \sum_{i=1}^{\infty} x_i p_i = 2 \cdot \tfrac{1}{2} + 4 \cdot \tfrac{1}{4} + 8 \cdot \tfrac{1}{8} + \cdots = 1 + 1 + 1 + \cdots.$$
It is natural to say that the expected value equals $+\infty$.
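A small simulation of the St. Petersburg game (toss a fair coin until heads, doubling the payoff each toss) illustrates how the sample mean keeps drifting upward instead of settling near a finite value; the seed and sample sizes below are arbitrary:

```python
import random

random.seed(1)

def petersburg_payoff():
    """Payoff 2^k, where k is the number of tosses until the first head."""
    payoff = 2
    while random.random() < 0.5:
        payoff *= 2
    return payoff

for n in (100, 10_000, 200_000):
    print(n, sum(petersburg_payoff() for _ in range(n)) / n)
```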
There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as $+\infty$. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable $X$, one defines the positive and negative parts by $X^+ = \max(X, 0)$ and $X^- = -\min(X, 0)$. These are nonnegative random variables, and it can be directly checked that $X = X^+ - X^-$. Since $\mathrm{E}[X^+]$ and $\mathrm{E}[X^-]$ are both then defined as either nonnegative numbers or $+\infty$, it is then natural to define:
$$\mathrm{E}[X] = \mathrm{E}[X^+] - \mathrm{E}[X^-].$$
According to this definition, $\mathrm{E}[X]$ exists and is finite if and only if $\mathrm{E}[X^+]$ and $\mathrm{E}[X^-]$ are both finite. Due to the formula $|X| = X^+ + X^-$, this is the case if and only if $\mathrm{E}|X|$ is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations.
In the case of the St. Petersburg paradox, one has $X^- = 0$ and so $\mathrm{E}[X] = +\infty$ as desired.
Suppose the random variable $X$ takes values $1, -2, 3, -4, \ldots$ with respective probabilities $6\pi^{-2} \cdot 1^{-2},\ 6\pi^{-2} \cdot 2^{-2},\ 6\pi^{-2} \cdot 3^{-2},\ 6\pi^{-2} \cdot 4^{-2}, \ldots$. Then it follows that $X^+$ takes value $2k-1$ with probability $6\pi^{-2}(2k-1)^{-2}$ for each positive integer $k$, and takes value $0$ with remaining probability. Similarly, $X^-$ takes value $2k$ with probability $6\pi^{-2}(2k)^{-2}$ for each positive integer $k$ and takes value $0$ with remaining probability. Using the definition for non-negative random variables, one can show that both $\mathrm{E}[X^+] = \infty$ and $\mathrm{E}[X^-] = \infty$ (see Harmonic series). Hence, in this case the expectation of $X$ is undefined.
Similarly, the Cauchy distribution, as discussed above, has undefined expectation.
Expected values of common distributions
The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.
Properties
The basic properties below (and their names in bold) replicate or follow immediately from those of Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like $X \geq 0$ is true almost surely, when the probability measure attributes zero-mass to the complementary event $\{X < 0\}$.
Non-negativity: If $X \geq 0$ (a.s.), then $\mathrm{E}[X] \geq 0$.
Linearity of expectation: The expected value operator (or expectation operator) $\mathrm{E}$ is linear in the sense that, for any random variables $X$ and $Y$ and a constant $a$,
$$\mathrm{E}[X + Y] = \mathrm{E}[X] + \mathrm{E}[Y], \qquad \mathrm{E}[aX] = a\,\mathrm{E}[X],$$
whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for $N$ random variables $X_i$ and constants $a_i$ $(1 \leq i \leq N)$, we have $\mathrm{E}\left[\sum_{i=1}^{N} a_i X_i\right] = \sum_{i=1}^{N} a_i\, \mathrm{E}[X_i]$. If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space.
Monotonicity: If $X \leq Y$ (a.s.), and both $\mathrm{E}[X]$ and $\mathrm{E}[Y]$ exist, then $\mathrm{E}[X] \leq \mathrm{E}[Y]$. Proof follows from the linearity and the non-negativity property for $Z = Y - X$, since $Z \geq 0$ (a.s.).
Non-degeneracy: If $\mathrm{E}[|X|] = 0$, then $X = 0$ (a.s.).
If $X = Y$ (a.s.), then $\mathrm{E}[X] = \mathrm{E}[Y]$. In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
If $X = c$ (a.s.) for some real number $c$, then $\mathrm{E}[X] = c$. In particular, for a random variable $X$ with well-defined expectation, $\mathrm{E}[\mathrm{E}[X]] = \mathrm{E}[X]$. A well-defined expectation implies that there is one number, or rather, one constant that defines the expected value. Thus it follows that the expectation of this constant is just the original expected value.
As a consequence of the formula $|X| = X^+ + X^-$ as discussed above, together with the triangle inequality, it follows that for any random variable $X$ with well-defined expectation, one has $|\mathrm{E}[X]| \leq \mathrm{E}|X|$.
Let $1_A$ denote the indicator function of an event $A$; then $\mathrm{E}[1_A]$ is given by the probability of $A$. This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above.
Formulas in terms of CDF: If $F(x)$ is the cumulative distribution function of a random variable $X$, then
$$\mathrm{E}[X] = \int_{-\infty}^{\infty} x\, dF(x),$$
where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue-Stieltjes. As a consequence of integration by parts as applied to this representation of $\mathrm{E}[X]$, it can be proved that
$$\mathrm{E}[X] = \int_{0}^{\infty} \big(1 - F(x)\big)\, dx - \int_{-\infty}^{0} F(x)\, dx,$$
with the integrals taken in the sense of Lebesgue. As a special case, for any random variable $X$ valued in the nonnegative integers $\{0, 1, 2, \ldots\}$, one has $\mathrm{E}[X] = \sum_{n=0}^{\infty} \mathrm{P}(X > n)$, where $\mathrm{P}$ denotes the underlying probability measure.
Non-multiplicativity: In general, the expected value is not multiplicative, i.e. $\mathrm{E}[XY]$ is not necessarily equal to $\mathrm{E}[X] \cdot \mathrm{E}[Y]$. If $X$ and $Y$ are independent, then one can show that $\mathrm{E}[XY] = \mathrm{E}[X]\, \mathrm{E}[Y]$. If the random variables are dependent, then generally $\mathrm{E}[XY] \neq \mathrm{E}[X]\, \mathrm{E}[Y]$, although in special cases of dependency the equality may hold.
Law of the unconscious statistician: The expected value of a measurable function $g$ of $X$, given that $X$ has a probability density function $f(x)$, is given by the inner product of $f$ and $g$:
$$\mathrm{E}[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\, dx.$$
This formula also holds in the multidimensional case, when $g$ is a function of several random variables, and $f$ is their joint density.
Inequalities
Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable $X$ and any positive number $a$, it states that
$$\mathrm{P}(X \geq a) \leq \frac{\mathrm{E}[X]}{a}.$$
If $X$ is any random variable with finite expectation, then Markov's inequality may be applied to the random variable $|X - \mathrm{E}[X]|^2$ to obtain Chebyshev's inequality
$$\mathrm{P}\big(|X - \mathrm{E}[X]| \geq a\big) \leq \frac{\operatorname{Var}[X]}{a^2},$$
where $\operatorname{Var}[X]$ is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted die, Chebyshev's inequality says that the odds of rolling between 1 and 6 are at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.
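A quick exact check of the die example above, computed with rationals (a sketch of the bound, not a referenced computation):

```python
from fractions import Fraction

# Fair die: mean 3.5, variance 35/12.
mean = Fraction(7, 2)
var = sum((Fraction(x) - mean) ** 2 for x in range(1, 7)) / 6
print(var)                             # 35/12 ≈ 2.917

# "Between 1 and 6" means |X - 3.5| <= 2.5, so Chebyshev with a = 2.5 gives
# P(between 1 and 6) >= 1 - Var / a^2.
print(1 - var / Fraction(5, 2) ** 2)   # 8/15 ≈ 0.533, i.e. at least 53%
```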
The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory.
Jensen's inequality: Let $f$ be a convex function and $X$ a random variable with finite expectation. Then
$$f(\mathrm{E}[X]) \leq \mathrm{E}[f(X)].$$
Part of the assertion is that the negative part of $f(X)$ has finite expectation, so that the right-hand side is well-defined (possibly infinite). Convexity of $f$ can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that $f(x) = |x|^{t/s}$ for positive numbers $s < t$, one obtains the Lyapunov inequality
$$\left(\mathrm{E}|X|^{s}\right)^{1/s} \leq \left(\mathrm{E}|X|^{t}\right)^{1/t}.$$
This can also be proved by the Hölder inequality. In measure theory, this is particularly notable for proving the inclusion of $L^t \subset L^s$, in the special case of probability spaces.
Hölder's inequality: if $p$ and $q$ are numbers satisfying $\tfrac{1}{p} + \tfrac{1}{q} = 1$, then
$$\mathrm{E}|XY| \leq \left(\mathrm{E}|X|^{p}\right)^{1/p} \left(\mathrm{E}|Y|^{q}\right)^{1/q}$$
for any random variables $X$ and $Y$. The special case of $p = q = 2$ is called the Cauchy–Schwarz inequality, and is particularly well-known.
Minkowski inequality: given any number $p \geq 1$, for any random variables $X$ and $Y$ with $\mathrm{E}|X|^p$ and $\mathrm{E}|Y|^p$ both finite, it follows that $\mathrm{E}|X + Y|^p$ is also finite and
$$\left(\mathrm{E}|X + Y|^{p}\right)^{1/p} \leq \left(\mathrm{E}|X|^{p}\right)^{1/p} + \left(\mathrm{E}|Y|^{p}\right)^{1/p}.$$
The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces.
Expectations under convergence of random variables
In general, it is not the case that $\mathrm{E}[X_n] \to \mathrm{E}[X]$ even if $X_n \to X$ pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let $U$ be a random variable distributed uniformly on $[0, 1]$. For $n \geq 1$, define a sequence of random variables
$$X_n = n \cdot 1\left\{U \in \left(0, \tfrac{1}{n}\right)\right\},$$
with $1\{A\}$ being the indicator function of the event $A$. Then, it follows that $X_n \to 0$ pointwise. But, $\mathrm{E}[X_n] = n \cdot \mathrm{P}\big(U \in (0, \tfrac{1}{n})\big) = 1$ for each $n$. Hence,
$$\lim_{n \to \infty} \mathrm{E}[X_n] = 1 \neq 0 = \mathrm{E}\left[\lim_{n \to \infty} X_n\right].$$
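A numerical sketch of this counterexample (sample size and seed are arbitrary):

```python
import numpy as np

# X_n = n * 1{U < 1/n} with U ~ Uniform(0, 1): X_n -> 0 pointwise,
# yet E[X_n] = 1 for every n.
rng = np.random.default_rng(0)
u = rng.random(1_000_000)

for n in (10, 100, 10_000):
    x_n = n * (u < 1 / n)
    print(n, x_n.mean())   # each sample mean ≈ 1, even as X_n -> 0
```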
Analogously, for a general sequence of random variables $\{Y_n\}$, the expected value operator is not $\sigma$-additive, i.e.
$$\mathrm{E}\left[\sum_{n=0}^{\infty} Y_n\right] \neq \sum_{n=0}^{\infty} \mathrm{E}[Y_n].$$
An example is easily obtained by setting $Y_0 = X_1$ and $Y_n = X_{n+1} - X_n$ for $n \geq 1$, where $X_n$ is as in the previous example.
A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.
Monotone convergence theorem: Let $\{X_n : n \geq 0\}$ be a sequence of random variables, with $0 \leq X_n \leq X_{n+1}$ (a.s.) for each $n \geq 0$. Furthermore, let $X_n \to X$ pointwise. Then, the monotone convergence theorem states that
$$\lim_{n} \mathrm{E}[X_n] = \mathrm{E}[X].$$
Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let $\{X_i\}_{i=0}^{\infty}$ be non-negative random variables. It follows from the monotone convergence theorem that
$$\mathrm{E}\left[\sum_{i=0}^{\infty} X_i\right] = \sum_{i=0}^{\infty} \mathrm{E}[X_i].$$
Fatou's lemma: Let $\{X_n \geq 0 : n \geq 0\}$ be a sequence of non-negative random variables. Fatou's lemma states that
$$\mathrm{E}\left[\liminf_{n} X_n\right] \leq \liminf_{n} \mathrm{E}[X_n].$$
Corollary. Let $X_n \geq 0$ with $\mathrm{E}[X_n] \leq C$ for all $n \geq 0$. If $X_n \to X$ (a.s.), then $\mathrm{E}[X] \leq C$. Proof is by observing that $X = \liminf_n X_n$ (a.s.) and applying Fatou's lemma.
Dominated convergence theorem: Let $\{X_n : n \geq 0\}$ be a sequence of random variables. If $X_n \to X$ pointwise (a.s.), $|X_n| \leq Y$ (a.s.), and $\mathrm{E}[Y] < \infty$, then, according to the dominated convergence theorem, $X$ is integrable (in the sense that $\mathrm{E}|X| < \infty$) and
$$\lim_{n} \mathrm{E}[X_n] = \mathrm{E}[X];$$
moreover, $|\mathrm{E}[X]| \leq \mathrm{E}|X| \leq \mathrm{E}[Y]$ and $\lim_{n} \mathrm{E}|X_n - X| = 0$.
Uniform integrability: In some cases, the equality holds when the sequence is uniformly integrable.
Relationship with characteristic function
The probability density function $f_X$ of a scalar random variable $X$ is related to its characteristic function $\varphi_X$ by the inversion formula:
$$f_X(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx}\, \varphi_X(t)\, dt.$$
For the expected value of $g(X)$ (where $g$ is a Borel function), we can use this inversion formula to obtain
$$\mathrm{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} g(x) \left[\int_{\mathbb{R}} e^{-itx}\, \varphi_X(t)\, dt\right] dx.$$
If $\mathrm{E}[g(X)]$ is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,
$$\mathrm{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} G(t)\, \varphi_X(t)\, dt,$$
where
$$G(t) = \int_{\mathbb{R}} g(x)\, e^{-itx}\, dx$$
is the Fourier transform of $g(x)$. The expression for $\mathrm{E}[g(X)]$ also follows directly from the Plancherel theorem.
Uses and applications
The expectation of a random variable plays an important role in a variety of contexts.
In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.
For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of . The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. $\mathrm{P}(X \in \mathcal{A}) = \mathrm{E}[1_{\mathcal{A}}(X)]$, where $1_{\mathcal{A}}$ is the indicator function of the set $\mathcal{A}$.
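A minimal Monte Carlo sketch of this indicator trick; the standard normal distribution and the event $\mathcal{A} = [1, \infty)$ are assumptions chosen purely for illustration:

```python
import numpy as np

# Estimate P(X in A) as the sample mean of the indicator 1_A(X).
# Here X ~ N(0, 1) and A = [1, inf); the true value is about 0.1587.
rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
print((x >= 1.0).mean())
```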
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
$$\operatorname{Var}(X) = \mathrm{E}[X^2] - \left(\mathrm{E}[X]\right)^2.$$
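A quick numerical verification of this formula for a fair die (a sketch):

```python
# Check Var(X) = E[X^2] - (E[X])^2 for a fair six-sided die.
vals = range(1, 7)
ex = sum(vals) / 6                    # E[X]   = 3.5
ex2 = sum(v * v for v in vals) / 6    # E[X^2] = 91/6
print(ex2 - ex**2)                    # 2.9166... = 35/12
```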
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $\hat{A}$ operating on a quantum state vector $|\psi\rangle$ is written as $\langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle$. The uncertainty in $\hat{A}$ can be calculated by the formula $(\Delta A)^2 = \langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2$.
See also
Central tendency
Conditional expectation
Expectation (epistemic)
Expectile – related to expectations in a way analogous to that in which quantiles are related to medians
Law of total expectation – the expected value of the conditional expected value of X given Y is the same as the expected value of X
Median
Nonlinear expectation – a generalization of the expected value
Population mean
Predicted value
Wald's equation – an equation for calculating the expected value of a random number of random variables
References
Bibliography
Theory of probability distributions
Gambling terminology
Articles containing proofs | Expected value | [
"Mathematics"
] | 5,361 | [
"Articles containing proofs"
] |
9,656 | https://en.wikipedia.org/wiki/Electric%20light | An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic which secures the lamp in the socket of a light fixture, which is often called a "lamp" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount.
The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor.
The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles.
History
Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. In 1799–1800, Alessandro Volta created the voltaic pile, the first electric battery. Current from these batteries could heat copper wire to incandescence. Vasily Vladimirovich Petrov developed the first persistent electric arc in 1802, and English chemist Humphry Davy gave a practical demonstration of an arc light in 1806.
It took more than a century of continuous and incremental improvement, including numerous designs, patents, and resulting intellectual property disputes, to get from these early experiments to commercially produced incandescent light bulbs in the 1920s.
In 1840, Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating one of the world's first electric light bulbs. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although it was an efficient design, the cost of the platinum made it impractical for commercial use.
William Greener, an English inventor, made significant contributions to early electric lighting with his lamp in 1846 (patent specification 11076), laying the groundwork for future innovations such as those by Thomas Edison.
The late 1870s and 1880s were marked by intense competition and innovation, with inventors like Joseph Swan in the UK and Thomas Edison in the US independently developing functional incandescent lamps. Swan's bulbs, based on designs by William Staite, were successful, but the filaments were too thick. Edison worked to create bulbs with thinner filaments, leading to a better design. The rivalry between Swan and Edison eventually led to a merger, forming the Edison and Swan Electric Light Company. By the early twentieth century these had completely replaced arc lamps.
The turn of the century saw further improvements in bulb longevity and efficiency, notably with the introduction of the tungsten filament by William D. Coolidge, who applied for a patent in 1912. This innovation became a standard for incandescent bulbs for many years.
In 1910, Georges Claude introduced the first neon light, paving the way for neon signs which would become ubiquitous in advertising.
In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric's Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, "A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public."
The first practical LED arrived in 1962.
U.S. transition to LED bulbs
In the United States, incandescent light bulbs including halogen bulbs stopped being sold as of August 1, 2023, because they do not meet minimum lumens per watt performance metrics established by the U.S. Department of Energy. Compact fluorescent bulbs are also banned despite their lumens per watt performance because of their toxic mercury that can be released into the home if broken and widespread problems with proper disposal of mercury-containing bulbs.
Types
Incandescent
In its modern form, the incandescent light bulb consists of a coiled filament of tungsten sealed in a globular glass chamber, either a vacuum or full of an inert gas such as argon. When an electric current is connected, the tungsten is heated until white-hot and glows, emitting light that approximates a continuous spectrum.
Incandescent bulbs are highly inefficient, in that just 2–5% of the energy consumed is emitted as visible, usable light. The remaining 95% is lost as heat. In warmer climates, the emitted heat must then be removed, putting additional pressure on ventilation or air conditioning systems. In colder weather, the heat byproduct has some value, and has been successfully harnessed for warming in devices such as heat lamps. Incandescent bulbs are nonetheless being phased out in favor of technologies like CFLs and LED bulbs in many countries due to their low energy efficiency. The European Commission estimated in 2012 that a complete ban on incandescent bulbs would contribute 5 to 10 billion euros to the economy and save 15 billion metric tonnes of carbon dioxide emissions.
Halogen
Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire.
Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. Also, they have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout their life.
Fluorescent
Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them.
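As a rough arithmetic illustration of these efficacy figures, the sketch below assumes 800 lumens of output and three hours of daily use; the efficacies (15 lm/W incandescent, 60 lm/W fluorescent, 100 lm/W LED) are illustrative assumptions, not figures from this article:

```python
# Compare the electrical power needed to produce a fixed light output.
lumens = 800   # roughly a typical household bulb's output (assumption)

for name, efficacy in [("incandescent", 15), ("fluorescent", 60), ("LED", 100)]:
    watts = lumens / efficacy               # lumens / (lumens per watt)
    kwh_per_year = watts * 3 * 365 / 1000   # 3 hours of use per day
    print(f"{name}: {watts:.0f} W, {kwh_per_year:.0f} kWh/yr")
```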
LED
The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. In the 2000s, efficacy and output have risen to the point where LEDs are now being used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life. Some lasers have been adapted as an alternative to LEDs to provide highly focused illumination.
Carbon arc
Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight.
Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large building and street lighting until it was superseded in the early 20th century by the incandescent light. Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II.
Discharge
A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include, neon, argon, xenon, sodium, metal halides, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term "arc lamp" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp.
Some lamp types contain a small amount of neon, which permits striking at normal running voltage with no external ignition circuitry. Low-pressure sodium lamps operate this way. The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults.
The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, contrary to broadband or continuous spectra.
Characteristics
Form factor
Many lamp units, or light bulbs, are specified in standardized shape codes and socket names. Incandescent bulbs and their retrofit replacements are often specified as "A19/A60 E26/E27", a common size for those kinds of light bulbs. In this example, the "A" parameters describe the bulb size and shape within the A-series light bulb while the "E" parameters describe the Edison screw base size and thread characteristics.
Comparison parameters
Common comparison parameters include:
Luminous flux (in lumens)
Energy consumption (in watts)
Luminous efficacy (in lumens per watt)
Color temperature (in kelvins)
Less common parameters include color rendering index (CRI).
Life expectancy
Life expectancy for many types of lamp is defined as the number of hours of operation at which 50% of them fail, that is, the median life of the lamps. Production tolerances as low as 1% can create a variance of 25% in lamp life, so in general some lamps will fail well before the rated life expectancy, and some will last much longer. For LEDs, lamp life is defined as the operation time at which 50% of lamps have experienced a 70% decrease in light output. In the 1920s the Phoebus cartel formed in an attempt to reduce the life of electric light bulbs, an example of planned obsolescence.
Some types of lamp are also sensitive to switching cycles. Rooms with frequent switching, such as bathrooms, can expect much shorter lamp life than what is printed on the box. Compact fluorescent lamps are particularly sensitive to switching cycles.
Uses
The total amount of artificial light (especially from street light) is sufficient for cities to be easily visible at night from the air, and from space. External lighting grew at a rate of 3–6 percent for the later half of the 20th century and is the major source of light pollution that burdens astronomers and others with 80% of the world's population living in areas with night time light pollution. Light pollution has been shown to have a negative effect on some wildlife.
Electric lamps can be used as heat sources, for example in incubators, as infrared lamps in fast food restaurants and toys such as the Kenner Easy-Bake Oven.
Lamps can also be used for light therapy to deal with such issues as vitamin D deficiency, skin conditions such as acne and dermatitis, skin cancers, and seasonal affective disorder. Lamps which emit a specific frequency of blue light are also used to treat neonatal jaundice with the treatment which was initially undertaken in hospitals being able to be conducted at home.
Electric lamps can also be used as a grow light to aid in plant growth especially in indoor hydroponics and aquatic plants with recent research into the most effective types of light for plant growth.
Due to their nonlinear resistance characteristics, tungsten filament lamps have long been used as fast-acting thermistors in electronic circuits. Popular uses have included:
Stabilization of sine wave oscillators
Protection of tweeters in loudspeaker enclosures; excess current that is too high for the tweeter illuminates the light rather than destroying the tweeter.
Automatic volume control in telephones
Cultural symbolism
In Western culture, a lightbulb — in particular, the appearance of an illuminated lightbulb above a person's head — signifies sudden inspiration.
A stylized depiction of a light bulb features as the logo of the Turkish AK Party.
See also
Flameless candle
Light tube
List of light sources
References
External links
"Dark Sacred Night" (2023) is a short science film from the Princeton University Office of Sustainability about lighting obscuring the stars and affecting health and the environment.
Light
Lighting | Electric light | [
"Technology",
"Engineering"
] | 3,251 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
9,703 | https://en.wikipedia.org/wiki/Evolutionary%20psychology | Evolutionary psychology is a theoretical approach in psychology that examines cognition and behavior from a modern evolutionary perspective. It seeks to identify human psychological adaptations with regards to the ancestral problems they evolved to solve. In this framework, psychological traits and mechanisms are either functional products of natural and sexual selection or non-adaptive by-products of other adaptive traits.
Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and the liver, is common in evolutionary biology. Evolutionary psychologists apply the same thinking in psychology, arguing that just as the heart evolved to pump blood, the liver evolved to detoxify poisons, and the kidneys evolved to filter turbid fluids there is modularity of mind in that different psychological mechanisms evolved to solve different adaptive problems. These evolutionary psychologists argue that much of human behavior is the output of psychological adaptations that evolved to solve recurrent problems in human ancestral environments.
Some evolutionary psychologists argue that evolutionary theory can provide a foundational, metatheoretical framework that integrates the entire field of psychology in the same way evolutionary biology has for biology.
Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations, including the abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, and cooperate with others. Findings have been made regarding human social behaviour related to infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price, and parental investment. The theories and findings of evolutionary psychology have applications in many fields, including economics, environment, health, law, management, psychiatry, politics, and literature.
Criticism of evolutionary psychology involves questions of testability, cognitive and evolutionary assumptions (such as modular functioning of the brain, and large uncertainty about the ancestral environment), importance of non-genetic and non-adaptive explanations, as well as political and ethical issues due to interpretations of research results. Evolutionary psychologists frequently engage with and respond to such criticisms.
Scope
Principles
Its central assumption is that the human brain is composed of a large number of specialized mechanisms that were shaped by natural selection over a vast period of time to solve the recurrent information-processing problems faced by our ancestors. These problems involve food choices, social hierarchies, distributing resources to offspring, and selecting mates. Proponents suggest that it seeks to integrate psychology into the other natural sciences, rooting it in the organizing theory of biology (evolutionary theory), and thus understanding psychology as a branch of biology. Anthropologist John Tooby and psychologist Leda Cosmides note:
Just as human physiology and evolutionary physiology have worked to identify physical adaptations of the body that represent "human physiological nature," the purpose of evolutionary psychology is to identify evolved emotional and cognitive adaptations that represent "human psychological nature." According to Steven Pinker, it is "not a single theory but a large set of hypotheses" and a term that "has also come to refer to a particular way of applying evolutionary theory to the mind, with an emphasis on adaptation, gene-level selection, and modularity." Evolutionary psychology adopts an understanding of the mind that is based on the computational theory of mind. It describes mental processes as computational operations, so that, for example, a fear response is described as arising from a neurological computation that inputs the perceptual data, e.g. a visual image of a spider, and outputs the appropriate reaction, e.g. fear of possibly dangerous animals. Under this view, domain-general learning is held to be impossible because of the combinatorial explosion; evolutionary psychology instead specifies the domain as the problems of survival and reproduction.
While philosophers have generally considered the human mind to include broad faculties, such as reason and lust, evolutionary psychologists describe evolved psychological mechanisms as narrowly focused to deal with specific issues, such as catching cheaters or choosing mates. The discipline sees the human brain as having evolved specialized functions, called cognitive modules, or psychological adaptations which are shaped by natural selection. Examples include language-acquisition modules, incest-avoidance mechanisms, cheater-detection mechanisms, intelligence and sex-specific mating preferences, foraging mechanisms, alliance-tracking mechanisms, agent-detection mechanisms, and others. Some mechanisms, termed domain-specific, deal with recurrent adaptive problems over the course of human evolutionary history. Domain-general mechanisms, on the other hand, are proposed to deal with evolutionary novelty.
Evolutionary psychology has roots in cognitive psychology and evolutionary biology but also draws on behavioral ecology, artificial intelligence, genetics, ethology, anthropology, archaeology, biology, ecopsychology and zoology. It is closely linked to sociobiology, but there are key differences between them, including the emphasis on domain-specific rather than domain-general mechanisms, the relevance of measures of current fitness, the importance of mismatch theory, and psychology rather than behavior.
Nikolaas Tinbergen's four categories of questions can help to clarify the distinctions between several different, but complementary, types of explanations. Evolutionary psychology focuses primarily on the "why?" questions, while traditional psychology focuses on the "how?" questions.
Premises
Evolutionary psychology is founded on several core premises.
The brain is an information processing device, and it produces behavior in response to external and internal inputs.
The brain's adaptive mechanisms were shaped by natural and sexual selection.
Different neural mechanisms are specialized for solving problems in humanity's evolutionary past.
The brain has evolved specialized neural mechanisms that were designed for solving problems that recurred over deep evolutionary time, giving modern humans stone-age minds.
Most contents and processes of the brain are unconscious; and most mental problems that seem easy to solve are actually extremely difficult problems that are solved unconsciously by complicated neural mechanisms.
Human psychology consists of many specialized mechanisms, each sensitive to different classes of information or inputs. These mechanisms combine to manifest behavior.
History
Evolutionary psychology has its historical roots in Charles Darwin's theory of natural selection. In The Origin of Species, Darwin predicted that psychology would develop an evolutionary basis:
Two of his later books were devoted to the study of animal emotions and psychology; The Descent of Man, and Selection in Relation to Sex in 1871 and The Expression of the Emotions in Man and Animals in 1872. Darwin's work inspired William James's functionalist approach to psychology. Darwin's theories of evolution, adaptation, and natural selection have provided insight into why brains function the way they do.
The content of evolutionary psychology has derived from, on the one hand, the biological sciences (especially evolutionary theory as it relates to ancient human environments, the study of paleoanthropology and animal behavior) and, on the other, the human sciences, especially psychology.
Evolutionary biology as an academic discipline emerged with the modern synthesis in the 1930s and 1940s. In the 1930s the study of animal behavior (ethology) emerged with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch.
W.D. Hamilton's (1964) papers on inclusive fitness and Robert Trivers's (1972) theories on reciprocity and parental investment helped to establish evolutionary thinking in psychology and the other social sciences. In 1975, Edward O. Wilson combined evolutionary theory with studies of animal and social behavior, building on the works of Lorenz and Tinbergen, in his book Sociobiology: The New Synthesis.
In the 1970s, two major branches developed from ethology. Firstly, the study of animal social behavior (including humans) generated sociobiology, defined by its pre-eminent proponent Edward O. Wilson in 1975 as "the systematic study of the biological basis of all social behavior" and in 1978 as "the extension of population biology and evolutionary theory to social organization." Secondly, there was behavioral ecology which placed less emphasis on social behavior; it focused on the ecological and evolutionary basis of animal and human behavior.
In the 1970s and 1980s university departments began to include the term evolutionary biology in their titles. The modern era of evolutionary psychology was ushered in, in particular, by Donald Symons' 1979 book The Evolution of Human Sexuality and Leda Cosmides and John Tooby's 1992 book The Adapted Mind. David Buller observed that the term "evolutionary psychology" is sometimes seen as denoting research based on the specific methodological and theoretical commitments of certain researchers from the Santa Barbara school (University of California), thus some evolutionary psychologists prefer to term their work "human ecology", "human behavioural ecology" or "evolutionary anthropology" instead.
From psychology there are the primary streams of developmental, social and cognitive psychology. Establishing some measure of the relative influence of genetics and environment on behavior has been at the core of behavioral genetics and its variants, notably studies at the molecular level that examine the relationship between genes, neurotransmitters and behavior. Dual inheritance theory (DIT), developed in the late 1970s and early 1980s, has a slightly different perspective by trying to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. DIT is seen by some as a "middle-ground" between views that emphasize human universals versus those that emphasize cultural variation.
Theoretical foundations
The theories on which evolutionary psychology is based originated with Charles Darwin's work, including his speculations about the evolutionary origins of social instincts in humans. Modern evolutionary psychology, however, is possible only because of advances in evolutionary theory in the 20th century.
Evolutionary psychologists say that natural selection has provided humans with many psychological adaptations, in much the same way that it generated humans' anatomical and physiological adaptations. As with adaptations in general, psychological adaptations are said to be specialized for the environment in which an organism evolved, the environment of evolutionary adaptedness. Sexual selection provides organisms with adaptations related to mating. For male mammals, which have a relatively high maximal potential reproduction rate, sexual selection leads to adaptations that help them compete for females. For female mammals, with a relatively low maximal potential reproduction rate, sexual selection leads to choosiness, which helps females select higher quality mates. Charles Darwin described both natural selection and sexual selection, and he relied on group selection to explain the evolution of altruistic (self-sacrificing) behavior. But group selection was considered a weak explanation, because in any group the less altruistic individuals will be more likely to survive, and the group will become less self-sacrificing as a whole.
In 1964, the evolutionary biologist William D. Hamilton proposed inclusive fitness theory, emphasizing a gene-centered view of evolution. Hamilton noted that genes can increase the replication of copies of themselves into the next generation by influencing the organism's social traits in such a way that (statistically) results in helping the survival and reproduction of other copies of the same genes (most simply, identical copies in the organism's close relatives). According to Hamilton's rule, self-sacrificing behaviors (and the genes influencing them) can evolve if they typically help the organism's close relatives so much that it more than compensates for the individual animal's sacrifice. Inclusive fitness theory resolved the issue of how altruism can evolve. Other theories also help explain the evolution of altruistic behavior, including evolutionary game theory, tit-for-tat reciprocity, and generalized reciprocity. These theories help to explain the development of altruistic behavior, and account for hostility toward cheaters (individuals that take advantage of others' altruism).
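Hamilton's rule mentioned here is conventionally written as a one-line inequality; a standard statement follows, with symbols as conventionally defined rather than drawn verbatim from this article:

```latex
% Hamilton's rule: an altruism-promoting gene can spread when
%   r b > c,
% where r is the genetic relatedness of actor and recipient,
%       b is the reproductive benefit to the recipient, and
%       c is the reproductive cost to the actor.
rb > c
```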
Several mid-level evolutionary theories inform evolutionary psychology. The r/K selection theory proposes that some species prosper by having many offspring, while others follow the strategy of having fewer offspring but investing much more in each one. Humans follow the second strategy. Parental investment theory explains how parents invest more or less in individual offspring based on how successful those offspring are likely to be, and thus how much they might improve the parents' inclusive fitness. According to the Trivers–Willard hypothesis, parents in good conditions tend to invest more in sons (who are best able to take advantage of good conditions), while parents in poor conditions tend to invest more in daughters (who are best able to have successful offspring even in poor conditions). According to life history theory, animals evolve life histories to match their environments, determining details such as age at first reproduction and number of offspring. Dual inheritance theory posits that genes and human culture have interacted, with genes affecting the development of culture, and culture, in turn, affecting human evolution on a genetic level, in a similar way to the Baldwin effect.
Evolved psychological mechanisms
Evolutionary psychology is based on the hypothesis that, just like hearts, lungs, livers, kidneys, and immune systems, cognition has a functional structure that has a genetic basis, and therefore has evolved by natural selection. Like other organs and tissues, this functional structure should be universally shared amongst a species and should solve important problems of survival and reproduction.
Evolutionary psychologists seek to understand psychological mechanisms by understanding the survival and reproductive functions they might have served over the course of evolutionary history. These might include abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, cooperate with others and follow leaders. Consistent with the theory of natural selection, evolutionary psychology sees humans as often in conflict with others, including mates and relatives. For instance, a mother may wish to wean her offspring from breastfeeding earlier than does her infant, which frees up the mother to invest in additional offspring. Evolutionary psychology also recognizes the role of kin selection and reciprocity in evolving prosocial traits such as altruism. Like chimpanzees and bonobos, humans have subtle and flexible social instincts, allowing them to form extended families, lifelong friendships, and political alliances. In studies testing theoretical predictions, evolutionary psychologists have made modest findings on topics such as infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price and parental investment.
Another example is the possible evolved mechanism underlying depression. Clinical depression is maladaptive, but evolutionary approaches ask whether some of the mechanisms underlying it were once adaptive. Over evolutionary history, animals and humans repeatedly faced threats to survival, which strongly shaped the fight-or-flight response. For instance, separation from a guardian causes distress in young mammals, activating the hypothalamic-pituitary-adrenal axis and producing emotional and behavioral changes; responses of this kind may have helped mammals cope with separation.
Historical topics
Proponents of evolutionary psychology in the 1990s made some explorations of historical events, but the response from historical experts was highly negative, and there has been little effort to continue that line of research. Historian Lynn Hunt, summarizing the historians' complaints, states that "the few attempts to build up a subfield of psychohistory collapsed under the weight of its presuppositions." She concludes that, as of 2014, the "'iron curtain' between historians and psychology...remains standing."
Products of evolution: adaptations, exaptations, byproducts, and random variation
Not all traits of organisms are evolutionary adaptations. Traits may also be exaptations, byproducts of adaptations (sometimes called "spandrels"), or random variation between individuals.
Psychological adaptations are hypothesized to be innate or relatively easy to learn and to manifest in cultures worldwide. For example, the ability of toddlers to learn a language with virtually no training is likely to be a psychological adaptation. On the other hand, ancestral humans did not read or write, so today learning to read and write requires extensive training and presumably involves the repurposing of cognitive capacities that evolved in response to selection pressures unrelated to written language. However, variations in manifest behavior can result from universal mechanisms interacting with different local environments. For example, Caucasians who move from a northern climate to the equator will have darker skin. The mechanisms regulating their pigmentation do not change; rather, the inputs to those mechanisms change, resulting in different outputs.
One of the tasks of evolutionary psychology is to identify which psychological traits are likely to be adaptations, byproducts or random variation. George C. Williams suggested that an "adaptation is a special and onerous concept that should only be used where it is really necessary." As noted by Williams and others, adaptations can be identified by their improbable complexity, species universality, and adaptive functionality.
Obligate and facultative adaptations
A question that may be asked about an adaptation is whether it is generally obligate (relatively robust in the face of typical environmental variation) or facultative (sensitive to typical environmental variation). The sweet taste of sugar and the pain of hitting one's knee against concrete are the result of fairly obligate psychological adaptations; typical environmental variability during development does not much affect their operation. By contrast, facultative adaptations are somewhat like "if-then" statements: the adaptation for skin to tan, for example, is conditional on exposure to sunlight. When a psychological adaptation is facultative, evolutionary psychologists concern themselves with how developmental and environmental inputs influence the expression of the adaptation.
Cultural universals
Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations. Cultural universals include behaviors related to language, cognition, social roles, gender roles, and technology. Evolved psychological adaptations (such as the ability to learn a language) interact with cultural inputs to produce specific behaviors (e.g., the specific language learned).
Basic gender differences, such as greater eagerness for sex among men and greater coyness among women, are explained as sexually dimorphic psychological adaptations that reflect the different reproductive strategies of males and females. Studies have found that male and female personality traits differ across a broad spectrum: males score higher on traits relating to dominance, tension, and directness, while females score higher on organizational behavior and more emotion-based characteristics.
Evolutionary psychologists contrast their approach to what they term the "standard social science model," according to which the mind is a general-purpose cognition device shaped almost entirely by culture.
Environment of evolutionary adaptedness
Evolutionary psychology argues that to properly understand the functions of the brain, one must understand the properties of the environment in which the brain evolved. That environment is often referred to as the "environment of evolutionary adaptedness".
The idea of an environment of evolutionary adaptedness was first explored as a part of attachment theory by John Bowlby. This is the environment to which a particular evolved mechanism is adapted. More specifically, the environment of evolutionary adaptedness is defined as the set of historically recurring selection pressures that formed a given adaptation, as well as those aspects of the environment that were necessary for the proper development and functioning of the adaptation.
Humans, the genus Homo, appeared between 1.5 and 2.5 million years ago, a time that roughly coincides with the start of the Pleistocene 2.6 million years ago. Because the Pleistocene ended a mere 12,000 years ago, most human adaptations either newly evolved during the Pleistocene, or were maintained by stabilizing selection during the Pleistocene. Evolutionary psychology, therefore, proposes that the majority of human psychological mechanisms are adapted to reproductive problems frequently encountered in Pleistocene environments. In broad terms, these problems include those of growth, development, differentiation, maintenance, mating, parenting, and social relationships.
The environment of evolutionary adaptedness is significantly different from modern society. The ancestors of modern humans lived in smaller groups, had more cohesive cultures, and had more stable and rich contexts for identity and meaning. Researchers look to existing hunter-gatherer societies for clues as to how hunter-gatherers lived in the environment of evolutionary adaptedness. Unfortunately, the few surviving hunter-gatherer societies are different from each other, and they have been pushed out of the best land and into harsh environments, so it is not clear how closely they reflect ancestral culture. However, all around the world small-band hunter-gatherers offer a similar developmental system for the young ("hunter-gatherer childhood model," Konner, 2005; "evolved developmental niche" or "evolved nest," Narvaez et al., 2013). The characteristics of the niche are largely the same as for social mammals, which evolved over 30 million years ago: soothing perinatal experience, several years of on-request breastfeeding, nearly constant affection or physical proximity, responsiveness to need (mitigating offspring distress), self-directed play, and, for humans, multiple responsive caregivers. Initial studies show the importance of these components in early life for positive child outcomes.
Evolutionary psychologists sometimes look to chimpanzees, bonobos, and other great apes for insight into human ancestral behavior.
Mismatches
Since an organism's adaptations were suited to its ancestral environment, a new and different environment can create a mismatch. Because humans are mostly adapted to Pleistocene environments, psychological mechanisms sometimes exhibit "mismatches" to the modern environment. Although over 20,000 people are murdered by guns in the US annually, whereas spiders and snakes kill only a handful, people nonetheless learn to fear spiders and snakes about as easily as they learn to fear a pointed gun, and more easily than an unpointed gun, rabbits, or flowers. A potential explanation is that spiders and snakes were a threat to human ancestors throughout the Pleistocene, whereas guns (and rabbits and flowers) were not. There is thus a mismatch between humans' evolved fear-learning psychology and the modern environment.
This mismatch also shows up in the phenomenon of the supernormal stimulus, a stimulus that elicits a response more strongly than the stimulus for which the response evolved. The term was coined by Niko Tinbergen to refer to non-human animal behavior, but psychologist Deirdre Barrett has argued that supernormal stimulation governs the behavior of humans as powerfully as that of other animals. She explained junk food as an exaggerated stimulus to cravings for salt, sugar, and fats, and described television as an exaggeration of social cues of laughter, smiling faces, and attention-grabbing action. Magazine centerfolds and double cheeseburgers exploit instincts adapted for an environment of evolutionary adaptedness in which breast development was a sign of health, youth, and fertility in a prospective mate, and fat was a rare and vital nutrient. The psychologist Mark van Vugt has argued that modern organizational leadership is a mismatch: humans are not adapted to work in large, anonymous bureaucratic structures with formal hierarchies. The human mind still responds to personalized, charismatic leadership primarily in the context of informal, egalitarian settings, hence the dissatisfaction and alienation that many employees experience. Salaries, bonuses, and other privileges exploit instincts for relative status, which particularly attract males to senior executive positions.
Research methods
Evolutionary theory is heuristic in that it may generate hypotheses that might not be developed from other theoretical approaches. One of the main goals of adaptationist research is to identify which organismic traits are likely to be adaptations, and which are byproducts or random variations. As noted earlier, adaptations are expected to show evidence of complexity, functionality, and species universality, while byproducts or random variation will not. In addition, adaptations are expected to be presented as proximate mechanisms that interact with the environment in either a generally obligate or facultative fashion (see above). Evolutionary psychologists are also interested in identifying these proximate mechanisms (sometimes termed "mental mechanisms" or "psychological adaptations") and what type of information they take as input, how they process that information, and their outputs. Evolutionary developmental psychology, or "evo-devo," focuses on how adaptations may be activated at certain developmental times (e.g., losing baby teeth, adolescence, etc.) or how events during the development of an individual may alter life-history trajectories.
Evolutionary psychologists use several strategies to develop and test hypotheses about whether a psychological trait is likely to be an evolved adaptation; Buss (2011) describes several such methods.
Evolutionary psychologists also use various sources of data for testing, including experiments, archaeological records, data from hunter-gatherer societies, observational studies, neuroscience data, self-reports and surveys, public records, and human products.
Recently, additional methods and tools have been introduced based on fictional scenarios, mathematical models, and multi-agent computer simulations.
Main areas of research
Foundational areas of research in evolutionary psychology can be divided into broad categories of adaptive problems that arise from evolutionary theory itself: survival, mating, parenting, family and kinship, interactions with non-kin, and cultural evolution.
Survival and individual-level psychological adaptations
Problems of survival are clear targets for the evolution of physical and psychological adaptations. Major problems the ancestors of present-day humans faced included food selection and acquisition; territory selection and physical shelter; and avoiding predators and other environmental threats.
Consciousness
Consciousness meets George Williams' criteria of species universality, complexity, and functionality, and it is a trait that apparently increases fitness.
In his paper "Evolution of consciousness," John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness. In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social and natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine. Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars. Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought. Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account one's own mass when moving safely among tree branches. Consistent with this hypothesis, Gordon Gallup found that chimpanzees and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests.
The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behavior involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviors are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place seemingly outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so.
Evolutionary psychology approaches self-deception as an adaptation that can improve one's results in social exchanges.
Sleep may have evolved to conserve energy when activity would be less fruitful or more dangerous, such as at night, and especially during the winter season.
Sensation and perception
Many experts, such as Jerry Fodor, write that the purpose of perception is knowledge, but evolutionary psychologists hold that its primary purpose is to guide action. For example, they say, depth perception seems to have evolved not to help us know the distances to other objects but rather to help us move around in space. Evolutionary psychologists say that animals from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge.
Building and maintaining sense organs is metabolically expensive, so these organs evolve only when they improve an organism's fitness. More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to fitness. Perception accurately mirrors the world; animals get useful, accurate information through their senses.
Scientists who study perception and sensation have long understood the human senses as adaptations to their surrounding worlds. Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world. Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects. Sound waves go around corners and interact with obstacles, creating a complex pattern that includes useful information about the sources of and distances to objects. Larger animals naturally make lower-pitched sounds as a consequence of their size. The range over which an animal hears, on the other hand, is determined by adaptation. Homing pigeons, for example, can hear the very low-pitched sound (infrasound) that carries great distances, even though most smaller animals detect higher-pitched sounds. Taste and smell respond to chemicals in the environment that are thought to have been significant for fitness in the environment of evolutionary adaptedness. For example, salt and sugar were apparently both valuable to the human or pre-human inhabitants of the environment of evolutionary adaptedness, so present-day humans have an intrinsic hunger for salty and sweet tastes. The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain. Pain, while unpleasant, is adaptive. An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation. For example, one's eyes automatically adjust to dim or bright ambient light. Sensory abilities of different organisms often coevolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make.
Evolutionary psychologists contend that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks. For example, people with damage to a particular part of the brain have the specific defect of not being able to recognize faces (prosopagnosia). Evolutionary psychology suggests that this indicates a so-called face-reading module.
Learning and facultative adaptations
In evolutionary psychology, learning is said to be accomplished through evolved capacities, specifically facultative adaptations. Facultative adaptations express themselves differently depending on input from the environment. Sometimes the input comes during development and helps shape that development. For example, migrating birds learn to orient themselves by the stars during a critical period in their maturation. Evolutionary psychologists believe that humans also learn language along an evolved program, also with critical periods. The input can also come during daily tasks, helping the organism cope with changing environmental conditions. For example, animals evolved Pavlovian conditioning in order to solve problems about causal relationships. Animals accomplish learning tasks most easily when those tasks resemble problems that they faced in their evolutionary past, such as a rat learning where to find food or water. Learning capacities sometimes demonstrate differences between the sexes. In many animal species, for example, males can solve spatial problems faster and more accurately than females, due to the effects of male hormones during development. The same might be true of humans.
Emotion and motivation
Motivations direct and energize behavior, while emotions provide the affective component to motivation, positive or negative. In the early 1970s, Paul Ekman and colleagues began a line of research which suggests that many emotions are universal. He found evidence that humans share at least five basic emotions: fear, sadness, happiness, anger, and disgust. Social emotions evidently evolved to motivate social behaviors that were adaptive in the environment of evolutionary adaptedness. For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status.
Motivation has a neurobiological basis in the reward system of the brain. Recently, it has been suggested that reward systems may evolve in such a way that there may be an inherent or unavoidable trade-off in the motivational system for activities of short versus long duration.
Cognition
Cognition refers to internal representations of the world and internal information processing. From an evolutionary psychology perspective, cognition is not "general purpose"; it uses heuristics, or strategies, that generally increase the likelihood of solving problems that the ancestors of present-day humans routinely faced. For example, present-day humans are far more likely to solve logic problems that involve detecting cheating (a common problem given humans' social nature) than the same logic problem put in purely abstract terms. Since the ancestors of present-day humans did not encounter truly random events, present-day humans may be cognitively predisposed to incorrectly identify patterns in random sequences. The "gambler's fallacy" is one example: gamblers may falsely believe that they have hit a "lucky streak" even when each outcome is actually random and independent of previous trials. Most people believe that if a fair coin has been flipped 9 times and comes up heads each time, then on the tenth flip there is a greater than 50% chance of getting tails. Humans also find it far easier to make diagnoses or predictions using frequency data than when the same information is presented as probabilities or percentages. This could be because the ancestors of present-day humans lived in relatively small tribes (usually with fewer than 150 people), where frequency information was more readily available, and experienced fewer truly random occurrences in their lives.
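As a rough illustration (a hypothetical sketch, not drawn from the sources cited here; the trial count and variable names are arbitrary), a short simulation shows that the tenth flip of a fair coin remains 50/50 regardless of the preceding streak:

```python
# A hypothetical sketch: estimate the probability of tails on the
# tenth flip of a fair coin, given nine heads in a row.
import random

random.seed(0)
streaks = 0          # runs of nine heads observed
tails_after = 0      # of those, how many ended in tails

for _ in range(1_000_000):
    flips = [random.random() < 0.5 for _ in range(10)]  # True = heads
    if all(flips[:9]):        # first nine flips were all heads
        streaks += 1
        if not flips[9]:      # tenth flip came up tails
            tails_after += 1

# Prints roughly 0.5: the streak does not make tails any more likely.
print(tails_after / streaks)
```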
Personality
Evolutionary psychology is primarily interested in finding commonalities between people, or basic human psychological nature. From an evolutionary perspective, the fact that people have fundamental differences in personality traits initially presents something of a puzzle. (Note: The field of behavioral genetics is concerned with statistically partitioning differences between people into genetic and environmental sources of variance. However, understanding the concept of heritability can be tricky – heritability refers only to the differences between people, never the degree to which the traits of an individual are due to environmental or genetic factors, since traits are always a complex interweaving of both.)
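To make the variance-partitioning idea concrete, the following toy sketch computes a heritability estimate as the share of between-person trait variance attributable to genetic differences; the equal genetic and environmental variances are an arbitrary assumption for the demonstration, not a claim from the literature:

```python
# A toy sketch of variance partitioning: "heritability" here is the
# fraction of between-person trait variance due to genetic differences.
# The 50/50 variance split below is an arbitrary assumption.
import random

random.seed(1)
n = 100_000
genetic = [random.gauss(0, 1) for _ in range(n)]
environmental = [random.gauss(0, 1) for _ in range(n)]
trait = [g + e for g, e in zip(genetic, environmental)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

h2 = variance(genetic) / variance(trait)  # ~0.5 by construction
print(round(h2, 2))
```

Note that the estimate describes differences between people in the sample; it says nothing about any single individual's trait being "50% genetic", which is exactly the caveat about heritability above.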
Personality traits are conceptualized by evolutionary psychologists as due to normal variation around an optimum, due to frequency-dependent selection (behavioral polymorphisms), or as facultative adaptations. Like variability in height, some personality traits may simply reflect inter-individual variability around a general optimum. Or, personality traits may represent different genetically predisposed "behavioral morphs" – alternate behavioral strategies that depend on the frequency of competing behavioral strategies in the population. For example, if most of the population is generally trusting and gullible, the behavioral morph of being a "cheater" (or, in the extreme case, a sociopath) may be advantageous. Finally, like many other psychological adaptations, personality traits may be facultative – sensitive to typical variations in the social environment, especially during early development. For example, later-born children are more likely than firstborns to be rebellious, less conscientious and more open to new experiences, which may be advantageous to them given their particular niche in family structure.
Shared environmental influences do play a role in personality and are not always of less importance than genetic factors. However, shared environmental influences often decrease to near zero after adolescence but do not completely disappear.
Language
According to Steven Pinker, who builds on the work of Noam Chomsky, the universal human ability to learn to talk between the ages of 1 and 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's The Language Instinct). Pinker and Bloom (1990) argue that language as a mental faculty shares many similarities with the complex organs of the body, which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop.
Pinker follows Chomsky in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and that it only needs to be activated by interaction. Chomsky himself does not believe language to have evolved as an adaptation, but suggests that it likely evolved as a byproduct of some other adaptation, a so-called spandrel. But Pinker and Bloom argue that the organic nature of language strongly suggests that it has an adaptational origin.
Evolutionary psychologists hold that the FOXP2 gene may well be associated with the evolution of human language. In the 1980s, psycholinguist Myrna Gopnik identified a dominant gene that causes language impairment in the KE family of Britain. This gene turned out to be a mutation of the FOXP2 gene. Humans have a unique allele of this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans. However, the once-popular idea that FOXP2 is a 'grammar gene' or that it triggered the emergence of language in Homo sapiens is now widely discredited.
Currently, several competing theories about the evolutionary origin of language coexist, none of them having achieved a general consensus. Researchers of language acquisition in primates and humans, such as Michael Tomasello and Talmy Givón, argue that the innatist framework has understated the role of imitation in learning and that it is not at all necessary to posit the existence of an innate grammar module to explain human language acquisition. Tomasello argues that studies of how children and primates actually acquire communicative skills suggest that humans learn complex behavior through experience, so that instead of a module specifically dedicated to language acquisition, language is acquired by the same cognitive mechanisms that are used to acquire all other kinds of socially transmitted behavior.
On the issue of whether language is best seen as having evolved as an adaptation or as a spandrel, evolutionary biologist W. Tecumseh Fitch, following Stephen J. Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation. He criticizes some strands of evolutionary psychology for suggesting a pan-adaptationist view of evolution, and dismisses Pinker and Bloom's question of whether "language has evolved as an adaptation" as misleading. He argues instead that, from a biological viewpoint, the evolutionary origins of language are best conceptualized as the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made by Terrence Deacon, who in The Symbolic Species argues that the different features of language have co-evolved with the evolution of the mind and that the ability to use symbolic communication is integrated in all other cognitive processes.
If the theory that language could have evolved as a single adaptation is accepted, the question becomes which of its many functions has been the basis of adaptation. Several evolutionary hypotheses have been posited: that language evolved for the purpose of social grooming, that it evolved as a way to show mating potential or that it evolved to form social contracts. Evolutionary psychologists recognize that these theories are all speculative and that much more evidence is required to understand how language might have been selectively adapted.
Mating
Given that sexual reproduction is the means by which genes are propagated into future generations, sexual selection plays a large role in human evolution. Human mating, then, is of interest to evolutionary psychologists who aim to investigate evolved mechanisms to attract and secure mates. Several lines of research have stemmed from this interest, such as studies of mate selection, mate poaching, mate retention, mating preferences, and conflict between the sexes.
In 1972 Robert Trivers published an influential paper on sex differences that is now referred to as parental investment theory. The size difference between gametes (anisogamy) is the fundamental, defining difference between males (small gametes – sperm) and females (large gametes – ova). Trivers noted that anisogamy typically results in different levels of parental investment between the sexes, with females initially investing more. Trivers proposed that this difference in parental investment leads to the sexual selection of different reproductive strategies between the sexes and to sexual conflict. For example, he suggested that the sex that invests less in offspring will generally compete for access to the higher-investing sex to increase their inclusive fitness. Trivers posited that differential parental investment led to the evolution of sexual dimorphisms in mate choice, intra- and intersexual reproductive competition, and courtship displays. In mammals, including humans, females make a much larger parental investment than males (i.e., gestation followed by childbirth and lactation). Parental investment theory is a branch of life history theory.
Buss and Schmitt's (1993) sexual strategies theory proposed that, due to differential parental investment, humans have evolved sexually dimorphic adaptations related to "sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment." Their strategic interference theory suggested that conflict between the sexes occurs when the preferred reproductive strategies of one sex interfere with those of the other sex, resulting in the activation of emotional responses such as anger or jealousy.
Women are generally more selective when choosing mates, especially under long-term mating conditions. However, under some circumstances short-term mating can provide benefits to women as well, such as fertility insurance, trading up to better genes, reducing the risk of inbreeding, and insurance protection for their offspring.
Due to male paternity uncertainty, sex differences have been found in the domain of sexual jealousy: females generally react more adversely to emotional infidelity, while males react more strongly to sexual infidelity. This particular pattern is predicted because the costs involved in mating differ for each sex. Women, on average, should prefer a mate who can offer resources (e.g., financial support, commitment), and thus a woman risks losing such resources to a mate who commits emotional infidelity. Men, on the other hand, are never certain of the genetic paternity of their children because they do not bear the offspring themselves. This suggests that for men sexual infidelity is generally more aversive than emotional infidelity, because investing resources in another man's offspring does not lead to the propagation of their own genes.
Another interesting line of research is that which examines women's mate preferences across the ovulatory cycle. The theoretical underpinning of this research is that ancestral women would have evolved mechanisms to select mates with certain traits depending on their hormonal status. Known as the ovulatory shift hypothesis, the theory posits that, during the ovulatory phase of a woman's cycle (approximately days 10–15 of a woman's cycle), a woman who mated with a male with high genetic quality would have been more likely, on average, to produce and bear a healthy offspring than a woman who mated with a male with low genetic quality. These putative preferences are predicted to be especially apparent for short-term mating domains because a potential male mate would only be offering genes to a potential offspring. This hypothesis allows researchers to examine whether women select mates who have characteristics that indicate high genetic quality during the high fertility phase of their ovulatory cycles. Indeed, studies have shown that women's preferences vary across the ovulatory cycle. In particular, Haselton and Miller (2006) showed that highly fertile women prefer creative but poor men as short-term mates. Creativity may be a proxy for good genes. Research by Gangestad et al. (2004) indicates that highly fertile women prefer men who display social presence and intrasexual competition; these traits may act as cues that would help women predict which men may have, or would be able to acquire, resources.
Parenting
Reproduction is always costly for women, and can also be for men. Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival and further reproductive output.
Parental investment is any parental expenditure (time, energy etc.) that benefits one offspring at a cost to parents' ability to invest in other components of fitness (Clutton-Brock 1991: 9; Trivers 1972). Components of fitness (Beatty 1992) include the well-being of existing offspring, parents' future reproduction, and inclusive fitness through aid to kin (Hamilton, 1964). Parental investment theory is a branch of life history theory.
The benefits of parental investment to the offspring are large and are associated with the effects on condition, growth, survival, and ultimately, on the reproductive success of the offspring. However, these benefits can come at the cost of the parent's ability to reproduce in the future e.g. through the increased risk of injury when defending offspring against predators, the loss of mating opportunities whilst rearing offspring, and an increase in the time to the next reproduction. Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will likely evolve when the benefits exceed the costs.
The Cinderella effect is the alleged tendency of stepchildren to be physically, emotionally, or sexually abused, neglected, murdered, or otherwise mistreated at the hands of their stepparents at significantly higher rates than children raised by their genetic parents. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters. Daly and Wilson (1996) noted: "Evolutionary thinking led to the discovery of the most important risk factor for child homicide – the presence of a stepparent. Parental efforts and investments are valuable resources, and selection favors those parental psyches that allocate effort effectively to promote fitness. The adaptive problems that challenge parental decision-making include both the accurate identification of one's offspring and the allocation of one's resources among them with sensitivity to their needs and abilities to convert parental investment into fitness increments…. Stepchildren were seldom or never so valuable to one's expected fitness as one's own offspring would be, and those parental psyches that were easily parasitized by just any appealing youngster must always have incurred a selective disadvantage" (Daly & Wilson, 1996, pp. 64–65). However, they note that not all stepparents will "want" to abuse their partner's children, nor is genetic parenthood any insurance against abuse. They see stepparental care as primarily "mating effort" towards the genetic parent.
Family and kin
Inclusive fitness is the sum of an organism's classical fitness (how many of its own offspring it produces and supports) and the number of equivalents of its own offspring it can add to the population by supporting others. The first component is called classical fitness by Hamilton (1964).
From the gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Until 1964, it was generally believed that genes only achieved this by causing the individual to leave the maximum number of viable offspring. However, in 1964 W. D. Hamilton proved mathematically that, because close relatives of an organism share some identical genes, a gene can also increase its evolutionary success by promoting the reproduction and survival of these related or otherwise similar individuals. Hamilton concluded that this leads natural selection to favor organisms that would behave in ways that maximize their inclusive fitness. It is also true that natural selection favors behavior that maximizes personal fitness.
Hamilton's rule describes mathematically whether or not a gene for altruistic behavior will spread in a population:

rb > c

where

c is the reproductive cost to the altruist,

b is the reproductive benefit to the recipient of the altruistic behavior, and

r is the probability, above the population average, of the individuals sharing an altruistic gene – commonly viewed as "degree of relatedness".
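As a worked illustration (a minimal hypothetical sketch; the function name and numbers are assumptions, not taken from the sources cited here), the rule can be evaluated directly:

```python
# A minimal sketch of Hamilton's rule (rb > c); the function name and
# numbers are illustrative assumptions only.

def altruism_favored(r: float, b: float, c: float) -> bool:
    """True when the relatedness-weighted benefit exceeds the cost."""
    return r * b > c

# Helping a full sibling (r = 0.5) is favored only when the benefit
# to the sibling is more than twice the altruist's cost.
print(altruism_favored(r=0.5, b=3.0, c=1.0))  # True:  0.5 * 3.0 > 1.0
print(altruism_favored(r=0.5, b=1.5, c=1.0))  # False: 0.5 * 1.5 < 1.0
```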
The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. Altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene (Chapter 6) and The Extended Phenotype, this must be distinguished from the green-beard effect.
Although it is generally true that humans tend to be more altruistic toward their kin than toward non-kin, the relevant proximate mechanisms that mediate this cooperation have been debated (see kin recognition), with some arguing that kin status is determined primarily via social and cultural factors (such as co-residence, maternal association of sibs, etc.), while others have argued that kin recognition can also be mediated by biological factors such as facial resemblance and immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors see Lieberman, Tooby, and Cosmides (2007).
Whatever the proximate mechanisms of kin recognition there is substantial evidence that humans act generally more altruistically to close genetic kin compared to genetic non-kin.
Interactions with non-kin / reciprocity
Although interactions with non-kin are generally less altruistic compared to those with kin, cooperation can be maintained with non-kin via mutually beneficial reciprocity as was proposed by Robert Trivers. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favored even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act:
w > c/b
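As a minimal sketch of why this threshold arises (function names and payoff values are hypothetical, assuming a standard repeated "donation game" rather than any model from the sources cited here), one can compare a tit-for-tat cooperator with an unconditional defector:

```python
# A minimal sketch of the w > c/b condition in a repeated "donation
# game": cooperating costs the donor c and delivers b to the partner;
# after each round the interaction continues with probability w.
# Payoff values and function names are illustrative assumptions.

def tft_vs_tft(b: float, c: float, w: float) -> float:
    # Two tit-for-tat players cooperate every round, earning b - c per
    # round over an expected 1 / (1 - w) rounds.
    return (b - c) / (1 - w)

def alld_vs_tft(b: float, c: float, w: float) -> float:
    # An unconditional defector exploits tit-for-tat once (gaining b);
    # tit-for-tat then retaliates, so no further payoff accrues.
    return b

b, c = 3.0, 1.0                 # threshold: w = c/b = 1/3
for w in (0.2, 0.5):
    stable = tft_vs_tft(b, c, w) >= alld_vs_tft(b, c, w)
    print(f"w = {w}: cooperation stable? {stable}")
# w = 0.2 -> False (below c/b); w = 0.5 -> True (above c/b)
```

With b = 3 and c = 1, cooperation becomes stable once the chance of meeting again exceeds one third, matching the inequality above.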
Reciprocity can also be indirect if information about previous interactions is shared. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help. The calculations of indirect reciprocity are complicated and only a tiny fraction of this universe has been uncovered, but again a simple rule has emerged. Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone's reputation exceeds the cost-to-benefit ratio of the altruistic act:
q > c/b
One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known.
Trivers argues that friendship and various social emotions evolved in order to manage reciprocity. Liking and disliking, he says, evolved to help present-day humans' ancestors form coalitions with others who reciprocated and to exclude those who did not reciprocate. Moral indignation may have evolved to prevent one's altruism from being exploited by cheaters, and gratitude may have motivated present-day humans' ancestors to reciprocate appropriately after benefiting from others' altruism. Likewise, present-day humans feel guilty when they fail to reciprocate. These social motivations match what evolutionary psychologists expect to see in adaptations that evolved to maximize the benefits and minimize the drawbacks of reciprocity.
Evolutionary psychologists say that humans have psychological adaptations that evolved specifically to help us identify nonreciprocators, commonly referred to as "cheaters." In 1993, Robert Frank and his associates found that participants in a prisoner's dilemma scenario were often able to predict whether their partners would "cheat", based on a half-hour of unstructured social interaction. In a 1996 experiment, for example, Linda Mealey and her colleagues found that people were better at remembering faces when those faces were associated with stories about the individuals cheating (such as embezzling money from a church).
Strong reciprocity (or "tribal reciprocity")
Humans may have an evolved set of psychological adaptations that predispose them to be more cooperative than would otherwise be expected with members of their tribal in-group, and nastier to members of tribal out-groups. These adaptations may have been a consequence of tribal warfare. Humans may also have predispositions for "altruistic punishment" – to punish in-group members who violate in-group rules, even when this altruistic behavior cannot be justified in terms of helping those you are related to (kin selection), cooperating with those who you will interact with again (direct reciprocity), or cooperating to better your reputation with others (indirect reciprocity).
Evolutionary psychology and culture
Though evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations, considerable work has been done on how these adaptations shape and, ultimately, govern culture (Tooby and Cosmides, 1989). Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught. As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally-transmitted material from the group, Tooby and Cosmides (1989), among others, argue that: "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." (Tooby and Cosmides 1989).
Biological explanations of human culture also brought criticism to evolutionary psychology: evolutionary psychologists see the human psyche and physiology as a genetic product and assume that genes contain the information for the development and control of the organism, and that this information is transmitted from one generation to the next via genes. Evolutionary psychologists thereby see physical and psychological characteristics of humans as genetically programmed. Even when evolutionary psychologists acknowledge the influence of the environment on human development, they understand the environment only as an activator or trigger for the programmed developmental instructions encoded in genes. Evolutionary psychologists, for example, believe that the human brain is made up of innate modules, each of which is specialised only for very specific tasks, e.g., an anxiety module. According to evolutionary psychologists, these modules are given before the organism actually develops and are then activated by some environmental event. Critics object that this view is reductionist and that cognitive specialisation only comes about through the interaction of humans with their real environment, rather than the environment of distant ancestors. Interdisciplinary approaches increasingly strive to mediate between these opposing points of view and to highlight that biological and cultural causes need not be antithetical in explaining human behaviour and even complex cultural achievements.
In psychology sub-fields
Developmental psychology
According to Paul Baltes, the benefits granted by evolutionary selection decrease with age. Natural selection has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer disease; if it were a disease that killed 20-year-olds instead of 70-year-olds, natural selection might have eliminated it ages ago. Thus, unaided by evolutionary pressure against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and as the benefits of evolutionary selection decrease with age, the need for modern technological remedies against nonadaptive conditions increases.
Social psychology
As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making). Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences.
When endeavouring to solve a problem, humans from an early age show a determined facial expression, while chimpanzees have no comparable expression. Researchers suspect that the human determined expression evolved because other people will frequently help someone who is visibly working hard on a problem.
Abnormal psychology
Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects of both nature and nurture, and often have multiple contributing causes.
Evolutionary psychologists have suggested that schizophrenia and bipolar disorder may reflect a side-effect of genes with fitness benefits, such as increased creativity. (Some individuals with bipolar disorder are especially creative during their manic phases, and the close relatives of people with schizophrenia have been found to be more likely to have creative professions.) A 1994 report by the American Psychiatric Association found that schizophrenia occurs at roughly the same rate in Western and non-Western cultures, and in industrialized and pastoral societies, suggesting that schizophrenia is neither a disease of civilization nor an arbitrary social invention. Sociopathy may represent an evolutionarily stable strategy, by which a small number of people who cheat on social contracts benefit in a society consisting mostly of non-sociopaths. Mild depression may be an adaptive response to withdraw from, and re-evaluate, situations that have led to disadvantageous outcomes (the "analytical rumination hypothesis") (see Evolutionary approaches to depression).
Trofimova reviewed the most consistent sex differences in psychological abilities and disabilities and linked them to Geodakyan's evolutionary theory of sex (ETS). She pointed out that the pattern of consistent sex differences in physical, verbal, and social dis/abilities corresponds to the ETS's idea of sex dimorphism as a functional specialization of a species. Sex differentiation, according to the ETS, creates two partitions within a species: (1) conservational (females) and (2) variational (males). Female superiority in verbal abilities, rule obedience, socialisation, empathy, and agreeableness can be presented as a reflection of the systemic conservation function of the female sex. Male superiority is mostly noted in exploratory abilities – risk- and sensation-seeking, spatial orientation, and physical strength – and in higher rates of physical aggression. In combination with higher birth and accidental death rates, this pattern might reflect the systemic variational function (testing the boundaries of beneficial characteristics) of the male sex. As a result, psychological sex differences might be influenced by a global tendency within a species to expand its norm of reaction while at the same time preserving the beneficial properties of the species. Moreover, Trofimova suggested a "redundancy pruning" hypothesis as an upgrade of the ETS, pointing to higher rates of psychopathy, dyslexia, autism, and schizophrenia in males than in females. She suggested that the variational function of the "male partition" might also provide irrelevance/redundancy pruning of an excess in the bank of beneficial characteristics of a species, with a continuing resistance to any changes from the norm-driven conservational partition. This might explain the seemingly contradictory combination, in males, of a high drive for social status and power with the lesser (of the two sexes) abilities for social interaction. The high rates of communicative disorders and psychopathy in males might facilitate their higher rates of disengagement from normative expectations and their insensitivity to social disapproval when they deliberately do not follow social norms.
Some of these speculations have yet to be developed into fully testable hypotheses, and a great deal of research is required to confirm their validity.
Antisocial and criminal behavior
Evolutionary psychology has been applied to explain criminal or otherwise immoral behavior as being adaptive or related to adaptive behaviors. Males are generally more aggressive than females, who are more selective of their partners because of the far greater effort they must contribute to pregnancy and child-rearing. The greater aggression of males is hypothesized to stem from the more intense reproductive competition they face: males of low status may be especially vulnerable to being childless, so it may have been evolutionarily advantageous for them to engage in highly risky and violently aggressive behavior to increase their status and therefore reproductive success. This may explain why males are generally involved in more crimes, and why low status and being unmarried are associated with criminality. Furthermore, competition over females is argued to have been particularly intensive in late adolescence and young adulthood, which is theorized to explain why crime rates are particularly high during this period. Some sociologists have underlined differential exposure to androgens as the cause of these behaviors, notably Lee Ellis in his evolutionary neuroandrogenic (ENA) theory.
Many conflicts that result in harm and death involve status, reputation, and seemingly trivial insults. Steven Pinker, in his book The Better Angels of Our Nature, argues that in non-state societies without a police force it was very important to have a credible deterrent against aggression. It was therefore important to be perceived as having a credible reputation for retaliation, and humans consequently developed instincts for revenge as well as for protecting reputation ("honor"). Pinker argues that the development of the state and the police has dramatically reduced the level of violence compared to the ancestral environment. Whenever the state breaks down, which can happen very locally, such as in poor areas of a city, humans again organize in groups for protection and aggression, and concepts such as violent revenge and protecting honor again become extremely important.
Rape is theorized to be a reproductive strategy that facilitates the propagation of the rapist's progeny. Such a strategy may be adopted by men who otherwise are unlikely to be appealing to women and therefore cannot form legitimate relationships, or by high-status men preying on socially vulnerable women who are unlikely to retaliate, in order to increase their reproductive success even further. The sociobiological theories of rape are highly controversial, as traditional theories typically do not consider rape to be a behavioral adaptation, and objections to this theory are made on ethical, religious, and political, as well as scientific, grounds.
Psychology of religion
Adaptationist perspectives on religious belief suggest that, like all behavior, religious behaviors are a product of the human brain. As with all other organ functions, cognition's functional structure has been argued to have a genetic foundation, and is therefore subject to the effects of natural selection and sexual selection. Like other organs and tissues, this functional structure should be universally shared amongst humans and should have solved important problems of survival and reproduction in ancestral environments. However, evolutionary psychologists remain divided on whether religious belief is more likely a consequence of evolved psychological adaptations, or a byproduct of other cognitive adaptations.
Coalitional psychology
Coalitional psychology is an approach to explain political behaviors between different coalitions and the conditionality of these behaviors in evolutionary psychological perspective. This approach assumes that since human beings appeared on the earth, they have evolved to live in groups instead of living as individuals to achieve benefits such as more mating opportunities and increased status. Human beings thus naturally think and act in a way that manages and negotiates group dynamics.
Coalitional psychology offers falsifiable ex ante predictions by positing five hypotheses on how these psychological adaptations operate:
Humans represent groups as a special category of individual: unstable and with a short shadow of the future.
Political entrepreneurs strategically manipulate the coalitional environment, often appealing to emotional devices such as "outrage" to inspire collective action.
Relative gains dominate relations with enemies, whereas absolute gains characterize relations with allies.
Coalitional size and male physical strength will positively predict individual support for aggressive foreign policies.
Individuals with children, particularly women, will differ from those without progeny in their support for aggressive foreign policies.
Reception and criticism
Critics of evolutionary psychology accuse it of promoting genetic determinism, pan-adaptationism (the idea that all behaviors and anatomical features are adaptations), unfalsifiable hypotheses, distal or ultimate explanations of behavior when proximate explanations are superior, and malevolent political or moral ideas.
Ethical implications
Critics have argued that evolutionary psychology might be used to justify existing social hierarchies and reactionary policies. It has also been suggested by critics that evolutionary psychologists' theories and interpretations of empirical data rely heavily on ideological assumptions about race and gender.
In response to such criticism, evolutionary psychologists often caution against committing the naturalistic fallacy – the assumption that "what is natural" is necessarily a moral good. However, their caution against committing the naturalistic fallacy has been criticized as a means to stifle legitimate ethical discussions.
Contradictions in models
Some criticisms of evolutionary psychology point at contradictions between different aspects of the adaptive scenarios it posits. One example concerns the model in which extended social groups selected for modern human brains: the synaptic function of modern human brains requires large amounts of many specific essential nutrients, so a transition to higher requirements for the same essential nutrients, shared by all individuals in a population, would have made forming large groups less likely, because bottleneck foods containing rare essential nutrients would cap group sizes. It is mentioned that some insects have societies with different ranks for each individual, and that monkeys remain socially functional after the removal of most of the brain, as additional arguments against big brains promoting social networking. The model of males as both providers and protectors is criticized for the impossibility of being in two places at once: the male cannot both protect his family at home and be out hunting at the same time. As for the claim that a provider male could buy protection for his family from other males by bartering food that he had hunted, critics point out that the most valuable food (the food containing the rarest essential nutrients) would differ between ecologies, being vegetable in some geographical areas and animal in others. This would make it impossible for hunting styles relying on physical strength or risk-taking to be of universally similar barter value, and would instead make it inevitable that in some parts of Africa, food gathered without any need for major physical strength would be the most valuable to barter for protection. Critics also point to a contradiction between evolutionary psychology's claim that men needed to be more sexually visual than women, in order to assess women's fertility faster than women needed to assess male genes, and its claim that male sexual jealousy guards against infidelity: it would be pointless for a male to assess female fertility quickly if he still needed to assess the risk of a jealous male mate being present and, in that case, his chances of defeating him before mating anyway (there is no point in assessing one necessary condition faster than another necessary condition can possibly be assessed).
Standard social science model
Evolutionary psychology has been entangled in the larger philosophical and social science controversies related to the debate on nature versus nurture. Evolutionary psychologists typically contrast evolutionary psychology with what they call the standard social science model (SSSM). They characterize the SSSM as the "blank slate", "relativist", "social constructionist", and "cultural determinist" perspective that they say dominated the social sciences throughout the 20th century and assumed that the mind was shaped almost entirely by culture.
Critics have argued that evolutionary psychologists created a false dichotomy between their own view and the caricature of the SSSM. Other critics regard the SSSM as a rhetorical device or a straw man and suggest that the scientists whom evolutionary psychologists associate with the SSSM did not believe that the mind was a blank slate devoid of any natural predispositions.
Reductionism and determinism
Some critics view evolutionary psychology as a form of genetic reductionism and genetic determinism, a common critique being that evolutionary psychology does not address the complexity of individual development and experience and fails to explain the influence of genes on behavior in individual cases. Evolutionary psychologists respond that they are working within a nature-nurture interactionist framework that acknowledges that many psychological adaptations are facultative (sensitive to environmental variations during individual development). The discipline is generally not focused on proximate analyses of behavior, but rather its focus is on the study of distal/ultimate causality (the evolution of psychological adaptations). The field of behavioral genetics is focused on the study of the proximate influence of genes on behavior.
Testability of hypotheses
A frequent critique of the discipline is that its hypotheses are often arbitrary and difficult or impossible to adequately test, calling into question its status as an actual scientific discipline, for example because many current traits probably evolved to serve different functions than they do now. Because there is a potentially infinite number of alternative explanations for why a trait evolved, critics contend that it is impossible to determine the exact explanation. While evolutionary psychology hypotheses are difficult to test, evolutionary psychologists assert that testing them is not impossible. Part of the critique of the scientific base of evolutionary psychology is a critique of the concept of the Environment of Evolutionary Adaptation (EEA). Some critics have argued that researchers know so little about the environment in which Homo sapiens evolved that explaining specific traits as adaptations to that environment becomes highly speculative. Evolutionary psychologists respond that they do know many things about this environment, including the facts that present-day humans' ancestors were hunter-gatherers and that they generally lived in small tribes. Edward Hagen argues that humans' past environments were not radically different in the same sense as the Carboniferous or Jurassic periods, and that the animal and plant taxa of the era were similar to those of the modern world, as was the geology and ecology. Hagen argues that few would deny that other organs evolved in the EEA (for example, lungs evolving in an oxygen-rich atmosphere), yet critics question whether or not the brain's EEA is truly knowable, which he argues constitutes selective scepticism. Hagen also argues that most evolutionary psychology research is based on the fact that females can get pregnant and males cannot, which he observes was also true in the EEA.
John Alcock describes this as the "No Time Machine Argument": critics argue that since it is not possible to travel back in time to the EEA, it cannot be determined what was going on there and thus what was adaptive. Alcock argues that present-day evidence allows researchers to be reasonably confident about the conditions of the EEA, and that the fact that so many human behaviours are adaptive in the current environment is evidence that the ancestral environment of humans had much in common with the present one, as these behaviours would have evolved in the ancestral environment. Thus Alcock concludes that researchers can make predictions about the adaptive value of traits. Similarly, Dominic Murphy argues that alternative explanations cannot simply be put forward but instead need their own evidence and predictions: if one explanation makes predictions that the others cannot, it is reasonable to have confidence in that explanation. In addition, Murphy argues that other historical sciences also make predictions about modern phenomena to come up with explanations about past phenomena; for example, cosmologists look for evidence of what we would expect to see in the modern day if the Big Bang were true, while geologists make predictions about modern phenomena to determine if an asteroid wiped out the dinosaurs. Murphy argues that if other historical disciplines can conduct tests without a time machine, then the onus is on the critics to show why evolutionary psychology is untestable if other historical disciplines are not, as "methods should be judged across the board, not singled out for ridicule in one context."
Modularity of mind
Evolutionary psychologists generally presume that, like the body, the mind is made up of many evolved modular adaptations, although there is some disagreement within the discipline regarding the degree of general plasticity, or "generality," of some modules. It has been suggested that modularity evolves because, compared to non-modular networks, it would have conferred an advantage in terms of fitness and because connection costs are lower.
In contrast, some academics argue that it is unnecessary to posit the existence of highly domain-specific modules, and suggest that the neural anatomy of the brain supports a model based on more domain-general faculties and processes. Moreover, empirical support for the domain-specific theory stems almost entirely from performance on variations of the Wason selection task, which is extremely limited in scope as it only tests one subtype of deductive reasoning.
Cultural rather than genetic development of cognitive tools
Psychologist Cecilia Heyes has argued that the picture presented by some evolutionary psychology of the human mind as a collection of cognitive instincts (organs of thought shaped by genetic evolution over very long time periods) does not fit research results. She posits instead that humans have cognitive gadgets: "special-purpose organs of thought" built in the course of development through social interaction. Similar criticisms are articulated by Subrena E. Smith of the University of New Hampshire.
Response by evolutionary psychologists
Evolutionary psychologists have addressed many of their critics (e.g. in books by Segerstråle (2000), Barkow (2005), and Alcock (2001)). Among their rebuttals are that some criticisms are straw men, or are based on an incorrect nature versus nurture dichotomy or on basic misunderstandings of the discipline.
Robert Kurzban suggested that "...critics of the field, when they err, are not slightly missing the mark. Their confusion is deep and profound. It's not like they are marksmen who can't quite hit the center of the target; they're holding the gun backwards." Many have written specifically to correct basic misconceptions.
See also
Affective neuroscience
Behavioural genetics
Biocultural evolution
Biosocial criminology
Collective unconscious
Cognitive neuroscience
Cultural neuroscience
Darwinian Happiness
Darwinian literary studies
Deep social mind
Dunbar's number
Evolution of the brain
List of evolutionary psychologists
Evolutionary origin of religions
Evolutionary psychology and culture
Molecular evolution
Primate cognition
Hominid intelligence
Human ethology
Great ape language
Chimpanzee intelligence
Cooperative eye hypothesis
Id, ego, and superego
Intersubjectivity
Mirror neuron
Origin of language
Origin of speech
Ovulatory shift hypothesis
Primate empathy
Shadow (psychology)
Simulation theory of empathy
Theory of mind
Neuroethology
Paleolithic diet
Paleolithic lifestyle
r/K selection theory
Social neuroscience
Sociobiology
Universal Darwinism
Notes
References
Buss, D. M. (1994). The evolution of desire: Strategies of human mating. New York: Basic Books.
Gaulin, Steven J. C. and Donald H. McBurney. Evolutionary psychology. Prentice Hall. 2003.
Nesse, R. M. (2000). Tinbergen's Four Questions, Organized.
Schacter, Daniel L., Daniel Wegner, and Daniel Gilbert. 2007. Psychology. Worth Publishers.
Further reading
Heylighen F. (2012). "Evolutionary Psychology", in: A. Michalos (ed.): Encyclopedia of Quality of Life Research (Springer, Berlin).
Gerhard Medicus (2017). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB
Oikkonen, Venla: Gender, Sexuality and Reproduction in Evolutionary Narratives. London: Routledge, 2013.
External links
PsychTable.org Collaborative effort to catalog human psychological adaptations
What Is Evolutionary Psychology? by Clinical Evolutionary Psychologist Dale Glaebach.
Evolutionary Psychology – Approaches in Psychology
Academic societies
Human Behavior and Evolution Society; international society dedicated to using evolutionary theory to study human nature
The International Society for Human Ethology; promotes ethological perspectives on the study of humans worldwide
European Human Behaviour and Evolution Association an interdisciplinary society that supports the activities of European researchers with an interest in evolutionary accounts of human cognition, behavior and society
The Association for Politics and the Life Sciences; an international and interdisciplinary association of scholars, scientists, and policymakers concerned with evolutionary, genetic, and ecological knowledge and its bearing on political behavior, public policy and ethics.
Society for Evolutionary Analysis in Law a scholarly association dedicated to fostering interdisciplinary exploration of issues at the intersection of law, biology, and evolutionary theory
The New England Institute for Cognitive Science and Evolutionary Psychology aims to foster research and education into the interdisciplinary nexus of cognitive science and evolutionary studies
The NorthEastern Evolutionary Psychology Society; regional society dedicated to encouraging scholarship and dialogue on the topic of evolutionary psychology
Feminist Evolutionary Psychology Society researchers that investigate the active role that females have had in human evolution
Journals
Evolutionary Psychology – free access online scientific journal
Evolution and Human Behavior – journal of the Human Behavior and Evolution Society
Evolutionary Psychological Science – An international, interdisciplinary forum for original research papers that address evolved psychology. Spans social and life sciences, anthropology, philosophy, criminology, law and the humanities.
Politics and the Life Sciences – an interdisciplinary peer-reviewed journal published by the Association for Politics and the Life Sciences
Human Nature: An Interdisciplinary Biosocial Perspective – advances the interdisciplinary investigation of the biological, social, and environmental factors that underlie human behavior. It focuses primarily on the functional unity in which these factors are continuously and mutually interactive. These include the evolutionary, biological, and sociological processes as they interact with human social behavior.
Biological Theory: Integrating Development, Evolution and Cognition – devoted to theoretical advances in the fields of biology and cognition, with an emphasis on the conceptual integration afforded by evolutionary and developmental approaches.
Evolutionary Anthropology
Behavioral and Brain Sciences – interdisciplinary articles in psychology, neuroscience, behavioral biology, cognitive science, artificial intelligence, linguistics and philosophy. About 30% of the articles have focused on evolutionary analyses of behavior.
Evolution and Development – research relevant to interface of evolutionary and developmental biology
The Evolutionary Review – Art, Science, and Culture
Videos
Brief video clip from the "Evolution" PBS Series
TED talk by Steven Pinker about his book The Blank Slate: The Modern Denial of Human Nature
RSA talk by evolutionary psychologist Robert Kurzban on modularity of mind, based on his book Why Everyone (Else) is a Hypocrite
Richard Dawkins' lecture on natural selection and evolutionary psychology
Evolutionary Psychology – Steven Pinker & Frans de Waal Audio recording
Stone Age Minds: A conversation with evolutionary psychologists Leda Cosmides and John Tooby
Margaret Mead and Samoa. Review of the nature versus nurture debate triggered by Mead's book "Coming of Age in Samoa."
"Evolutionary Psychology", In Our Time, BBC Radio 4 discussion with Janet Radcliffe Richards, Nicholas Humphrey and Steven Rose (November 2, 2000)
psychology | Evolutionary psychology | [
"Biology"
] | 16,133 | [
"Evolutionary biology"
] |
9,730 | https://en.wikipedia.org/wiki/Electron%20microscope | An electron microscope is a microscope that uses a beam of electrons as a source of illumination. Electron microscopes use electron optics that are analogous to the glass lenses of an optical light microscope to control the electron beam, for instance focusing it to produce magnified images or electron diffraction patterns. As the wavelength of an electron can be up to 100,000 times smaller than that of visible light, electron microscopes have a much higher resolution of about 0.1 nm, which compares to about 200 nm for light microscopes. Electron microscope may refer to:
Transmission electron microscopy (TEM) where swift electrons go through a thin sample
Scanning transmission electron microscopy (STEM) which is similar to TEM with a scanned electron probe
Scanning electron microscope (SEM) which is similar to STEM, but with thick samples
Electron microprobe similar to a SEM, but more for chemical analysis
Low-energy electron microscopy (LEEM), used to image surfaces
Photoemission electron microscopy (PEEM) which is similar to LEEM using electrons emitted from surfaces by photons
Additional details can be found in the above links. This article contains some general information mainly about transmission electron microscopes.
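The resolution advantage quoted in the introduction follows from the electron's de Broglie wavelength, which shrinks as the accelerating voltage rises. A minimal Python sketch, not from the original article, computing the relativistically corrected wavelength with CODATA constant values (the order of magnitude agrees with the "up to 100,000 times smaller" comparison above):

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(volts):
    """Relativistically corrected de Broglie wavelength (m) of an
    electron accelerated through the given potential in volts."""
    eV = e * volts
    return h / math.sqrt(2 * m0 * eV * (1 + eV / (2 * m0 * c**2)))

for kv in (100, 300):
    lam = electron_wavelength(kv * 1e3)
    ratio = 550e-9 / lam   # compare with 550 nm green light
    print(f"{kv} kV: {lam * 1e12:.2f} pm (~{ratio:,.0f}x shorter than green light)")
```

At 100 kV this gives about 3.7 pm, and at 300 kV about 2.0 pm, far below the wavelength of visible light.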
History
Many developments laid the groundwork of the electron optics used in microscopes. One significant step was the work of Hertz in 1883, who made a cathode-ray tube with electrostatic and magnetic deflection, demonstrating manipulation of the direction of an electron beam. Other steps were the focusing of electrons by an axial magnetic field by Emil Wiechert in 1899, improved oxide-coated cathodes, which produced more electrons, by Arthur Wehnelt in 1905, and the development of the electromagnetic lens in 1926 by Hans Busch. According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince him to build an electron microscope, for which Szilárd had filed a patent.
To this day the issue of who invented the transmission electron microscope is controversial. In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias (Professor of High Voltage Technology and Electrical Installations) appointed Max Knoll to lead a team of researchers to advance research on electron beams and cathode-ray oscilloscopes. The team consisted of several PhD students including Ernst Ruska. In 1931, Max Knoll and Ernst Ruska successfully generated magnified images of mesh grids placed over an anode aperture. The device used two magnetic lenses to achieve higher magnifications, making it the first electron microscope. (Max Knoll died in 1969, so did not receive a share of the 1986 Nobel prize for the invention of electron microscopes.)
Apparently independent of this effort was work at Siemens-Schuckert by Reinhold Rüdenberg. According to patent law (U.S. Patent No. 2058914 and 2070318, both filed in 1932), he is the inventor of the electron microscope, but it is not clear when he had a working instrument. He stated in a very brief article in 1932 that Siemens had been working on this for some years before the patents were filed in 1932, claiming that his effort was parallel to the university development. He died in 1961, so similar to Max Knoll, was not eligible for a share of the 1986 Nobel prize.
In the following year, 1933, Ruska and Knoll built the first electron microscope that exceeded the resolution of an optical (light) microscope. Four years later, in 1937, Siemens financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska, Ernst's brother, to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. Siemens produced the first commercial electron microscope in 1938. The first North American electron microscopes were constructed in the 1930s, at the Washington State University by Anderson and Fitzsimmons and at the University of Toronto by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus. Siemens produced a transmission electron microscope (TEM) in 1939. Although current transmission electron microscopes are capable of two million times magnification, as scientific instruments they remain similar but with improved optics.
In the 1940s, high-resolution electron microscopes were developed, enabling greater magnification and resolution. By 1965, Albert Crewe at the University of Chicago introduced the scanning transmission electron microscope using a field emission source, enabling scanning microscopes at high resolution. By the early 1980s improvements in mechanical stability as well as the use of higher accelerating voltages enabled imaging of materials at the atomic scale. In the 1980s, the field emission gun became common for electron microscopes, improving the image quality due to the additional coherence and lower chromatic aberrations. The 2000s were marked by advancements in aberration-corrected electron microscopy, allowing for significant improvements in resolution and clarity of images.
Types
Transmission electron microscope (TEM)
The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. An electron beam is produced by an electron gun, with the electrons typically having energies in the range 20 to 400 keV, focused by electromagnetic lenses, and transmitted through the specimen. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by lenses of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a detector. For example, the image may be viewed directly by an operator using a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. A high-resolution phosphor may also be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. Direct electron detectors have no scintillator and are directly exposed to the electron beam, which addresses some of the limitations of scintillator-coupled cameras.
The resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nano-technologies research and development.
Scanning transmission electron microscope (STEM)
The STEM rasters a focused incident probe across a specimen. The high resolution of the TEM is thus possible in STEM. The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical techniques, but also means that image data is acquired serially rather than in parallel.
Scanning electron microscope (SEM)
The SEM produces images by probing the specimen with a focused electron beam that is scanned across the specimen (raster scanning). When the electron beam interacts with the specimen, it loses energy by a variety of mechanisms. These interactions lead to, among other events, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission (cathodoluminescence) or X-ray emission, all of which provide signals carrying information about the properties of the specimen surface, such as its topography and composition. The image displayed by the SEM maps the varying intensity of any of these signals into the image at a position corresponding to the position of the beam on the specimen when the signal was generated.
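In other words, SEM image formation is a lookup from beam position to detector reading. A minimal schematic sketch of this mapping; the detector_signal callback is a hypothetical stand-in for the real detector electronics:

```python
import numpy as np

def acquire_sem_image(detector_signal, height=512, width=512):
    """Raster the beam over the specimen, storing one detector
    reading per pixel. `detector_signal(x, y)` is a hypothetical
    callback returning, e.g., the secondary-electron count at
    beam position (x, y)."""
    image = np.zeros((height, width))
    for y in range(height):       # slow scan axis
        for x in range(width):    # fast scan axis
            image[y, x] = detector_signal(x, y)
    return image
```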
SEMs are different from TEMs in that they use electrons with much lower energy, generally below 20 keV, while TEMs generally use electrons with energies in the range of 80-300 keV. Thus, the electron sources and optics of the two microscopes have different designs, and they are normally separate instruments.
Main operating modes
Diffraction contrast imaging
Diffraction contrast uses the variation in either or both the direction of diffracted electrons or their amplitude as the contrast mechanism.
Phase contrast imaging
Phase contrast imaging involves generating contrast, for instance around edges, by defocusing the microscope.
High resolution imaging
Chemical analysis
Electron diffraction
Transmission electron microscopes can be used in electron diffraction mode, where a map of the angles of the electrons leaving the sample is produced. The advantages of electron diffraction over X-ray crystallography are primarily in the size of the crystals. In X-ray crystallography, crystals are commonly visible to the naked eye and are generally hundreds of micrometers in length. In comparison, crystals for electron diffraction must be less than a few hundred nanometers in thickness, and have no lower size limit. Additionally, electron diffraction is done on a TEM, which can also be used to obtain many other types of information, rather than requiring a separate instrument.
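The geometry of such diffraction patterns follows the Bragg condition. A small sketch, with an illustrative lattice spacing chosen by us rather than taken from the article, showing why scattering angles in a TEM are only a fraction of a degree:

```python
import math

# Bragg condition: n * lam = 2 * d * sin(theta).
lam = 2.51e-12   # m, ~wavelength of 200 kV electrons (cf. the earlier sketch)
d = 2.0e-10      # m, an illustrative 0.2 nm lattice-plane spacing
theta = math.asin(lam / (2 * d))   # first-order (n = 1) Bragg angle
print(math.degrees(theta))         # ~0.36 degrees: diffraction angles are tiny
```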
Sample preparation
Samples for electron microscopes mostly cannot be observed directly. The samples need to be prepared to stabilize the sample and enhance contrast. Preparation techniques differ vastly in respect to the sample and its specific qualities to be observed as well as the specific microscope used.
Scanning Electron Microscope (SEM)
To prevent charging and enhance the signal in SEM, non-conductive samples (e.g. biological samples as in figure) can be sputter-coated in a thin film of metal.
Transmission electron microscope
Materials to be viewed in a transmission electron microscope (TEM) may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required:
Chemical fixation – for biological specimens this aims to stabilize the specimen's mobile macromolecular structure by chemical crosslinking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and lipids with osmium tetroxide.
Cryofixation – freezing a specimen so that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its native state. Methods to achieve this vitrification include plunge freezing rapidly in liquid ethane, and high pressure freezing. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS) and cryo-focused ion beam milling of lamellae, it is now possible to observe samples from virtually any biological specimen close to its native state.
Dehydration – replacement of water with organic solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins. See also freeze drying.
Embedding, biological specimens – after dehydration, tissue for observation in the transmission electron microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through a 'transition solvent' such as propylene oxide (epoxypropane) or acetone and then infiltrated with an epoxy resin such as Araldite, Epon, or Durcupan; tissues may also be embedded directly in water-miscible acrylic resin. After the resin has been polymerized (hardened) the sample is sectioned by ultramicrotomy and stained.
Embedding, materials – after embedding in resin, the specimen is usually ground and polished to a mirror-like finish using ultra-fine abrasives.
Freeze-fracture or freeze-etch – a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixation), then fractured by breaking (or by using a microtome) while maintained at liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the temperature to about −100 °C for several minutes to let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high vacuum evaporator. A second coat of carbon, evaporated perpendicular to the average surface plane, is often applied to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, then the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-floating replica is thoroughly washed free from residual chemicals, carefully fished up on fine grids, dried, then viewed in the TEM.
Freeze-fracture replica immunogold labeling (FRIL) – the freeze-fracture method has been modified to allow the identification of the components of the fracture face by immunogold labeling. Instead of removing all the underlying tissue of the thawed replica as the final step before viewing in the microscope the tissue thickness is minimized during or after the fracture process. The thin layer of tissue remains bound to the metal replica so it can be immunogold labeled with antibodies to the structures of choice. The thin layer of the original specimen on the replica with gold attached allows the identification of structures in the fracture plane. There are also related methods which label the surface of etched cells and other replica labeling variations.
Ion beam milling – thins samples until they are transparent to electrons by firing ions (typically argon) at the surface from an angle and sputtering material from the surface. A subclass of this is focused ion beam milling, where gallium ions are used to produce an electron transparent membrane or 'lamella' in a specific region of the sample, for example through a device within a microprocessor or a focused ion beam SEM. Ion beam milling may also be used for cross-section polishing prior to analysis of materials that are difficult to prepare using mechanical polishing.
Negative stain – suspensions containing nanoparticles or fine biological material (such as viruses and bacteria) are briefly mixed with a dilute solution of an electron-opaque solution such as ammonium molybdate, uranyl acetate (or formate), or phosphotungstic acid. This mixture is applied to an EM grid, pre-coated with a plastic film such as formvar, blotted, then allowed to dry. Viewing of this preparation in the TEM should be carried out without delay for best results. The method is important in microbiology for fast but crude morphological identification, but can also be used as the basis for high-resolution 3D reconstruction using EM tomography methodology when carbon films are used for support.
Sectioning – produces thin slices of the specimen, semitransparent to electrons. These can be cut using ultramicrotomy on an ultramicrotome with a glass or diamond knife to produce ultra-thin sections about 60–90 nm thick. Disposable glass knives are also used because they can be made in the lab and are much cheaper. Sections can also be created in situ by milling in a focused ion beam SEM, where the section is known as a lamella.
Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast between different structures, since many (especially biological) materials are nearly "transparent" to electrons (weak phase objects). In biology, specimens can be stained "en bloc" before embedding and also later after sectioning. Typically thin sections are stained for several minutes with an aqueous or alcoholic solution of uranyl acetate followed by aqueous lead citrate.
EM workflows
In their most common configurations, electron microscopes produce images with a single brightness value per pixel, with the results usually rendered in greyscale. However, often these images are then colourized through the use of feature-detection software, or simply by hand-editing using a graphics editor. This may be done to clarify structure or for aesthetic effect and generally does not add new information about the specimen.
Electron microscopes are now frequently used in more complex workflows, with each workflow typically using multiple technologies to enable more complex and/or more quantitative analyses of a sample. A few examples are outlined below, but this should not be considered an exhaustive list. The choice of workflow will be highly dependent on the application and the requirements of the corresponding scientific questions, such as resolution, volume, nature of the target molecule, etc.
For example, images from light and electron microscopy of the same region of a sample can be overlaid to correlate the data from the two modalities. This is commonly used to provide higher resolution contextual EM information about a fluorescently labelled structure. This correlative light and electron microscopy (CLEM) is one of a range of correlative workflows now available. Another example is high resolution mass spectrometry (ion microscopy), which has been used to provide correlative information about subcellular antibiotic localisation, data that would be difficult to obtain by other means.
The initial role of electron microscopes in imaging two-dimensional slices (TEM) or a specimen surface (SEM with secondary electrons) has also increasingly expanded into the depth of samples. An early example of these 'volume EM' workflows was simply to stack TEM images of serial sections cut through a sample. The next development was virtual reconstruction of a thick section (200–500 nm) volume by backprojection of a set of images taken at different tilt angles: TEM tomography.
Serial imaging for volume EM
To acquire volume EM datasets of larger depths than TEM tomography (micrometers or millimeters in the z axis), a series of images taken through the sample depth can be used. For example, ribbons of serial sections can be imaged in a TEM as described above, and when thicker sections are used, serial TEM tomography can be used to increase the z-resolution. More recently, back scattered electron (BSE) images can be acquired of a larger series of sections collected on silicon wafers, known as SEM array tomography. An alternative approach is to use BSE SEM to image the block surface instead of the section, after each section has been removed. By this method, an ultramicrotome installed in an SEM chamber can increase automation of the workflow; the specimen block is loaded in the chamber and the system programmed to continuously cut and image through the sample. This is known as serial block face SEM. A related method uses focused ion beam milling instead of an ultramicrotome to remove sections. In these serial imaging methods, the output is essentially a sequence of images through a specimen block that can be digitally aligned in sequence and thus reconstructed into a volume EM dataset. The increased volume available in these methods has expanded the capability of electron microscopy to address new questions, such as mapping neural connectivity in the brain, and membrane contact sites between organelles.
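The alignment step described above is, at its simplest, translation-only registration between consecutive sections. A minimal sketch using phase correlation, assuming NumPy and integer shifts only; production volume EM pipelines add feature-based registration and distortion correction:

```python
import numpy as np

def translation_offset(ref, img):
    """Integer (dy, dx) shift that best aligns `img` to `ref`,
    estimated by phase correlation (peak of the normalized
    cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map wrapped peak positions to signed shifts.
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def align_stack(slices):
    """Align each section to the previous one and stack into a volume."""
    volume = [slices[0]]
    for img in slices[1:]:
        dy, dx = translation_offset(volume[-1], img)
        volume.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return np.stack(volume)   # (z, y, x) volume EM dataset
```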
Disadvantages
Electron microscopes are expensive to build and maintain. Microscopes designed to achieve high resolutions must be housed in stable buildings (sometimes underground) with special services such as magnetic field canceling systems.
The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. An exception is liquid-phase electron microscopy using either a closed liquid cell or an environmental chamber, for example, in the environmental scanning electron microscope, which allows hydrated samples to be viewed in a low-pressure wet environment. Various techniques for in situ electron microscopy of gaseous samples have been developed.
Scanning electron microscopes operating in conventional high-vacuum mode usually image conductive specimens; therefore non-conductive materials require conductive coating (gold/palladium alloy, carbon, osmium, etc.). The low-voltage mode of modern microscopes makes possible the observation of non-conductive specimens without coating. Non-conductive materials can be imaged also by a variable pressure (or environmental) scanning electron microscope.
Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for example) require no special treatment before being examined in the electron microscope. Samples of hydrated materials, including almost all biological specimens, have to be prepared in various ways to stabilize them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may result in artifacts, but these can usually be identified by comparing the results obtained by using radically different specimen preparation methods. Since the 1980s, analysis of cryofixed, vitrified specimens has also become increasingly used by scientists, further confirming the validity of this technique.
See also
List of materials analysis methods
Electron diffraction
Electron energy loss spectroscopy (EELS)
Electron microscope images
Energy filtered transmission electron microscopy (EFTEM)
Environmental scanning electron microscope (ESEM)
Immune electron microscopy
In situ electron microscopy
Low-energy electron microscopy
Microscope image processing
Microscopy
Nanotechnology
Scanning confocal electron microscopy
Scanning electron microscope (SEM)
Thin section
Transmission Electron Aberration-Corrected Microscope
References
External links
An Introduction to Microscopy : resources for teachers and students
Cell Centered Database – Electron microscopy data
Science Aid: Electron Microscopy, by Kaden Park
Microscopes
Accelerator physics
Anatomical pathology
Pathology
German inventions
Protein imaging
20th-century inventions | Electron microscope | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,391 | [
"Electron",
"Biochemistry methods",
"Electron microscopy",
"Applied and interdisciplinary physics",
"Pathology",
"Measuring instruments",
"Microscopes",
"Experimental physics",
"Microscopy",
"Accelerator physics",
"Protein imaging"
] |
9,735 | https://en.wikipedia.org/wiki/Electromagnetic%20field | An electromagnetic field (also EM field) is a physical field, mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field.
Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave.
The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion.
The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics.
History
The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described his experiments rubbing animal fur on various materials such as amber, creating static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831.
In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J. Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics.
Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only the empirical findings like Faraday's and Ampere's laws combined with practical experience.
Mathematical description
There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as \(\mathbf{E}(x, y, z, t)\) (electric field) and \(\mathbf{B}(x, y, z, t)\) (magnetic field).
If only the electric field (\(\mathbf{E}\)) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (\(\mathbf{B}\)) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.
The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
Gauss's law: \(\nabla \cdot \mathbf{E} = \dfrac{\rho}{\varepsilon_0}\)
Gauss's law for magnetism: \(\nabla \cdot \mathbf{B} = 0\)
Faraday's law: \(\nabla \times \mathbf{E} = -\dfrac{\partial \mathbf{B}}{\partial t}\)
Ampère–Maxwell law: \(\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \dfrac{\partial \mathbf{E}}{\partial t}\)
where \(\rho\) is the charge density, which is a function of time and position, \(\varepsilon_0\) is the vacuum permittivity, \(\mu_0\) is the vacuum permeability, and \(\mathbf{J}\) is the current density vector, also a function of time and position. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
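As an illustration, a short sketch evaluating the Lorentz force \(\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})\) on a point charge; the field and velocity values below are arbitrary examples, not from the original article:

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """Force on a point charge q (C) moving at v (m/s) through
    fields E (V/m) and B (T): F = q * (E + v x B)."""
    return q * (E + np.cross(v, B))

# An electron moving along +x through a magnetic field along +z:
q_e = -1.602176634e-19                       # electron charge, C
F = lorentz_force(q_e,
                  E=np.array([0.0, 0.0, 0.0]),
                  v=np.array([1e6, 0.0, 0.0]),    # 1000 km/s along x
                  B=np.array([0.0, 0.0, 0.01]))   # 10 mT along z
print(F)   # ~[0, 1.6e-15, 0] N: the electron is deflected along +y
```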
When a field travels across different media, the behavior of the field changes according to the properties of the media.
Properties of the field
Electrostatics and magnetostatics
The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field,
\(\nabla \cdot \mathbf{E} = \dfrac{\rho}{\varepsilon_0}\) and \(\nabla \times \mathbf{E} = 0,\)
along with two formulae that involve the magnetic field:
\(\nabla \cdot \mathbf{B} = 0\) and \(\nabla \times \mathbf{B} = \mu_0 \mathbf{J}.\)
These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena.
Transformations of electromagnetic fields
Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a field must be present.
In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields.
Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently.
Reciprocal behavior of electric and magnetic fields
The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator.
Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor.
Behavior of the fields in the absence of charges or currents
Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where \(\rho\) and \(\mathbf{J}\) are zero – the electric and magnetic fields satisfy these electromagnetic wave equations:
\(\nabla^2 \mathbf{E} - \mu_0 \varepsilon_0 \dfrac{\partial^2 \mathbf{E}}{\partial t^2} = 0\) and \(\nabla^2 \mathbf{B} - \mu_0 \varepsilon_0 \dfrac{\partial^2 \mathbf{B}}{\partial t^2} = 0,\)
where \(c = 1/\sqrt{\mu_0 \varepsilon_0}\) is the speed of light.
James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.
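The coupled field updates behind this wave behavior can be demonstrated numerically. Below is a minimal one-dimensional finite-difference time-domain (FDTD) sketch in normalized units (grid spacing, time step, and c all set to 1), assuming a simple additive Gaussian source; a toy illustration, not a production solver:

```python
import numpy as np

n, steps = 400, 250
Ey = np.zeros(n)   # transverse electric field samples
Hz = np.zeros(n)   # transverse magnetic field samples (staggered grid)

for t in range(steps):
    # Faraday's law: the change in Hz is driven by the spatial
    # difference (discrete curl) of Ey on the staggered Yee grid.
    Hz[:-1] += Ey[1:] - Ey[:-1]
    # Ampere-Maxwell law with no currents: the change in Ey is
    # driven by the spatial difference of Hz.
    Ey[1:] += Hz[1:] - Hz[:-1]
    # Additive Gaussian source at cell 50, peaking at step 30.
    Ey[50] += np.exp(-((t - 30) / 10) ** 2)

# The right-going pulse travels one cell per step, so after 250 steps
# its peak should sit near cell 50 + (250 - 30) = 270.
print(np.argmax(Ey))
```

Alternately updating each field from the spatial variation of the other is exactly the mutual induction described above, and the pulse propagates at the (normalized) speed of light.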
Time-varying EM fields in Maxwell's equations
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.
A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.
A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.
Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.
Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies.
Health and safety
The potential effects of electromagnetic fields on human health vary widely depending on the frequency, intensity of the fields, and the length of the exposure. Low frequency, low intensity, and short duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, are known to cause significant harm in some circumstances.
See also
Classification of electromagnetic fields
Electric field
Electromagnetism
Electromagnetic propagation
Electromagnetic radiation
Electromagnetic spectrum
Electromagnetic field measurements
Magnetic field
Maxwell's equations
Photoelectric effect
Photon
Quantization of the electromagnetic field
Quantum electrodynamics
References
Citations
Sources
Further reading
(This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
External links
Electromagnetism | Electromagnetic field | [
"Physics"
] | 2,445 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
9,804 | https://en.wikipedia.org/wiki/Electric%20charge | Electric charge (symbol q, sometimes Q) is a physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property: the net charge of an isolated system, the quantity of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge, if there are fewer it will have a positive charge, and if there are equal numbers it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about 1.602×10⁻¹⁹ C, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of e/3, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e.
Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth.
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges.
Overview
Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, the quark, has fractional charges of either −e/3 or +2e/3, but it is believed quarks always occur in combinations whose total charge is an integral multiple of e; free-standing quarks have never been observed.
By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign.
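As a worked instance of Coulomb's law as stated above, the following sketch computes the force between a proton and an electron one ångström apart (constant values from CODATA; the sign convention makes attraction negative):

```python
k = 8.9875517923e9     # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
e = 1.602176634e-19    # elementary charge, C

def coulomb_force(q1, q2, r):
    """Electrostatic force between point charges q1, q2 (C) at
    separation r (m); negative values indicate attraction."""
    return k * q1 * q2 / r**2

# Proton and electron separated by 1 angstrom (1e-10 m):
print(coulomb_force(+e, -e, 1e-10))   # ~-2.3e-8 N, attractive
```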
The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral.
Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current.
Unit
The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
The elementary charge is defined as a fundamental constant in the SI. The value for elementary charge, when expressed in SI units, is exactly e = 1.602176634×10⁻¹⁹ C.
After discovering the quantized character of charge, in 1891, George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge or fundamental unit of charge, or simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. the Avogadro constant multiplied by the elementary charge: approximately 9.648533×10⁴ C.
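For concreteness, the relations among these units follow directly from the definitions. The short Python check below uses only standard SI values; nothing in it is specific to this article:

```python
# Standard relations among units of electric charge (SI values).
E = 1.602176634e-19    # elementary charge in coulombs (exact in the SI)
N_A = 6.02214076e23    # Avogadro constant per mole (exact in the SI)

print(f"1 A*h     = {1 * 3600:.0f} C")                 # 1 A flowing for 3600 s
print(f"1 faraday = {N_A * E:.3f} C")                  # ~96485.332 C
print(f"1 C       = {1 / E:.4e} elementary charges")   # ~6.2415e18
```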
History
From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account of amber is known only from a report written in the early 200s AD. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, but there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect.
In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon.
In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electrics. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge".
Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Another European pioneer was Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies.
In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia.
Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids, and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745).
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium.
Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter and coined much of the field's vocabulary (battery, among other terms); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged, and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). After seeing Franklin's letter to Collinson, Watson claimed that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward.
It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge.
Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path.
In 1833, Michael Faraday sought to remove any doubt that electricity is identical regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity).
In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body.
In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state.
In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stopped considering electric charge as a special substance that accumulates in objects, and began to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered the magnitude of electric charge to be a continuous quantity, even at the microscopic level.
Role of charge in static electricity
Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects.
Electrification by sliding
When a piece of glass and a piece of resin—neither of which exhibits any electrical properties—are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other.
A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin, causes these phenomena:
The two pieces of glass repel each other.
Each piece of glass attracts each piece of resin.
The two pieces of resin repel each other.
This attraction and repulsion is an electrical phenomenon, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts.
If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified.
An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention—just as it is a matter of convention in a mathematical diagram to reckon positive distances towards the right hand.
Role of charge in electric current
Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners.
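A standard worked example makes the contrast vivid. The Python sketch below computes the electron drift speed in a copper wire; the 1 A current and 1 mm² cross-section are assumed illustrative values, and the carrier density is the usual textbook figure for copper:

```python
# Drift velocity of conduction electrons in a wire: v_d = I / (n * A * q).
I = 1.0              # current in amperes (assumed for the example)
A = 1.0e-6           # cross-sectional area in m^2 (1 mm^2, assumed)
n = 8.5e28           # conduction electrons per m^3 in copper (textbook value)
q = 1.602176634e-19  # elementary charge in coulombs

v_d = I / (n * A * q)
print(f"drift speed ~ {v_d:.2e} m/s")  # ~7.3e-5 m/s: the electrons creep
# along far slower than the signal, and opposite to the conventional current.
```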
Conservation of electric charge
The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I:

$$-\frac{d}{dt}\int_V \rho \, dV = \oint_S \mathbf{J} \cdot d\mathbf{S} = I$$

Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:

$$I = -\frac{dq}{dt}$$

The charge transferred between times $t_i$ and $t_f$ is obtained by integrating both sides:

$$q = \int_{t_i}^{t_f} I \, dt$$

where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface.
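As a numerical check of the last relation, here is a short self-contained sketch that integrates an invented current waveform to recover the transferred charge:

```python
# Numerically integrate a current waveform to get the transferred charge,
# q = integral of I dt. The waveform below is invented for illustration.
import math

def I(t):
    return 2.0 * math.sin(math.pi * t)   # current in amperes

t_i, t_f, n = 0.0, 1.0, 100_000
dt = (t_f - t_i) / n
# midpoint-rule approximation of the integral
q = sum(I(t_i + (k + 0.5) * dt) for k in range(n)) * dt
print(f"q = {q:.6f} C")   # analytic value: 4/pi ~ 1.273240 C
```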
Relativistic invariance
Aside from the properties described in articles about electromagnetism, electric charge is a relativistic invariant. This means that any particle that has electric charge q has the same electric charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the electric charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as that of two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).
See also
SI electromagnetism units
Color charge
Partial charge
Positron – the antiparticle or antimatter counterpart of the electron
References
External links
How fast does a charge decay?
Chemical properties
Conservation laws
Electricity
Flavour (particle physics)
Spintronics
Electromagnetic quantities | Electric charge | ["Physics", "Chemistry", "Materials_science", "Mathematics"] | 4,614 | ["Electromagnetic quantities", "Physical quantities", "Electric charge", "Equations of physics", "Conservation laws", "Spintronics", "Quantity", "Condensed matter physics", "nan", "Wikipedia categories named after physical quantities", "Symmetry", "Physics theorems"] |
9,813 | https://en.wikipedia.org/wiki/Extinction%20event | An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp fall in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the background extinction rate and the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from disagreement as to what constitutes a "major" extinction event, and the data chosen to measure past diversity.
The "Big Five" mass extinctions
In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five particular geological intervals with excessive diversity loss. They were originally identified as outliers on a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that in the current, Phanerozoic Eon, multicellular animal life has experienced at least five major and many minor mass extinctions. The "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events. All five of these Phanerozoic events were preceded, far earlier, by the presumed much more extensive mass extinction of microbial life during the Great Oxidation Event (a.k.a. Oxygen Catastrophe) early in the Proterozoic Eon. At the end of the Ediacaran and just before the Cambrian explosion, yet another Proterozoic extinction event (of unknown magnitude) is speculated to have ushered in the Phanerozoic.
Despite the common presentation focusing only on these five events, no measure of extinction shows any definite line separating them from the many other Phanerozoic extinction events that appear to be only slightly lesser catastrophes; further, using different methods of calculating an extinction's impact can lead to other events featuring in the top five.
Fossil records of older events are more difficult to interpret. This is because:
Older fossils are more difficult to find, as they are usually buried at a considerable depth.
Dating of older fossils is more difficult.
Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched.
Prehistoric environmental events can disturb the deposition process.
Marine fossils tend to be better preserved than their more sought-after land-based counterparts, but the deposition and preservation of fossils on land is more erratic.
It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increase in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias.
Sixth mass extinction
Research completed after the seminal 1982 paper of Sepkoski and Raup has concluded that a sixth mass extinction event, driven by human activities, is currently under way.
Extinctions by severity
Extinction events can be tracked by several methods, including geological change, ecological impact, extinction vs. origination (speciation) rates, and most commonly diversity loss among taxonomic units. Most early papers used families as the unit of taxonomy, based on compendiums of marine animal families by Sepkoski (1982, 1992). Later papers by Sepkoski and other authors switched to genera, which are more precise than families and less prone to taxonomic bias or incomplete sampling relative to species. Several major papers have estimated loss or ecological impact for fifteen commonly discussed extinction events. Different methods used by these papers are described in the following section.
(Footnotes to a table of severity estimates that does not survive in this extract; the "Big Five" were highlighted in the original table:)
Graphed but not discussed by Sepkoski (1996); considered continuous with the Late Devonian mass extinction.
At the time, considered continuous with the end-Permian mass extinction.
Includes late Norian time slices.
Diversity loss of both pulses calculated together.
Pulses extend over adjacent time slices, calculated separately.
Considered ecologically significant, but not analyzed directly.
Excluded due to a lack of consensus on Late Triassic chronology.
The study of major extinction events
Breakthrough studies in the 1980s–1990s
For much of the 20th century, the study of mass extinctions was hampered by insufficient data. Mass extinctions, though acknowledged, were considered mysterious exceptions to the prevailing gradualistic view of prehistory, where slow evolutionary trends define faunal changes. The first breakthrough was published in 1980 by a team led by Luis Alvarez, who discovered trace metal evidence for an asteroid impact at the end of the Cretaceous period. The Alvarez hypothesis for the end-Cretaceous extinction gave mass extinctions, and catastrophic explanations, newfound popular and scientific attention.
Another landmark study came in 1982, when a paper written by David M. Raup and Jack Sepkoski was published in the journal Science. This paper, originating from a compendium of extinct marine animal families developed by Sepkoski, identified five peaks of marine family extinctions which stand out against a backdrop of decreasing extinction rates through time. Four of these peaks were statistically significant: the Ashgillian (end-Ordovician), Late Permian, Norian (end-Triassic), and Maastrichtian (end-Cretaceous). The remaining peak was a broad interval of high extinction smeared over the latter half of the Devonian, with its apex in the Frasnian stage.
Through the 1980s, Raup and Sepkoski continued to elaborate and build upon their extinction and origination data, defining a high-resolution biodiversity curve (the "Sepkoski curve") and successive evolutionary faunas with their own patterns of diversification and extinction. Though these interpretations formed a strong basis for subsequent studies of mass extinctions, Raup and Sepkoski also proposed a more controversial idea in 1984: a 26-million-year periodic pattern to mass extinctions. Two teams of astronomers linked this to a hypothetical brown dwarf in the distant reaches of the solar system, inventing the "Nemesis hypothesis" which has been strongly disputed by other astronomers.
Around the same time, Sepkoski began to devise a compendium of marine animal genera, which would allow researchers to explore extinction at a finer taxonomic resolution. He began to publish preliminary results of this in-progress study as early as 1986, in a paper which identified 29 extinction intervals of note. By 1992, he also updated his 1982 family compendium, finding minimal changes to the diversity curve despite a decade of new data. In 1996, Sepkoski published another paper which tracked marine genera extinction (in terms of net diversity loss) by stage, similar to his previous work on family extinctions. The paper filtered its sample in three ways: all genera (the entire unfiltered sample size), multiple-interval genera (only those found in more than one stage), and "well-preserved" genera (excluding those from groups with poor or understudied fossil records). Diversity trends in marine animal families were also revised based on his 1992 update.
Revived interest in mass extinctions led many other authors to re-evaluate geological events in the context of their effects on life. A 1995 paper by Michael Benton tracked extinction and origination rates among both marine and continental (freshwater & terrestrial) families, identifying 22 extinction intervals and no periodic pattern. Overview books by O.H. Walliser (1996) and A. Hallam and P.B. Wignall (1997) summarized the new extinction research of the previous two decades. One chapter in the former source lists over 60 geological events which could conceivably be considered global extinctions of varying sizes. These texts, and other widely circulated publications in the 1990s, helped to establish the popular image of mass extinctions as a "big five" alongside many smaller extinctions through prehistory.
New data on genera: Sepkoski's compendium
Though Sepkoski died in 1999, his marine genera compendium was formally published in 2002. This prompted a new wave of studies into the dynamics of mass extinctions. These papers utilized the compendium to track origination rates (the rate that new species appear or speciate) parallel to extinction rates in the context of geological stages or substages. A review and re-analysis of Sepkoski's data by Bambach (2006) identified 18 distinct mass extinction intervals, including 4 large extinctions in the Cambrian. These fit Sepkoski's definition of extinction, as short substages with large diversity loss and overall high extinction rates relative to their surroundings.
Bambach et al. (2004) considered each of the "Big Five" extinction intervals to have a different pattern in the relationship between origination and extinction trends. Moreover, background extinction rates were broadly variable and could be separated into more severe and less severe time intervals. Background extinctions were least severe relative to the origination rate in the middle Ordovician-early Silurian, late Carboniferous-Permian, and Jurassic-recent. This argues that the Late Ordovician, end-Permian, and end-Cretaceous extinctions were statistically significant outliers in biodiversity trends, while the Late Devonian and end-Triassic extinctions occurred in time periods which were already stressed by relatively high extinction and low origination.
Computer models run by Foote (2005) determined that abrupt pulses of extinction fit the pattern of prehistoric biodiversity much better than a gradual and continuous background extinction rate with smooth peaks and troughs. This strongly supports the utility of rapid, frequent mass extinctions as a major driver of diversity changes. Pulsed origination events are also supported, though to a lesser degree which is largely dependent on pulsed extinctions.
Similarly, Stanley (2007) used extinction and origination data to investigate turnover rates and extinction responses among different evolutionary faunas and taxonomic groups. In contrast to previous authors, his diversity simulations show support for an overall exponential rate of biodiversity growth through the entire Phanerozoic.
Tackling biases in the fossil record
As data continued to accumulate, some authors began to re-evaluate Sepkoski's sample using methods meant to account for sampling biases. As early as 1982, a paper by Phillip W. Signor and Jere H. Lipps noted that the true sharpness of extinctions was diluted by the incompleteness of the fossil record. This phenomenon, later called the Signor-Lipps effect, notes that a species' true extinction must occur after its last fossil, and that origination must occur before its first fossil. Thus, species which appear to die out just prior to an abrupt extinction event may instead be victims of the event, even though the fossil record alone suggests a gradual decline. A model by Foote (2007) found that many geological stages had artificially inflated extinction rates due to Signor-Lipps "backsmearing" from later stages with extinction events.
Other biases include the difficulty in assessing taxa with high turnover rates or restricted occurrences, which cannot be directly assessed due to a lack of fine-scale temporal resolution. Many paleontologists opt to assess diversity trends by randomized sampling and rarefaction of fossil abundances rather than raw temporal range data, in order to account for all of these biases. But that solution is influenced by biases related to sample size. One major bias in particular is the "Pull of the recent", the fact that the fossil record (and thus known diversity) generally improves closer to the modern day. This means that biodiversity and abundance for older geological periods may be underestimated from raw data alone.
Alroy (2010) attempted to circumvent sample size-related biases in diversity estimates using a method he called "shareholder quorum subsampling" (SQS). In this method, fossils are sampled from a "collection" (such as a time interval) to assess the relative diversity of that collection. Every time a new species (or other taxon) enters the sample, it brings over all other fossils belonging to that species in the collection (its "share" of the collection). For example, a skewed collection with half its fossils from one species will immediately reach a sample share of 50% if that species is the first to be sampled. This continues, adding up the sample shares until a "coverage" or "quorum" is reached, referring to a pre-set desired sum of share percentages. At that point, the number of species in the sample are counted. A collection with more species is expected to reach a sample quorum with more species, thus accurately comparing the relative diversity change between two collections without relying on the biases inherent to sample size.
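The sampling procedure described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Alroy's published implementation; the two collections and the 0.6 quorum are invented for the example:

```python
# Simplified sketch of shareholder quorum subsampling (SQS): draw fossil
# occurrences at random, and every time a NEW species enters the sample,
# add that species' share of the whole collection to the coverage. Stop
# once coverage reaches the quorum, then report how many species were seen.
import random
from collections import Counter

def sqs_richness(occurrences, quorum=0.6, seed=0):
    rng = random.Random(seed)
    counts = Counter(occurrences)
    total = len(occurrences)
    shares = {sp: n / total for sp, n in counts.items()}  # each species' share
    pool = occurrences[:]
    rng.shuffle(pool)
    seen, coverage = set(), 0.0
    for fossil in pool:
        if fossil not in seen:
            seen.add(fossil)
            coverage += shares[fossil]
            if coverage >= quorum:
                break
    return len(seen)

# Invented collections: B is more diverse than A at the same quorum.
coll_a = ["sp1"] * 50 + ["sp2"] * 30 + ["sp3"] * 20
coll_b = ["sp%d" % i for i in range(10) for _ in range(10)]
print(sqs_richness(coll_a), sqs_richness(coll_b))
```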
Alroy also elaborated on three-timer algorithms, which are meant to counteract biases in estimates of extinction and origination rates. A given taxon is a "three-timer" if it can be found before, after, and within a given time interval, and a "two-timer" if it overlaps with a time interval on one side. Counting "three-timers" and "two-timers" on either end of a time interval, and sampling time intervals in sequence, can together be combined into equations to predict extinction and origination with less bias. In subsequent papers, Alroy continued to refine his equations to improve lingering issues with precision and unusual samples.
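Counting these categories is mechanical once taxa are scored for presence around an interval. The sketch below uses the definitions from the paragraph above and a hypothetical presence table; the sampling-completeness estimate Ps = 3T/(3T + PT) follows the general shape of Alroy's approach, while his full corrected rate equations are omitted here:

```python
# Count "three-timers" (sampled before, within, and after an interval) and
# "part-timers" (sampled before and after, but missed within), following the
# definitions in the text. Their ratio estimates sampling completeness.
# The presence table below is hypothetical.
presence = {                       # taxon -> sampled in (before, within, after)?
    "taxonA": (True,  True,  True),    # three-timer
    "taxonB": (True,  False, True),    # part-timer (missed within the interval)
    "taxonC": (True,  True,  False),   # two-timer (overlaps the bottom only)
    "taxonD": (False, True,  True),    # two-timer (overlaps the top only)
}

three_t = sum(b and w and a for b, w, a in presence.values())
part_t  = sum(b and a and not w for b, w, a in presence.values())
ps = three_t / (three_t + part_t)      # estimated sampling completeness
print(f"3T={three_t}, PT={part_t}, Ps={ps:.2f}")
```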
McGhee et al. (2013), a paper which primarily focused on ecological effects of mass extinctions, also published new estimates of extinction severity based on Alroy's methods. Many extinctions were significantly more impactful under these new estimates, though some were less prominent.
Stanley (2016) was another paper which attempted to remove two common errors in previous estimates of extinction severity. The first error was the unjustified removal of "singletons", genera unique to only a single time slice. Their removal would mask the influence of groups with high turnover rates or lineages cut short early in their diversification. The second error was the difficulty in distinguishing background extinctions from brief mass extinction events within the same short time interval. To circumvent this issue, background rates of diversity change (extinction/origination) were estimated for stages or substages without mass extinctions, and then assumed to apply to subsequent stages with mass extinctions. For example, the Santonian and Campanian stages were each used to estimate diversity changes in the Maastrichtian prior to the K-Pg mass extinction. Subtracting background extinctions from extinction tallies had the effect of reducing the estimated severity of the six sampled mass extinction events. This effect was stronger for mass extinctions which occurred in periods with high rates of background extinction, like the Devonian.
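The correction itself reduces to simple bookkeeping. Here is a minimal sketch, with all numbers invented to show the arithmetic rather than to reproduce Stanley's estimates:

```python
# Background-subtraction bookkeeping in the spirit of Stanley (2016):
# estimate a per-stage background extinction fraction from adjacent stages
# without mass extinctions, then subtract the expected background losses
# from the tally of the stage containing the event. Numbers are invented.
genera_at_stage_start = 1000
observed_losses = 450            # total genera lost during the stage
background_fraction = 0.10       # inferred from adjacent quiet stages

expected_background = genera_at_stage_start * background_fraction  # 100
event_losses = observed_losses - expected_background               # 350
print(f"attributable to the event: {event_losses:.0f} genera "
      f"({event_losses / genera_at_stage_start:.0%})")
```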
Uncertainty in the Proterozoic and earlier eons
Because most diversity and biomass on Earth is microbial, and thus difficult to measure via fossils, extinction events on record are those that affect the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. For this reason, well-documented extinction events are confined to the Phanerozoic eon – with the sole exception of the Oxygen Catastrophe in the Proterozoic – since before the Phanerozoic, all living organisms were either microbial, or if multicellular then soft-bodied. Perhaps due to the absence of a robust microbial fossil record, mass extinctions might only seem to be a mainly Phanerozoic phenomenon, with observable extinction rates merely appearing low before large complex organisms with hard body parts arose.
Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years.
The Oxygen Catastrophe, which occurred around 2.45 billion years ago in the Paleoproterozoic, is plausible as the first-ever major extinction event. It was perhaps also the worst-ever, in some sense, but with the Earth's ecology just before that time so poorly understood, and the concept of prokaryote genera so different from genera of complex life, that it would be difficult to meaningfully compare it to any of the "Big Five" even if Paleoproterozoic life were better known.
Since the Cambrian explosion, five further major mass extinctions have significantly exceeded the background extinction rate. The most recent and best-known, the Cretaceous–Paleogene extinction event, which occurred approximately 66 Ma (million years ago), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major Phanerozoic mass extinctions, there are numerous lesser ones, and the ongoing mass extinction caused by human activity is sometimes called the sixth mass extinction.
Evolutionary importance
Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old but usually because an extinction event eliminates the old, dominant group and makes way for the new one, a process known as adaptive radiation.
For example, mammaliaformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans. Similarly, within Synapsida, the replacement of taxa that originated in the earliest (Pennsylvanian and Cisuralian) evolutionary radiation (often still called "pelycosaurs", though this is a paraphyletic group) by therapsids occurred around the Kungurian/Roadian transition; this is often called Olson's extinction, which may have been a slow decline over 20 Ma rather than a dramatic, brief event.
Another point of view put forward in the Escalation hypothesis predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event.
Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity, and many of these go into long-term decline; such groups are often referred to as "Dead Clades Walking".
However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past".
Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the 'struggle for existence' – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species:
"Species are produced and exterminated by slowly acting causes ... and the most import of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others".
Patterns in frequency
Various authors have suggested that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years. Various ideas, mostly regarding astronomical influences, attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious or lacking statistical significance. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables such as Strontium isotopes, flood basalts, anoxic events, orogenies, and evaporite deposition. One explanation for this proposed cycle is carbon storage and release by oceanic crust, which exchanges carbon between the atmosphere and mantle.
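Claims like these are evaluated with spectral methods. Below is a toy example in Python using an entirely synthetic series with a built-in 26-million-year cycle; real analyses must also contend with uneven stage lengths, dating error, and autocorrelation, all of which this sketch ignores:

```python
# Toy spectral test for periodicity in an extinction-intensity series.
# Synthetic data: a 26-Myr sine cycle plus Gaussian noise, sampled every 1 Myr.
import math, cmath, random

random.seed(1)
dt = 1.0                                   # Myr between samples
series = [math.sin(2 * math.pi * t / 26.0) + random.gauss(0, 0.5)
          for t in range(312)]

n = len(series)
best_period, best_power = None, 0.0
for k in range(1, n // 2):                 # discrete Fourier transform by hand
    z = sum(series[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    power = abs(z) ** 2
    if power > best_power:
        best_power, best_period = power, n * dt / k
print(f"dominant period ~ {best_period:.1f} Myr")   # expect ~26.0 Myr
```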
Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time.
It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable.
Causes
There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed.
Identifying causes of specific mass extinctions
A good theory for a particular mass extinction should:
explain all of the losses, not just focus on a few groups (such as dinosaurs);
explain why particular groups of organisms died out and why others survived;
provide mechanisms that are strong enough to cause a mass extinction but not a total extinction;
be based on events or processes that can be shown to have happened, not just inferred from the extinction.
It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world.
Arens and West (2006) proposed a "press / pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the eco-system ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure.
Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate.
Most widely supported explanations
MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. (1996), Hallam (1992) and Grieve & Pesonen (1992):
Flood basalt events (giant volcanic eruptions): 11 occurrences, all associated with significant extinctions. However, Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions.
Sea-level falls: 12, of which seven were associated with significant extinctions.
Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions, or cannot be dated precisely enough. The impact that created the Siljan Ring either was just before the Late Devonian Extinction or coincided with it.
The most commonly suggested causes of mass extinctions are listed below.
Flood basalt events
The formation of large igneous provinces by flood basalt events could have:
produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea
emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains
emitted carbon dioxide, thus possibly causing sustained global warming once the dust and particulate aerosols dissipated.
Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years.
Flood basalt events have been implicated as the cause of many major extinction events. It is speculated that massive volcanism caused or contributed to the Kellwasser Event, the End-Guadalupian Extinction Event, the End-Permian Extinction Event, the Smithian-Spathian Extinction, the Triassic-Jurassic Extinction Event, the Toarcian Oceanic Anoxic Event, the Cenomanian-Turonian Oceanic Anoxic Event, the Cretaceous-Palaeogene Extinction Event, and the Palaeocene-Eocene Thermal Maximum. The correlation between gigantic volcanic events expressed in the large igneous provinces and mass extinctions was shown for the last 260 million years. Recently, such a possible correlation has been extended across the whole Phanerozoic Eon.
Sea-level fall
These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges.
Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five"—End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous, along with the more recently recognised Capitanian mass extinction of comparable severity to the Big Five.
A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans.
Extraterrestrial threats
Impact events
The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires.
Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is lingering dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Cretaceous Chicxulub asteroid impact that resulted in the extinction of non-avian dinosaurs 66 Ma also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction.
The Permian-Triassic extinction event has also been hypothesised to have been caused by an asteroid impact that formed the Araguainha crater due to the estimated date of the crater's formation overlapping with the end-Permian extinction event. However, this hypothesis has been widely challenged, with the impact hypothesis being rejected by most researchers.
According to the Shiva hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27 million year intervals. Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher density spiral arms of the galaxy could coincide with mass extinction on Earth, perhaps due to increased impact events. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on maps of the spiral structure of the Milky Way in CO molecular line emission has failed to find a correlation.
A nearby nova, supernova or gamma ray burst
A nearby gamma-ray burst (less than 6000 light-years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the Sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years.
It has been suggested that a gamma-ray burst caused the End-Ordovician extinction, while a supernova has been proposed as the cause of the Hangenberg event. A supernova within 25 light-years would strip Earth of its atmosphere. Today, however, no star in the Solar System's neighbourhood is capable of producing a supernova dangerous to life on Earth.
Global cooling
Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction.
It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts.
Global warming
This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below).
Global warming as a cause of mass extinction is supported by several recent studies.
The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of all marine families became extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming.
Clathrate gun hypothesis
Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly – for example in response to sudden global warming or a sudden drop in sea level or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming.
The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13.
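The ratio change mentioned here is conventionally reported in delta notation. The definition below is the standard one and is not specific to any particular study:

$$\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1\right) \times 1000\ \text{‰}$$

Because biogenic methane is strongly depleted in carbon-13, a large clathrate release drives sedimentary δ13C sharply negative; as noted, though, the excursion must be very large before it can be attributed to methane rather than to other causes.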
It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions.
Anoxic events
Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism.
It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Capitanian, Permian–Triassic, and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Lundgreni, Mulde, Lau, Smithian-Spathian, Toarcian, and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous that indicate anoxic events but are not associated with mass extinctions.
Declines in the bio-availability of essential trace elements (in particular selenium) to potentially lethal lows have been shown to coincide with, and likely have contributed to, at least three mass extinction events in the oceans: at the end of the Ordovician, during the Middle and Late Devonian, and at the end of the Triassic. During periods of low oxygen concentrations, very soluble selenate (Se6+) is converted into much less soluble selenide (Se2−), elemental Se, and organo-selenium complexes. Bio-availability of selenium during these extinction events dropped to about 1% of the current oceanic concentration, a level that has been proven lethal to many extant organisms.
British oceanologist and atmospheric scientist, Andrew Watson, explained that, while the Holocene epoch exhibits many processes reminiscent of those that have contributed to past anoxic events, full-scale ocean anoxia would take "thousands of years to develop".
Hydrogen sulfide emissions from the seas
Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide, which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation.
Oceanic overturn
Oceanic overturn is a disruption of thermo-haline circulation that lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms that inhabit the surface and middle depths. It may occur either at the beginning or the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water.
Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events.
It has been suggested that oceanic overturn caused or contributed to the late Devonian and Permian–Triassic extinctions.
Geomagnetic reversal
One theory is that periods of increased geomagnetic reversals would weaken Earth's magnetic field long enough to expose the atmosphere to the solar wind, causing oxygen ions to escape the atmosphere at a rate increased by 3 to 4 orders of magnitude, resulting in a disastrous decrease in oxygen.
Plate tectonics
Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges that expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent that includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior that may have extreme seasonal variations.
Another theory is that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and the "Marine genus diversity" diagram at the top of this article shows a level of extinction starting at that time, which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian.
Other hypotheses
Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms that are contrary to the available evidence; they are based on other theories that have been rejected or superseded.
Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with human-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss. A study published in May 2017 in Proceedings of the National Academy of Sciences argued that a "biological annihilation" akin to a sixth mass extinction event is underway as a result of anthropogenic causes, such as over-population and over-consumption. The study suggested that as much as 50% of all animal individuals that once lived on Earth are already extinct, threatening the basis for human existence as well.
Future biosphere extinction/sterilization
The eventual warming and expanding of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could actually cause an even greater mass extinction, having the potential to wipe out even microbes (in other words, the Earth would be completely sterilized): rising global temperatures caused by the expanding Sun would gradually increase the rate of weathering, which would in turn remove more and more CO2 from the atmosphere. When CO2 levels get too low (perhaps at 50 ppm), most plant life will die out, although simpler plants like grasses and mosses can survive much longer, until levels drop to 10 ppm.
With all photosynthetic organisms gone, atmospheric oxygen can no longer be replenished, and it is eventually removed by chemical reactions in the atmosphere, perhaps with gases from volcanic eruptions. Eventually the loss of oxygen will cause all remaining aerobic life to die out via asphyxiation, leaving behind only simple anaerobic prokaryotes. When the Sun becomes 10% brighter in about a billion years, Earth will suffer a moist greenhouse effect resulting in its oceans boiling away, while the Earth's liquid outer core cools as the inner core expands, causing the Earth's magnetic field to shut down. In the absence of a magnetic field, charged particles from the Sun will deplete the atmosphere and further increase the Earth's temperature to an average of around 420 K (147 °C, 296 °F) in 2.8 billion years, causing the last remaining life on Earth to die out. This is the most extreme instance of a climate-caused extinction event. Since this will only happen late in the Sun's life, it would represent the final mass extinction in Earth's history (albeit a very long extinction event).
Effects and recovery
The effects of mass extinction events varied widely. After a major extinction event, usually only weedy species survive due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, it takes millions of years for biodiversity to recover after extinction events. In the most severe mass extinctions it may take 15 to 30 million years.
The worst Phanerozoic event, the Permian–Triassic extinction, devastated life on Earth, killing over 90% of species. Life seemed to recover quickly after the P-T extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction that inhibited recovery, as well as prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction;
and some writers estimate that the recovery was not complete until 30 million years after the P-T extinction, that is, in the late Triassic. Subsequent to the P-T extinction, there was an increase in provincialization, with species occupying smaller ranges – perhaps removing incumbents from niches and setting the stage for an eventual rediversification.
The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora.
In media
The term extinction level event (ELE) has been used in the media. The 1998 film Deep Impact describes a potential comet strike on Earth as an E.L.E.
See also
Bioevent
Elvis taxon
Endangered species
Geologic time scale
Global catastrophic risk
Holocene extinction
Human extinction
Kačák Event
Lazarus taxon
List of impact craters on Earth
List of largest volcanic eruptions
List of possible impact structures on Earth
Medea hypothesis
Rare species
Signor–Lipps effect
Snowball Earth
Speculative evolution
The Sixth Extinction: An Unnatural History (nonfiction book)
Timeline of extinctions in the Holocene
Quaternary extinction event
Footnotes
References
Further reading
Edmeades B (2021). Megafauna: First Victims of the Human-Caused Extinction. Houndstooth Press. ISBN 978-1-5445-2651-5.
External links
– nonprofit organization producing a documentary about mass extinction titled "Call of Life: Facing the Mass Extinction"
– Calculate extinction rates for yourself!
History of climate variability and change
Evolutionary biology
Meteorological hypotheses
Natural disasters | Extinction event | [
"Physics",
"Astronomy",
"Biology"
] | 8,873 | [
"Evolutionary biology",
"Physical phenomena",
"Astronomical hypotheses",
"Evolution of the biosphere",
"Weather",
"Extinction events",
"Hypothetical impact events",
"Natural disasters",
"Biological hypotheses"
] |
9,908 | https://en.wikipedia.org/wiki/Equation%20of%20state | In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars.
Overview
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. One example of an equation of state, the ideal gas law, correlates densities of gases to temperatures and pressures; it is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as

$$f(p, V, T) = 0,$$

where $p$ is the pressure, $V$ the volume, and $T$ the temperature of the system. Other state variables may also be used in that form. The form is directly related to the Gibbs phase rule; that is, the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields, such as process engineering and the petroleum and pharmaceutical industries.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
$n$, number of moles of a substance
$V_m$, $V/n$, molar volume, the volume of 1 mole of gas or liquid
$R$, ideal gas constant ≈ 8.3144621 J/(mol·K)
$p_c$, pressure at the critical point
$V_c$, molar volume at the critical point
$T_c$, absolute temperature at the critical point
Historical background
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as

$$pV = \mathrm{constant}.$$

The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676.
In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:

$$\frac{V_1}{T_1} = \frac{V_2}{T_2}.$$

Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone.
Mathematically, this can be represented for $n$ species as

$$p_\text{total} = p_1 + p_2 + \cdots + p_n = \sum_{i=1}^{n} p_i.$$

In 1834, Émile Clapeyron combined Boyle's law and Charles's law into the first statement of the ideal gas law. Initially, the law was formulated as $pV_m = R(T_C + 267)$ (with temperature expressed in degrees Celsius), where $R$ is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and the Celsius scale was then defined with $0\,^\circ\mathrm{C} = 273.15\,\mathrm{K}$, giving

$$pV_m = R(T_C + 273.15).$$

In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong.
The van der Waals equation of state can be written as

$$\left(p + \frac{a}{V_m^2}\right)\left(V_m - b\right) = RT,$$

where $a$ is a parameter describing the attractive energy between particles and $b$ is a parameter describing the volume of the particles.
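To make the comparison with the ideal gas law concrete, the following minimal Python sketch evaluates the ideal-gas and van der Waals pressures of carbon dioxide at the same temperature and molar volume. It is an illustration only: the constants $a$ and $b$ are the commonly tabulated van der Waals values for CO2, and the function names are ours, not from any library.

R = 8.314462618   # universal gas constant, J/(mol K)
a = 0.3640        # Pa m^6/mol^2, van der Waals attraction parameter for CO2
b = 4.267e-5      # m^3/mol, van der Waals co-volume parameter for CO2

def p_ideal(T, Vm):
    # Ideal gas law: p = RT/Vm
    return R * T / Vm

def p_vdw(T, Vm):
    # van der Waals equation rearranged for pressure: p = RT/(Vm - b) - a/Vm^2
    return R * T / (Vm - b) - a / Vm**2

T, Vm = 350.0, 2.0e-4      # K and m^3/mol: a fairly dense gas
print(p_ideal(T, Vm))      # about 14.6 MPa
print(p_vdw(T, Vm))        # about 9.4 MPa; molecular attraction lowers the pressure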
Ideal gas law
Classical ideal gas law
The classical ideal gas law may be written

$$pV = nRT.$$

In the form shown above, the equation of state is thus

$$f(p, V, T) = pV - nRT = 0.$$

If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as

$$p = \rho(\gamma - 1)e,$$

where $\rho$ is the density of the gas, $\gamma = c_p/c_v$ is the (constant) adiabatic index (ratio of specific heats), $e = c_v T$ is the internal energy per unit mass (the "specific internal energy"), $c_v$ is the specific heat capacity at constant volume, and $c_p$ is the specific heat capacity at constant pressure.
Quantum ideal gas law
Since the classical ideal gas law is well suited for atomic and molecular gases in most cases, let us describe the equation of state for elementary particles with mass $m$ and spin $s$ that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with $N$ particles occupying a volume $V$ with temperature $T$ and pressure $p$ is given by

$$p = \frac{(2s+1)\sqrt{2m^3}\,(k_B T)^{5/2}}{3\pi^2\hbar^3}\int_0^\infty \frac{z^{3/2}\,\mathrm{d}z}{e^{z-\mu/(k_B T)} \pm 1},$$

where $k_B$ is the Boltzmann constant and the chemical potential $\mu$ is given by the following implicit function:

$$N = \frac{(2s+1)V\sqrt{2m^3}\,(k_B T)^{3/2}}{2\pi^2\hbar^3}\int_0^\infty \frac{z^{1/2}\,\mathrm{d}z}{e^{z-\mu/(k_B T)} \pm 1}.$$
In the limiting case where $e^{\mu/(k_B T)} \ll 1$, this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state, in the limit $e^{\mu/(k_B T)} \ll 1$, reduces to

$$pV = N k_B T\left[1 \pm \frac{\pi^{3/2}}{2(2s+1)}\,\frac{N\hbar^3}{V\,(m k_B T)^{3/2}} + \cdots\right].$$
With a fixed number density $N/V$, decreasing the temperature causes, in a Fermi gas, an increase in the pressure above its classical value, implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects, not actual interactions between particles, since interaction forces are neglected in an ideal gas), and, in a Bose gas, a decrease in the pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on $s$ and $\hbar$.
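The leading term of the expansion above gives a quick numerical sense of how small the quantum correction is for ordinary gases. The Python sketch below evaluates it for helium-4 atoms treated as spin-0 bosons; the chosen number density and temperatures are illustrative assumptions, not measured data.

import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
kB = 1.380649e-23         # Boltzmann constant, J/K
m = 6.6464731e-27         # mass of a helium-4 atom, kg
s = 0                     # spin; for bosons the correction reduces the pressure

def relative_quantum_correction(n, T):
    # Magnitude of the leading correction to pV = N kB T in the expansion above
    return (math.pi**1.5 / (2.0 * (2 * s + 1))) * n * hbar**3 / (m * kB * T)**1.5

n = 2.7e25                # particles per m^3, roughly atmospheric density
for T in (300.0, 10.0, 1.0):
    print(T, relative_quantum_correction(n, T))   # grows rapidly as T drops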
Cubic equations of state
Cubic equations of state are called such because they can be rewritten as a cubic function of $V_m$. Cubic equations of state originated from the van der Waals equation of state; hence, all cubic equations of state can be considered modified van der Waals equations of state. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state remain highly relevant today, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state.
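To make the "cubic" label concrete: multiplying the van der Waals equation through by $V_m^2$ and rearranging gives $p V_m^3 - (pb + RT) V_m^2 + a V_m - ab = 0$, a cubic in $V_m$. The Python sketch below, a minimal illustration rather than a production routine, finds its real roots with numpy, reusing the illustrative CO2 constants from the example above.

import numpy as np

R = 8.314462618           # J/(mol K)
a, b = 0.3640, 4.267e-5   # van der Waals constants for CO2 (SI units)

def vdw_molar_volumes(p, T):
    # Real roots of p*Vm^3 - (p*b + R*T)*Vm^2 + a*Vm - a*b = 0
    roots = np.roots([p, -(p * b + R * T), a, -a * b])
    real = roots[np.isreal(roots)].real
    return np.sort(real[real > b])   # physical volumes must exceed the co-volume b

# Below the critical temperature (about 304 K for these constants), a suitable
# pressure yields three roots: liquid, unstable, and vapor molar volumes.
print(vdw_molar_volumes(p=5.0e6, T=280.0))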
Virial equations of state
Virial equation of state
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation:

$$\frac{pV_m}{RT} = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \frac{D}{V_m^3} + \cdots$$

If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. $A$ is the first virial coefficient, which has a constant value of 1 and makes the statement that, when volume is large, all fluids behave like ideal gases. The second virial coefficient $B$ corresponds to interactions between pairs of molecules, $C$ to triplets, and so on. Accuracy can be increased indefinitely by considering higher-order terms. The coefficients $B$, $C$, $D$, etc. are functions of temperature only.
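As a small worked example, the sketch below computes the compressibility factor $Z = pV_m/(RT)$ from the virial series truncated after the third coefficient. The values of B and C are round numbers of the order of magnitude typical for a simple gas near room temperature, chosen purely for illustration and not taken from any data table.

def compressibility_factor(Vm, B, C):
    # Z = pVm/(RT) = 1 + B/Vm + C/Vm^2, truncated virial series
    return 1.0 + B / Vm + C / Vm**2

B = -1.5e-4    # m^3/mol; a negative B reflects net attraction at this temperature
C = 6.0e-9     # m^6/mol^2

for Vm in (1.0e-3, 2.5e-4, 1.0e-4):    # m^3/mol, increasingly dense gas
    print(Vm, compressibility_factor(Vm, B, C))   # Z drifts away from 1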
The BWR equation of state
The Benedict–Webb–Rubin (BWR) equation of state can be written as

$$p = \rho RT + \left(B_0 RT - A_0 - \frac{C_0}{T^2}\right)\rho^2 + (bRT - a)\rho^3 + a\alpha\rho^6 + \frac{c\rho^3}{T^2}\left(1 + \gamma\rho^2\right)e^{-\gamma\rho^2},$$

where

$p$ is pressure
$\rho$ is molar density

and $A_0$, $B_0$, $C_0$, $a$, $b$, $c$, $\alpha$, and $\gamma$ are the eight empirical parameters of the model.
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as

$$p = \rho RT + \left(B_0 RT - A_0 - \frac{C_0}{T^2} + \frac{D_0}{T^3} - \frac{E_0}{T^4}\right)\rho^2 + \left(bRT - a - \frac{d}{T}\right)\rho^3 + \alpha\left(a + \frac{d}{T}\right)\rho^6 + \frac{c\rho^3}{T^2}\left(1 + \gamma\rho^2\right)e^{-\gamma\rho^2}.$$
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
Physically based equations of state
There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature, density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
Perturbation theory-based models
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
Statistical associating fluid theory (SAFT)
An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
Multiparameter equations of state
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can usually be applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:

$$\frac{a(T, \rho)}{RT} = \alpha^{o}(\tau, \delta) + \alpha^{r}(\tau, \delta),$$

with the reduced density and temperature

$$\delta = \frac{\rho}{\rho_r}, \qquad \tau = \frac{T_r}{T}.$$
The reducing density $\rho_r$ and reducing temperature $T_r$ are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are currently available for about 50 of the most common industrial fluids, including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
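For orientation, once the residual reduced Helmholtz energy $\alpha^{r}$ is known, the pressure follows from a classical thermodynamic identity (a textbook relation, stated here as background rather than taken from the original article):

$$p = \rho R T\left[1 + \delta\left(\frac{\partial \alpha^{r}}{\partial \delta}\right)_{\tau}\right].$$

Other properties (internal energy, entropy, heat capacities, speed of sound) follow from analogous derivatives of $\alpha^{o}$ and $\alpha^{r}$.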
One example of such an equation of state is the form proposed by Span and Wagner, a somewhat simpler form intended for use in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
List of further equations of state
Stiffened equation of state
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:
$$p = \rho(\gamma - 1)e - \gamma p^0,$$

where $e$ is the internal energy per unit mass, $\gamma$ is an empirically determined constant typically taken to be about 6.1, and $p^0$ is another constant, representing the molecular attraction between water molecules. The magnitude of the correction $\gamma p^0$ is about 2 gigapascals (20,000 atmospheres).

The equation is stated in this form because the speed of sound in water is given by $c^2 = \gamma(p + p^0)/\rho$.
Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
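These numbers can be checked with a short script. The Python sketch below assumes $\gamma = 6.1$ and a correction $\gamma p^0$ of about 2 GPa, as stated above, and recovers a speed of sound close to the measured value of roughly 1480 m/s in water at atmospheric pressure.

import math

gamma = 6.1            # empirical stiffened-gas constant for water
p0 = 2.0e9 / gamma     # Pa, chosen so that the correction gamma*p0 is about 2 GPa
rho = 1000.0           # kg/m^3, density of water

def sound_speed(p):
    # Stiffened-gas speed of sound: c = sqrt(gamma * (p + p0) / rho)
    return math.sqrt(gamma * (p + p0) / rho)

print(sound_speed(101325.0))   # about 1410 m/s at atmospheric pressure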
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
Morse oscillator equation of state
An equation of state for the Morse oscillator has been derived, and it has the following form:

$$p = \Gamma_1 \nu + \Gamma_2 \nu^2,$$

where $\Gamma_1$ is the first-order virial parameter, which depends on the temperature, $\Gamma_2$ is the second-order virial parameter of the Morse oscillator, which depends on the parameters of the Morse oscillator in addition to the absolute temperature, and $\nu$ is the fractional volume of the system.
Ultrarelativistic equation of state
An ultrarelativistic fluid has equation of state

$$p = \rho_m c_s^2,$$

where $p$ is the pressure, $\rho_m$ is the mass density, and $c_s$ is the speed of sound.
Ideal Bose equation of state
The equation of state for an ideal Bose gas is

$$pV_m = RT\,\frac{\operatorname{Li}_{\alpha+1}(z)}{\zeta(\alpha)}\left(\frac{T}{T_c}\right)^{\alpha},$$

where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.
Jones–Wilkins–Lee equation of state for explosives (JWL equation)
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives:

$$p = A\left(1 - \frac{\omega}{R_1 V}\right)e^{-R_1 V} + B\left(1 - \frac{\omega}{R_2 V}\right)e^{-R_2 V} + \frac{\omega e_0}{V}.$$

The ratio $V = \rho_e/\rho$ is defined using $\rho_e$, the density of the explosive (solid part), and $\rho$, the density of the detonation products. The parameters $A$, $B$, $R_1$, $R_2$, and $\omega$ are given by several references. In addition, the initial density (solid part) $\rho_e$, speed of detonation $V_D$, Chapman–Jouguet pressure $P_{CJ}$ and the chemical energy per unit volume of the explosive $e_0$ are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are tabulated in such references.
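As a sketch of how the JWL form is evaluated in practice, the Python snippet below implements the equation above and evaluates it with commonly quoted fit parameters for TNT; these values are assumed here for illustration only, and authoritative parameters should be taken from the references mentioned above.

import math

def jwl_pressure(V, A, B, R1, R2, omega, e0):
    # JWL pressure as a function of the expansion ratio V = rho_e / rho
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * e0 / V)

# Commonly quoted JWL fit parameters for TNT (assumed, for illustration only)
A, B = 3.712e11, 3.231e9      # Pa
R1, R2, omega = 4.15, 0.95, 0.30
e0 = 7.0e9                    # J/m^3, chemical energy per unit volume

for V in (1.0, 2.0, 4.0):     # detonation products expanding
    print(V, jwl_pressure(V, A, B, R1, R2, omega, e0))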
Others
Tait equation for water and other liquids. Several equations are referred to as the Tait equation.
Murnaghan equation of state
Birch–Murnaghan equation of state
Stacey–Brennan–Irvine equation of state
Modified Rydberg equation of state
Adapted polynomial equation of state
Johnson–Holmquist equation of state
Mie–Grüneisen equation of state
Anton-Schmidt equation of state
State-transition equation
See also
Gas laws
Departure function
Table of thermodynamic equations
Real gas
Cluster expansion
Polytrope
References
External links
Equations of physics
Engineering thermodynamics
Mechanical engineering
Fluid mechanics
Thermodynamic models | Equation of state | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,273 | [
"Applied and interdisciplinary physics",
"Equations of physics",
"Thermodynamic models",
"Engineering thermodynamics",
"Statistical mechanics",
"Mathematical objects",
"Equations",
"Civil engineering",
"Thermodynamics",
"Mechanical engineering",
"Equations of state",
"Fluid mechanics"
] |
9,931 | https://en.wikipedia.org/wiki/Amplifier | An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.
An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors.
History
Vacuum tubes
The first practical device that could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s, when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.
The development of audio communication technology in form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission.
For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.
The development of thermionic valves which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device which was widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare, (to enlarge or expand), were first used for this new capability around 1915 when triodes became widespread.
The amplifying vacuum tube revolutionized electrical technology. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode.
The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s.
Transistors
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Due to MOSFET scaling, the ability to scale down to increasingly small sizes, the MOSFET has since become the most widely used amplifier.
The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited to some high power applications, such as radio transmitters, as well as some musical instrument and high-end audiophile amplifiers.
Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier.
Ideal
In principle, an amplifier is an electrical two-port network that produces a signal at the output port that is a replica of the signal applied to the input port, but increased in magnitude.
The input port can be idealized as either being a voltage input, which takes no current, with the output proportional to the voltage across the port; or a current input, with no voltage across it, in which the output is proportional to the current through the port. The output port can be idealized as being either a dependent voltage source, with zero source resistance and its output voltage dependent on the input; or a dependent current source, with infinite source resistance and the output current dependent on the input. Combinations of these choices lead to four types of ideal amplifiers. In idealized form they are represented by each of the four types of dependent source used in linear analysis, namely:

Voltage amplifier – voltage-controlled voltage source (VCVS)
Current amplifier – current-controlled current source (CCCS)
Transconductance amplifier – voltage-controlled current source (VCCS)
Transresistance amplifier – current-controlled voltage source (CCVS)

Each type of amplifier in its ideal form has an ideal input and output resistance that is the same as that of the corresponding dependent source: infinite input and zero output resistance for the voltage amplifier, zero input and infinite output resistance for the current amplifier, infinite input and output resistance for the transconductance amplifier, and zero input and output resistance for the transresistance amplifier.
In real amplifiers the ideal impedances are not possible to achieve, but these ideal elements can be used to construct equivalent circuits of real amplifiers by adding impedances (resistance, capacitance and inductance) to the input and output. For any particular circuit, a small-signal analysis is often used to find the actual impedance. A small-signal AC test current Ix is applied to the input or output node, all external sources are set to AC zero, and the corresponding alternating voltage Vx across the test current source determines the impedance seen at that node as R = Vx / Ix.
Amplifiers designed to attach to a transmission line at input and output, especially RF amplifiers, do not fit into this classification approach. Rather than dealing with voltage or current individually, they ideally couple with an input or output impedance matched to the transmission line impedance, that is, match ratios of voltage to current. Many real RF amplifiers come close to this ideal. Although, for a given appropriate source and load impedance, RF amplifiers can be characterized as amplifying voltage or current, they fundamentally are amplifying power.
Properties
Amplifier properties are given by parameters that include:
Gain, the ratio between the magnitude of output and input signals
Bandwidth, the width of the useful frequency range
Efficiency, the ratio between the power of the output and total power consumption
Linearity, the extent to which the proportion between input and output amplitude is the same for high amplitude and low amplitude input
Noise, a measure of undesired noise mixed into the output
Output dynamic range, the ratio of the largest and the smallest useful output levels
Slew rate, the maximum rate of change of the output
Rise time, settling time, ringing and overshoot that characterize the step response
Stability, the ability to avoid self-oscillation
Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)).
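As a minimal illustration of the decibel convention just mentioned (standard definitions, sketched in Python):

import math

def voltage_gain_db(v_out, v_in):
    # Voltage gain in decibels: 20*log10 of the voltage ratio
    return 20.0 * math.log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    # Power gain in decibels: 10*log10 of the power ratio
    return 10.0 * math.log10(p_out / p_in)

print(voltage_gain_db(10.0, 1.0))   # 20.0 dB for a tenfold voltage ratio
print(power_gain_db(100.0, 1.0))    # 20.0 dB for a hundredfold power ratio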
Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.
Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor.
Negative feedback
Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the "closed loop performance") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps).
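The gain reduction can be made concrete with the standard closed-loop relation $A_{fb} = A/(1 + A\beta)$, where $A$ is the open-loop gain and $\beta$ the fraction of the output fed back. The Python sketch below (illustrative values and function names, not from any library) shows how a large but poorly controlled open-loop gain collapses onto a value set almost entirely by the feedback network, close to $1/\beta$.

def closed_loop_gain(A, beta):
    # Negative-feedback amplifier: A_fb = A / (1 + A*beta)
    return A / (1.0 + A * beta)

beta = 0.01                     # feedback network returns 1% of the output
for A in (1e4, 1e5, 1e6):       # open-loop gain spread over two decades
    print(A, closed_loop_gain(A, beta))   # all close to 1/beta = 100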
Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output. Indeed, the ability of the feedback loop to define the output is used to make active filter circuits.
Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop.
Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics.
Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator.
Categories
Active devices
All amplifiers include some form of active device: this is the device that does the actual amplification. The active device can be a vacuum tube, discrete solid state component, such as a single transistor, or part of an integrated circuit, as in an op-amp.
Transistor amplifiers (or solid state amplifiers) are the most common type of amplifier in use today. A transistor is used as the active element. The gain of the amplifier is determined by the properties of the transistor itself as well as the circuit it is contained within.
Common active devices in transistor amplifiers include bipolar junction transistors (BJTs) and metal oxide semiconductor field-effect transistors (MOSFETs).
Applications are numerous. Some common examples are audio amplifiers in a home stereo or public address system, RF high power generation for semiconductor equipment, to RF and microwave applications such as radio transmitters.
Transistor-based amplification can be realized using various configurations: for example a bipolar junction transistor can realize common base, common collector or common emitter amplification; a MOSFET can realize common gate, common source or common drain amplification. Each configuration has different characteristics.
Vacuum-tube amplifiers (also known as tube amplifiers or valve amplifiers) use a vacuum tube as the active device. While semiconductor amplifiers have largely displaced valve amplifiers for low-power applications, valve amplifiers can be much more cost effective in high power applications such as radar, countermeasures equipment, and communications equipment. Many microwave amplifiers are specially designed valve amplifiers, such as the klystron, gyrotron, traveling wave tube, and crossed-field amplifier, and these microwave valves provide much greater single-device power output at microwave frequencies than solid-state devices. Vacuum tubes remain in use in some high end audio equipment, as well as in musical instrument amplifiers, due to a preference for "tube sound".
Magnetic amplifiers are devices somewhat similar to a transformer where one winding is used to control the saturation of a magnetic core and hence alter the impedance of the other winding. They have largely fallen out of use due to development in semiconductor amplifiers but are still useful in HVDC control, and in nuclear power control circuitry due to not being affected by radioactivity.
Negative resistances can be used as amplifiers, such as the tunnel diode amplifier.
Power amplifiers
A power amplifier is an amplifier designed primarily to increase the power available to a load. In practice, amplifier power gain depends on the source and load impedances, as well as the inherent voltage and current gain. A radio frequency (RF) amplifier design typically optimizes impedances for power transfer, while audio and instrumentation amplifier designs normally optimize input and output impedance for least loading and highest signal integrity. An amplifier that is said to have a gain of 20 dB might have a voltage gain of 20 dB and an available power gain of much more than 20 dB (power ratio of 100)—yet actually deliver a much lower power gain if, for example, the input is from a 600 Ω microphone and the output connects to a 47 kΩ input socket for a power amplifier. In general, the power amplifier is the last 'amplifier' or actual circuit in a signal chain (the output stage) and is the amplifier stage that requires attention to power efficiency. Efficiency considerations lead to the various classes of power amplifiers based on the biasing of the output transistors or tubes: see power amplifier classes below.
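The 600 Ω/47 kΩ example above can be checked numerically. The Python sketch below uses the simplifying assumption that the input power is reckoned against the 600 Ω source impedance and the output power against the 47 kΩ load; a 20 dB voltage gain then corresponds to only about 1 dB of delivered power gain.

import math

r_source = 600.0       # ohm, microphone source impedance
r_load = 47000.0       # ohm, power-amplifier input socket
voltage_gain = 10.0    # a 20 dB voltage gain

# Power scales as V^2/R, so the power ratio is the voltage ratio squared,
# weighted by the impedance ratio.
power_ratio = voltage_gain**2 * (r_source / r_load)
print(10.0 * math.log10(power_ratio))   # about 1.1 dB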
Audio power amplifiers are typically used to drive loudspeakers. They will often have two output channels and deliver equal power to each. An RF power amplifier is found in radio transmitter final stages. A servo motor controller amplifies a control voltage to adjust the speed of a motor, or the position of a motorized system.
Operational amplifiers (op-amps)
An operational amplifier is an amplifier circuit which typically has very high open loop gain and differential inputs. Op amps have become very widely used as standardized "gain blocks" in circuits due to their versatility; their gain, bandwidth and other characteristics can be controlled by feedback through an external circuit. Though the term today commonly applies to integrated circuits, the original operational amplifier design used valves, and later designs used discrete transistor circuits.
A fully differential amplifier is similar to the operational amplifier, but also has differential outputs. These are usually constructed using BJTs or FETs.
Distributed amplifiers
These use balanced transmission lines to separate individual single-stage amplifiers, the outputs of which are summed by the same transmission line. The transmission line is a balanced type, with the input at one end and on one side only of the balanced transmission line, and the output at the opposite end and on the opposite side. The gain of each stage adds linearly to the output, rather than multiplying one on the other as in a cascade configuration. This allows a higher bandwidth to be achieved than could otherwise be realised, even with the same gain-stage elements.
Switched mode amplifiers
These nonlinear amplifiers have much higher efficiencies than linear amps, and are used where the power saving justifies the extra complexity. Class-D amplifiers are the main example of this type of amplification.
Negative resistance amplifier
A negative resistance amplifier is a type of regenerative amplifier that can use the feedback between the transistor's source and gate to transform a capacitive impedance on the transistor's source to a negative resistance on its gate. Compared to other types of amplifiers, a negative resistance amplifier will require only a tiny amount of power to achieve very high gain, maintaining a good noise figure at the same time.
Applications
Video amplifiers
Video amplifiers are designed to process video signals and have varying bandwidths depending on whether the video signal is for SDTV, EDTV, HDTV 720p or 1080i/p etc. The specification of the bandwidth itself depends on what kind of filter is used and at which point (−1 dB or −3 dB, for example) the bandwidth is measured. Certain requirements for step response and overshoot are necessary for an acceptable TV image.
Microwave amplifiers
Traveling wave tube amplifiers (TWTAs) are used for high power amplification at low microwave frequencies. They typically can amplify across a broad spectrum of frequencies; however, they are usually not as tunable as klystrons.
Klystrons are specialized linear-beam vacuum-devices, designed to provide high power, widely tunable amplification of millimetre and sub-millimetre waves. Klystrons are designed for large scale operations and despite having a narrower bandwidth than TWTAs, they have the advantage of coherently amplifying a reference signal so its output may be precisely controlled in amplitude, frequency and phase.
Solid-state devices such as silicon short-channel MOSFETs like double-diffused metal–oxide–semiconductor (DMOS) FETs, GaAs FETs, SiGe and GaAs heterojunction bipolar transistors/HBTs, HEMTs, IMPATT diodes, and others, are used especially at lower microwave frequencies and power levels on the order of watts, specifically in applications like portable RF terminals/cell phones and access points where size and efficiency are the drivers. New materials like gallium nitride (GaN), GaN on silicon, or GaN on silicon carbide/SiC are emerging in HEMT transistors and in applications where improved efficiency, wide bandwidth, and operation from roughly a few GHz to a few tens of GHz with output power from a few watts to a few hundred watts are needed.
Depending on the amplifier specifications and size requirements microwave amplifiers can be realised as monolithically integrated, integrated as modules or based on discrete parts or any combination of those.
The maser is a non-electronic microwave amplifier.
Musical instrument amplifiers
Instrument amplifiers are a range of audio power amplifiers used to increase the sound level of musical instruments, for example guitars, during performances. An amplifier's tone mainly comes from the order and amount in which it applies EQ and distortion.
Classification of amplifier stages and systems
Common terminal
One set of classifications for amplifiers is based on which device terminal is common to both the input and the output circuit. In the case of bipolar junction transistors, the three classes are common emitter, common base, and common collector. For field-effect transistors, the corresponding configurations are common source, common gate, and common drain; for vacuum tubes, common cathode, common grid, and common plate.
The common emitter (or common source, common cathode, etc.) is most often configured to provide amplification of a voltage applied between base and emitter, and the output signal taken between collector and emitter is inverted relative to the input. The common collector arrangement applies the input voltage between base and collector, and takes the output voltage between emitter and collector. This causes negative feedback, and the output voltage tends to follow the input voltage. This arrangement is also used as the input presents a high impedance and does not load the signal source, though the voltage amplification is less than one. The common-collector circuit is, therefore, better known as an emitter follower, source follower, or cathode follower.
Unilateral or bilateral
An amplifier whose output exhibits no feedback to its input side is described as 'unilateral'. The input impedance of a unilateral amplifier is independent of load, and output impedance is independent of signal source impedance.
An amplifier that uses feedback to connect part of the output back to the input is a bilateral amplifier. Bilateral amplifier input impedance depends on the load, and output impedance on the signal source impedance.
All amplifiers are bilateral to some degree; however they may often be modeled as unilateral under operating conditions where feedback is small enough to neglect for most purposes, simplifying analysis (see the common base article for an example).
Inverting or non-inverting
Another way to classify amplifiers is by the phase relationship of the input signal to the output signal. An 'inverting' amplifier produces an output 180 degrees out of phase with the input signal (that is, a polarity inversion or mirror image of the input as seen on an oscilloscope). A 'non-inverting' amplifier maintains the phase of the input signal waveforms. An emitter follower is a type of non-inverting amplifier, indicating that the signal at the emitter of a transistor is following (that is, matching with unity gain, but perhaps with an offset) the input signal. A voltage follower is also a non-inverting amplifier with unity gain.
This description can apply to a single stage of an amplifier, or to a complete amplifier system.
Function
Other amplifiers may be classified by their function or output characteristics. These functional descriptions usually apply to complete amplifier systems or sub-systems and rarely to individual stages.
A servo amplifier indicates an integrated feedback loop to actively control the output at some desired level. A DC servo indicates use at frequencies down to DC levels, where the rapid fluctuations of an audio or RF signal do not occur. These are often used in mechanical actuators, or devices such as DC motors that must maintain a constant speed or torque. An AC servo amplifier can do this for some AC motors.
A linear amplifier responds to different frequency components independently, and does not generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity (even the most linear amplifier has some nonlinearities, since the amplifying devices—transistors or vacuum tubes—follow nonlinear power laws such as square-laws and rely on circuitry techniques to reduce those effects).
A nonlinear amplifier generates significant distortion and so changes the harmonic content; there are situations where this is useful. Amplifier circuits intentionally providing a non-linear transfer function include:
a device like a silicon controlled rectifier or a transistor used as a switch may be employed to turn either fully on or off a load such as a lamp based on a threshold in a continuously variable input.
a non-linear amplifier in an analog computer or true RMS converter for example can provide a special transfer function, such as logarithmic or square-law.
a Class C RF amplifier may be chosen because it can be very efficient—but is non-linear. Following such an amplifier with a so-called tank tuned circuit can reduce unwanted harmonics (distortion) sufficiently to make it useful in transmitters, or some desired harmonic may be selected by setting the resonant frequency of the tuned circuit to a higher frequency rather than fundamental frequency in frequency multiplier circuits.
Automatic gain control circuits require an amplifier's gain be controlled by the time-averaged amplitude so that the output amplitude varies little when weak stations are being received. The non-linearities are assumed arranged so the relatively small signal amplitude suffers from little distortion (cross-channel interference or intermodulation) yet is still modulated by the relatively large gain-control DC voltage.
AM detector circuits that use amplification such as anode-bend detectors, precision rectifiers and infinite impedance detectors (so excluding unamplified detectors such as cat's-whisker detectors), as well as peak detector circuits, rely on changes in amplification based on the signal's instantaneous amplitude to derive a direct current from an alternating current input.
Operational amplifier comparator and detector circuits.
A wideband amplifier has a precise amplification factor over a wide frequency range, and is often used to boost signals for relay in communications systems. A narrowband amp amplifies a specific narrow range of frequencies, to the exclusion of other frequencies.
An RF amplifier amplifies signals in the radio frequency range of the electromagnetic spectrum, and is often used to increase the sensitivity of a receiver or the output power of a transmitter.
An audio amplifier amplifies audio frequencies. This category subdivides into small-signal amplification and power amps that are optimised for driving speakers, sometimes with multiple amps grouped together as separate or bridgeable channels to accommodate different audio reproduction requirements. Frequently used terms within audio amplifiers include:
Preamplifier (preamp.), which may include a phono preamp with RIAA equalization, or tape head preamps with CCIR equalisation filters. They may include filters or tone control circuitry.
Power amplifier (normally drives loudspeakers), headphone amplifiers, and public address amplifiers.
Stereo amplifiers imply two channels of output (left and right), though the term simply means "solid" sound (referring to three-dimensional)—so quadraphonic stereo was used for amplifiers with four channels. 5.1 and 7.1 systems refer to Home theatre systems with 5 or 7 normal spatial channels, plus a subwoofer channel.
Buffer amplifiers, which may include emitter followers, provide a high impedance input for a device (perhaps another amplifier, or perhaps an energy-hungry load such as lights) that would otherwise draw too much current from the source. Line drivers are a type of buffer that feeds long or interference-prone interconnect cables, possibly with differential outputs through twisted pair cables.
Interstage coupling method
Amplifiers are sometimes classified by the coupling method of the signal at the input, output, or between stages. Different types of these include:
Resistive-capacitive (RC) coupled amplifier, using a network of resistors and capacitors. By design these amplifiers cannot amplify DC signals, as the capacitors block the DC component of the input signal. RC-coupled amplifiers were used very often in circuits with vacuum tubes or discrete transistors. In the days of the integrated circuit, a few more transistors on a chip are much cheaper and smaller than a capacitor.
Inductive-capacitive (LC) coupled amplifier, using a network of inductors and capacitors. This kind of amplifier is most often used in selective radio-frequency circuits.
Transformer coupled amplifier, using a transformer to match impedances or to decouple parts of the circuits. Quite often LC-coupled and transformer-coupled amplifiers cannot be distinguished, as a transformer is a kind of inductor.
Direct coupled amplifier, using no impedance and bias matching components. This class of amplifier was very uncommon in the vacuum tube days, when the anode (output) voltage was at greater than several hundred volts and the grid (input) voltage at a few volts below ground. So they were used only if the gain was specified down to DC (e.g., in an oscilloscope). In the context of modern electronics, developers are encouraged to use directly coupled amplifiers whenever possible. In FET and CMOS technologies direct coupling is dominant, since gates of MOSFETs theoretically pass no current through themselves; therefore, the DC component of the input signals is automatically passed on to the next stage.
Frequency range
Depending on the frequency range and other properties amplifiers are designed according to different principles.
Frequency ranges down to DC are used only when this property is needed. Amplifiers for direct current signals are vulnerable to minor variations in the properties of components with time. Special methods, such as chopper stabilized amplifiers are used to prevent objectionable drift in the amplifier's properties for DC. "DC-blocking" capacitors can be added to remove DC and sub-sonic frequencies from audio amplifiers.
Depending on the frequency range specified different design principles must be used. Up to the MHz range only "discrete" properties need be considered; e.g., a terminal has an input impedance.
As soon as any connection within the circuit gets longer than perhaps 1% of the wavelength of the highest specified frequency (e.g., at 100 MHz the wavelength is 3 m, so the critical connection length is approx. 3 cm) design properties radically change. For example, a specified length and width of a PCB trace can be used as a selective or impedance-matching entity.
Above a few hundred MHz, it gets difficult to use discrete elements, especially inductors. In most cases, PCB traces of very closely defined shapes are used instead (stripline techniques).
The frequency range handled by an amplifier might be specified in terms of bandwidth (normally implying a response that is 3 dB down when the frequency reaches the specified bandwidth), or by specifying a frequency response that is within a certain number of decibels between a lower and an upper frequency (e.g. "20 Hz to 20 kHz plus or minus 1 dB").
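For reference, the "3 dB down" convention corresponds to the half-power point; a one-line check of the ratios involved, using the standard decibel definitions:

```python
# -3 dB relative to mid-band: power falls to ~50%, voltage to ~70.7%.
print(10 ** (-3 / 10))   # power ratio   ~0.501
print(10 ** (-3 / 20))   # voltage ratio ~0.708
```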
Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for analog designs—and class D and E for switching designs. The power amplifier classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.
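The relation between bias and conduction angle can be made concrete with a small sketch. Assuming an idealized device that conducts whenever its normalized drive, bias + cos θ, is positive, the conduction angle works out to 2·arccos(−bias):

```python
import math

def conduction_angle_deg(bias, amplitude=1.0):
    """Conduction angle for an idealized device driven by
    bias + amplitude*cos(theta), conducting when that sum is positive."""
    x = bias / amplitude
    if x >= 1.0:
        return 360.0  # always conducting: class A
    if x <= -1.0:
        return 0.0    # never conducting
    return 2.0 * math.degrees(math.acos(-x))

for b in (1.0, 0.5, 0.0, -0.5):
    print(f"bias = {b:+.1f} -> {conduction_angle_deg(b):5.1f} deg")
# 360.0 (class A), 240.0 (class AB), 180.0 (class B), 120.0 (class C)
```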
Example amplifier circuit
The practical amplifier circuit shown above could be the basis for a moderate-power audio amplifier. It features a typical (though substantially simplified) design as found in modern amplifiers, with a class-AB push–pull output stage, and uses some overall negative feedback. Bipolar transistors are shown, but this design would also be realizable with FETs or valves.
The input signal is coupled through capacitor C1 to the base of transistor Q1. The capacitor allows the AC signal to pass, but blocks the DC bias voltage established by resistors R1 and R2 so that any preceding circuit is not affected by it. Q1 and Q2 form a differential amplifier (an amplifier that multiplies the difference between two inputs by some constant), in an arrangement known as a long-tailed pair. This arrangement is used to conveniently allow the use of negative feedback, which is fed from the output to Q2 via R7 and R8.
The negative feedback into the difference amplifier allows the amplifier to compare the input to the actual output. The amplified signal from Q1 is directly fed to the second stage, Q3, which is a common emitter stage that provides further amplification of the signal and the DC bias for the output stages, Q4 and Q5. R6 provides the load for Q3 (a better design would probably use some form of active load here, such as a constant-current sink). So far, all of the amplifier is operating in class A. The output pair are arranged in class-AB push–pull, also called a complementary pair. They provide the majority of the current amplification (while consuming low quiescent current) and directly drive the load, connected via DC-blocking capacitor C2. The diodes D1 and D2 provide a small amount of constant voltage bias for the output pair, just biasing them into the conducting state so that crossover distortion is minimized. That is, the diodes push the output stage firmly into class-AB mode (assuming that the base-emitter drop of the output transistors is reduced by heat dissipation).
This design is simple, but a good basis for a practical design because it automatically stabilises its operating point, since feedback internally operates from DC up through the audio range and beyond. Further circuit elements would probably be found in a real design to roll off the frequency response above the needed range and prevent the possibility of unwanted oscillation. Also, the use of fixed diode bias as shown here can cause problems if the diodes are not both electrically and thermally matched to the output transistors: if the output transistors turn on too much, they can easily overheat and destroy themselves, as the full current from the power supply is not limited at this stage.
A common solution to help stabilise the output devices is to include some emitter resistors, typically one ohm or so. Calculating the values of the circuit's resistors and capacitors is done based on the components employed and the intended use of the amp.
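To illustrate how the feedback network sets the overall gain in this kind of topology: if, as is typical, R7 runs from the output to Q2's input and R8 from there to ground, the closed-loop voltage gain approaches 1 + R7/R8 once the open-loop gain is large. The values below are hypothetical, chosen only for illustration; the schematic described above does not specify them:

```python
R7, R8 = 22e3, 1e3    # hypothetical feedback divider, ohms
A_open = 10_000       # assumed open-loop voltage gain

beta = R8 / (R7 + R8)                    # fraction of output fed back to Q2
A_ideal = 1 + R7 / R8                    # limit for infinite open-loop gain
A_closed = A_open / (1 + A_open * beta)  # standard feedback gain formula

print(f"ideal closed-loop gain: {A_ideal:.2f}")    # 23.00
print(f"with finite open loop : {A_closed:.2f}")   # ~22.95
```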
Notes on implementation
Any real amplifier is an imperfect realization of an ideal amplifier. An important limitation of a real amplifier is that its output is ultimately limited by the power available from the power supply. An amplifier saturates and clips the output if the input signal becomes too large for the amplifier to reproduce or exceeds operational limits for the device. The power supply may influence the output, so it must be considered in the design. The power output from an amplifier cannot exceed the power input to it.
The amplifier circuit has an "open loop" performance. This is described by various parameters (gain, slew rate, output impedance, distortion, bandwidth, signal-to-noise ratio, etc.). Many modern amplifiers use negative feedback techniques to hold the gain at the desired value and reduce distortion. Negative loop feedback has the intended effect of lowering the output impedance and thereby increasing electrical damping of loudspeaker motion at and near the resonance frequency of the speaker.
When assessing rated amplifier power output, it is useful to consider the applied load, the signal type (e.g., speech or music), required power output duration (i.e., short-time or continuous), and required dynamic range (e.g., recorded or live audio). In high-powered audio applications that require long cables to the load (e.g., cinemas and shopping centres) it may be more efficient to connect to the load at line output voltage, with matching transformers at source and loads. This avoids long runs of heavy speaker cables.
Preventing instability and overheating requires care to ensure that solid-state amplifiers are adequately loaded. Most have a rated minimum load impedance.
All amplifiers generate heat through electrical losses. The amplifier must dissipate this heat via convection or forced air cooling. Heat can damage or reduce electronic component service life. Designers and installers must also consider heating effects on adjacent equipment.
Different power supply types result in many different methods of bias. Bias is a technique by which active devices are set to operate in a particular region, or by which the DC component of the output signal is set to the midpoint between the maximum voltages available from the power supply. Most amplifiers use several devices at each stage; they are typically matched in specifications except for polarity. Matched inverted polarity devices are called complementary pairs. Class-A amplifiers generally use only one device, unless the power supply is set to provide both positive and negative voltages, in which case a dual device symmetrical design may be used. Class-C amplifiers, by definition, use a single polarity supply.
Amplifiers often have multiple stages in cascade to increase gain. Each stage of these designs may be a different type of amp to suit the needs of that stage. For instance, the first stage might be a class-A stage, feeding a class-AB push–pull second stage, which then drives a class-G final output stage, taking advantage of the strengths of each type, while minimizing their weaknesses.
Special types
Charge transfer amplifier
CMOS amplifiers
Current sense amplifier
Distributed amplifier
Doherty amplifier
Double-tuned amplifier
Faithful amplifier
Intermediate power amplifier
Low-noise amplifier
Negative feedback amplifier
Optical amplifier
Programmable-gain amplifier
Tuned amplifier
Valve amplifier
See also
Power added efficiency
References
External links
AES guide to amplifier classes
contains an explanation of different amplifier classes
Electronic circuits
Audiovisual introductions in 1906 | Amplifier | [
"Technology",
"Engineering"
] | 7,969 | [
"Electronic engineering",
"Electronic amplifiers",
"Amplifiers",
"Electronic circuits"
] |
9,944 | https://en.wikipedia.org/wiki/Episome | An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression.
As of 1999, many DNA (deoxyribonucleic acid) sequences were known that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence.
The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome.
Mechanism of episomal retention
The mechanism behind episomal retention in the case of S/MAR episomes is generally still uncertain.
As of 1985, in the case of latent Epstein-Barr virus infection, episomes seemed to be associated with nuclear proteins of the host cell through a set of viral proteins.
Episomes in prokaryotes
Episomes in prokaryotes are special sequences which can divide either separate from or integrated into the prokaryotic chromosome.
References
Molecular biology | Episome | [
"Chemistry",
"Biology"
] | 381 | [
"Biochemistry",
"Molecular biology"
] |
9,994 | https://en.wikipedia.org/wiki/Ephemeris%20time | The term ephemeris time (often abbreviated ET) can in principle refer to time in association with any ephemeris (itinerary of the trajectory of an astronomical object). In practice it has been used more specifically to refer to:
a former standard astronomical time scale adopted in 1952 by the IAU, and superseded during the 1970s. This time scale was proposed in 1948, to overcome the disadvantages of irregularly fluctuating mean solar time. The intent was to define a uniform time (as far as was then feasible) based on Newtonian theory (see below: Definition of ephemeris time (1952)). Ephemeris time was a first application of the concept of a dynamical time scale, in which the time and time scale are defined implicitly, inferred from the observed position of an astronomical object via the dynamical theory of its motion.
a modern relativistic coordinate time scale, implemented by the JPL ephemeris time argument Teph, in a series of numerically integrated Development Ephemerides. Among them is the DE405 ephemeris in widespread current use. The time scale represented by Teph is closely related to, but distinct (by an offset and constant rate) from, the TCB time scale currently adopted as a standard by the IAU (see below: JPL ephemeris time argument Teph).
Most of the following sections relate to the ephemeris time of the 1952 standard.
An impression has sometimes arisen that ephemeris time was in use from 1900: this probably arose because ET, though proposed and adopted in the period 1948–1952, was defined in detail using formulae that made retrospective use of the epoch date of 1900 January 0 and of Newcomb's Tables of the Sun.
The ephemeris time of the 1952 standard leaves a continuing legacy, through its historical unit ephemeris second which became closely duplicated in the length of the current standard SI second (see below: Redefinition of the second).
History (1952 standard)
Ephemeris time (ET), adopted as standard in 1952, was originally designed as an approach to a uniform time scale, to be freed from the effects of irregularity in the rotation of the Earth, "for the convenience of astronomers and other scientists", for example for use in ephemerides of the Sun (as observed from the Earth), the Moon, and the planets. It was proposed in 1948 by G M Clemence.
From the time of John Flamsteed (1646–1719) it had been believed that the Earth's daily rotation was uniform. But in the later nineteenth and early twentieth centuries, with increasing precision of astronomical measurements, it began to be suspected, and was eventually established, that the rotation of the Earth (i.e. the length of the day) showed irregularities on short time scales, and was slowing down on longer time scales. The evidence was compiled by W de Sitter (1927) who wrote "If we accept this hypothesis, then the 'astronomical time', given by the Earth's rotation, and used in all practical astronomical computations, differs from the 'uniform' or 'Newtonian' time, which is defined as the independent variable of the equations of celestial mechanics". De Sitter offered a correction to be applied to the mean solar time given by the Earth's rotation to get uniform time.
Other astronomers of the period also made suggestions for obtaining uniform time, including A Danjon (1929), who suggested in effect that observed positions of the Moon, Sun and planets, when compared with their well-established gravitational ephemerides, could better and more uniformly define and determine time.
Thus the aim developed, to provide a new time scale for astronomical and scientific purposes, to avoid the unpredictable irregularities of the mean solar time scale, and to replace for these purposes Universal Time (UT) and any other time scale based on the rotation of the Earth around its axis, such as sidereal time.
The American astronomer G M Clemence (1948) made a detailed proposal of this type based on the results of the English Astronomer Royal H Spencer Jones (1939). Clemence (1948) made it clear that his proposal was intended "for the convenience of astronomers and other scientists only" and that it was "logical to continue the use of mean solar time for civil purposes".
De Sitter and Clemence both referred to the proposal as 'Newtonian' or 'uniform' time. D Brouwer suggested the name 'ephemeris time'.
Following this, an astronomical conference held in Paris in 1950 recommended "that in all cases where the mean solar second is unsatisfactory as a unit of time by reason of its variability, the unit adopted should be the sidereal year at 1900.0, that the time reckoned in this unit be designated ephemeris time", and gave Clemence's formula (see Definition of ephemeris time (1952)) for translating mean solar time to ephemeris time.
The International Astronomical Union approved this recommendation at its 1952 general assembly. Practical introduction took some time (see Use of ephemeris time in official almanacs and ephemerides); ephemeris time (ET) remained a standard until superseded in the 1970s by further time scales (see Revision).
During the currency of ephemeris time as a standard, the details were revised a little. The unit was redefined in terms of the tropical year at 1900.0 instead of the sidereal year; and the standard second was defined first as 1/31556925.975 of the tropical year at 1900.0, and then as the slightly modified fraction 1/31556925.9747 instead, finally being redefined in 1967/8 in terms of the cesium atomic clock standard (see below).
Although ET is no longer directly in use, it leaves a continuing legacy. Its successor time scales, such as TDT, as well as the atomic time scale International Atomic Time (TAI), were designed with a relationship that "provides continuity with ephemeris time". ET was used for the calibration of atomic clocks in the 1950s. Close equality between the ET second and the later SI second (as defined with reference to the cesium atomic clock) has been verified to within 1 part in 1010.
In this way, decisions made by the original designers of ephemeris time influenced the length of today's standard SI second, and in turn, this has a continuing influence on the number of leap seconds which have been needed for insertion into current broadcast time scales, to keep them approximately in step with mean solar time.
Definition (1952)
Ephemeris time was defined in principle by the orbital motion of the Earth around the Sun (but its practical implementation was usually achieved in another way, see below). Its detailed definition was based on Simon Newcomb's Tables of the Sun (1895), implemented in a new way to accommodate certain observed discrepancies:
In the introduction to Tables of the Sun, the basis of the tables (p. 9) includes a formula for the Sun's mean longitude at a time, indicated by interval T (in units of Julian centuries of 36525 mean solar days), reckoned from Greenwich Mean Noon on 0 January 1900:
Ls = 279° 41' 48".04 + 129,602,768".13T +1".089T2 . . . . . (1)
Spencer Jones' work of 1939 showed that differences between the observed positions of the Sun and the predicted positions given by Newcomb's formula demonstrated the need for the following correction to the formula:
ΔLs = + 1".00 + 2".97T + 1".23T2 + 0.0748B
where "the times of observation are in Universal time, not corrected to Newtonian time," and 0.0748B represents an irregular fluctuation calculated from lunar observations.
Thus, a conventionally corrected form of Newcomb's formula, incorporating the corrections on the basis of mean solar time, would be the sum of the two preceding expressions:
Ls = 279° 41' 49".04 + 129,602,771".10T +2".32T2 +0.0748B . . . . . (2)
Clemence's 1948 proposal, however, did not adopt such a correction of mean solar time. Instead, the same numbers were used as in Newcomb's original uncorrected formula (1), but now applied somewhat prescriptively, to define a new time and time scale implicitly, based on the real position of the Sun:
Ls = 279° 41' 48".04 + 129,602,768".13E +1".089E2 . . . . . (3)
With this reapplication, the time variable, now given as E, represents time in ephemeris centuries of 36525 ephemeris days of 86400 ephemeris seconds each. The 1961 official reference summarized the concept thus: "The origin and rate of ephemeris time are defined to make the Sun's mean longitude agree with Newcomb's expression."
From the comparison of formulae (2) and (3), both of which express the same real solar motion in the same real time but defined on separate time scales, Clemence arrived at an explicit expression, estimating the difference in seconds of time between ephemeris time and mean solar time, in the sense (ET-UT):
ΔT = ET − UT = +24.349s + 72.3165sT + 29.949sT2 + 1.821B . . . . . (4)
with the 24.349 seconds of time corresponding to the 1.00" in ΔLs.
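The 24.349-second figure, and with it the other coefficients of formula (4), follow directly from Newcomb's linear coefficient; a short numerical check (small differences in the last digit reflect rounding of the quoted arcsecond coefficients):

```python
# How many seconds of time correspond to 1" of the sun's mean longitude?
sun_motion_arcsec = 129_602_768.13   # Newcomb: arcsec per Julian century
century_seconds = 36_525 * 86_400    # seconds per Julian century

s_per_arcsec = century_seconds / sun_motion_arcsec
print(f"{s_per_arcsec:.3f} s of time per arcsecond")   # ~24.349

# Spencer Jones's correction coefficients converted to seconds of time:
for c in (1.00, 2.97, 1.23, 0.0748):
    print(f'{c}" -> {c * s_per_arcsec:.3f} s')
# ~24.349, ~72.318, ~29.950, ~1.821 -- the coefficients of formula (4)
```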
Clemence's formula (today superseded by more modern estimations) was included in the original conference decision on ephemeris time. In view of the fluctuation term, practical determination of the difference between ephemeris time and UT depended on observation. Inspection of the formulae above shows that the (ideally constant) units of ephemeris time have been, for the whole of the twentieth century, very slightly shorter than the corresponding (but not precisely constant) units of mean solar time (which, besides their irregular fluctuations, tend to lengthen gradually). This finding is consistent with the modern results of Morrison and Stephenson (see article ΔT).
Implementations
Secondary realizations by lunar observations
Although ephemeris time was defined in principle by the orbital motion of the Earth around the Sun, it was usually measured in practice by the orbital motion of the Moon around the Earth. These measurements can be considered as secondary realizations (in a metrological sense) of the primary definition of ET in terms of the solar motion, after a calibration of the mean motion of the Moon with respect to the mean motion of the Sun.
Reasons for the use of lunar measurements were practically based: the Moon moves against the background of stars about 13 times as fast as the Sun's corresponding rate of motion, and the accuracy of time determinations from lunar measurements is correspondingly greater.
When ephemeris time was first adopted, time scales were still based on astronomical observation, as they always had been. The accuracy was limited by the accuracy of optical observation, and corrections of clocks and time signals were published in arrear.
Secondary realizations by atomic clocks
A few years later, with the invention of the cesium atomic clock, an alternative offered itself. Increasingly, after the calibration in 1958 of the cesium atomic clock by reference to ephemeris time, cesium atomic clocks running on the basis of ephemeris seconds began to be used and kept in step with ephemeris time. The atomic clocks offered a further secondary realization of ET, on a quasi-real time basis that soon proved to be more useful than the primary ET standard: not only more convenient, but also more precisely uniform than the primary standard itself. Such secondary realizations were used and described as 'ET', with an awareness that the time scales based on the atomic clocks were not identical to that defined by the primary ephemeris time standard, but rather, an improvement over it on account of their closer approximation to uniformity. The atomic clocks gave rise to the atomic time scale, and to what was first called Terrestrial Dynamical Time and is now Terrestrial Time, defined to provide continuity with ET.
The availability of atomic clocks, together with the increasing accuracy of astronomical observations (which meant that relativistic corrections were at least in the foreseeable future no longer going to be small enough to be neglected), led to the eventual replacement of the ephemeris time standard by more refined time scales including terrestrial time and barycentric dynamical time, to which ET can be seen as an approximation.
Revision of time scales
In 1976, the IAU resolved that the theoretical basis for its then-current (since 1952) standard of Ephemeris Time was non-relativistic, and that therefore, beginning in 1984, Ephemeris Time would be replaced by two relativistic timescales intended to constitute dynamical timescales: Terrestrial Dynamical Time (TDT) and Barycentric Dynamical Time (TDB). Difficulties were recognized with these, which led to their being superseded in turn in the 1990s by the time scales Terrestrial Time (TT), Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB).
JPL ephemeris time argument Teph
High-precision ephemerides of sun, moon and planets were developed and calculated at the Jet Propulsion Laboratory (JPL) over a long period, and the latest available were adopted for the ephemerides in the Astronomical Almanac starting in 1984. Although not an IAU standard, the ephemeris time argument Teph has been in use at that institution since the 1960s. The time scale represented by Teph has been characterized as a relativistic coordinate time that differs from Terrestrial Time only by small periodic terms with an amplitude not exceeding 2 milliseconds of time: it is linearly related to, but distinct (by an offset and constant rate which is of the order of 0.5 s/a) from the TCB time scale adopted in 1991 as a standard by the IAU. Thus for clocks on or near the geoid, Teph (within 2 milliseconds), but not so closely TCB, can be used as approximations to Terrestrial Time, and via the standard ephemerides Teph is in widespread use.
Partly in acknowledgement of the widespread use of Teph via the JPL ephemerides, IAU resolution 3 of 2006 (re-)defined Barycentric Dynamical Time (TDB) as a current standard. As re-defined in 2006, TDB is a linear transformation of TCB. The same IAU resolution also stated (in note 4) that the "independent time argument of the JPL ephemeris DE405, which is called Teph" (here the IAU source cites Standish 1998), "is for practical purposes the same as TDB defined in this Resolution". Thus the new TDB, like Teph, is essentially a more refined continuation of the older ephemeris time ET and (apart from the periodic fluctuations) has the same mean rate as that established for ET in the 1950s.
Use in official almanacs and ephemerides
Ephemeris time based on the standard adopted in 1952 was introduced into the Astronomical Ephemeris (UK) and the American Ephemeris and Nautical Almanac, replacing UT in the main ephemerides in the issues for 1960 and after. (But the ephemerides in the Nautical Almanac, by then a separate publication for the use of navigators, continued to be expressed in terms of UT.) The ephemerides continued on this basis through 1983 (with some changes due to adoption of improved values of astronomical constants), after which, for 1984 onwards, they adopted the JPL ephemerides.
Previous to the 1960 change, the 'Improved Lunar Ephemeris' had already been made available in terms of ephemeris time for the years 1952–1959 (computed by W J Eckert from Brown's theory with modifications recommended by Clemence (1948)).
Redefinition of the second
Successive definitions of the unit of ephemeris time are mentioned above (History). The value adopted for the 1956/1960 standard second:
the fraction 1/31 556 925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.
was obtained from the linear time-coefficient in Newcomb's expression for the solar mean longitude (above), taken and applied with the same meaning for the time as in formula (3) above. The relation with Newcomb's coefficient can be seen from:
1/31 556 925.9747 = 129 602 768.13 / (360×60×60×36 525×86 400).
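This relation is straightforward to verify numerically; a minimal check:

```python
# 1/31,556,925.9747 = 129,602,768.13 / (360*60*60 * 36,525 * 86,400)
newcomb = 129_602_768.13            # arcsec per Julian century
arcsec_per_rev = 360 * 60 * 60      # 1,296,000 arcsec in a full circle
sec_per_century = 36_525 * 86_400   # 3,155,760,000 s

fraction = newcomb / (arcsec_per_rev * sec_per_century)
print(f"1 tropical year = {1/fraction:,.4f} ephemeris seconds")
# -> 31,556,925.9747
```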
Caesium atomic clocks became operational in 1955, and quickly confirmed the evidence that the rotation of the Earth fluctuated irregularly. This confirmed the unsuitability of the mean solar second of Universal Time as a measure of time interval for the most precise purposes. After three years of comparisons with lunar observations, Markowitz et al. (1958) determined that the ephemeris second corresponded to 9 192 631 770 ± 20 cycles of the chosen cesium resonance.
Following this, in 1967/68, the General Conference on Weights and Measures (CGPM) replaced the definition of the SI second by the following:
The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
Although this is an independent definition that does not refer to the older basis of ephemeris time, it uses the same quantity as the value of the ephemeris second measured by the cesium clock in 1958. This SI second referred to atomic time was later verified by Markowitz (1988) to be in agreement, within 1 part in 1010, with the second of ephemeris time as determined from lunar observations.
For practical purposes the length of the ephemeris second can be taken as equal to the length of the second of Barycentric Dynamical Time (TDB) or Terrestrial Time (TT) or its predecessor TDT.
The difference between ET and UT is called ΔT; it changes irregularly, but the long-term trend is parabolic, decreasing from ancient times until the nineteenth century, and increasing since then at a rate corresponding to an increase in the solar day length of 1.7 ms per century (see leap seconds).
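A rough sketch of why a steady lengthening of the day produces a parabolic ΔT; the 1.7 ms/century figure is taken from the text, and the resulting quadratic coefficient of about 31 s/century² is only an order-of-magnitude model that ignores the irregular fluctuations:

```python
# If each mean solar day lengthens by 1.7 ms per century, the lag
# accumulated over t centuries grows quadratically:
ms_per_day_per_century = 1.7
days_per_century = 36_525

def delta_t_seconds(t_centuries):
    # integrating a linearly growing daily excess -> (1/2)*rate*days*t^2
    return 0.5 * ms_per_day_per_century * 1e-3 * days_per_century * t_centuries**2

for t in (1, 5, 10, 20):
    print(f"{t:>2} centuries -> ~{delta_t_seconds(t):,.0f} s")
# ~31 s after one century, ~12,420 s after twenty
```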
International Atomic Time (TAI) was set equal to UT2 at 1 January 1958 0:00:00. At that time, ΔT was already about 32.18 seconds. The difference between Terrestrial Time (TT) (the successor to ephemeris time) and atomic time was later defined as follows:
1977 January 1.000 3725 TT = 1977 January 1.000 0000 TAI, i.e.
TT − TAI = 32.184 seconds
This difference may be assumed constant—the rates of TT and TAI are designed to be identical.
Notes and references
Bibliography
G M Clemence, "On the System of Astronomical Constants", Astronomical Journal, vol. 53(6) (1948), issue #1170, pp. 169–179.
G M Clemence (1971), "The Concept of Ephemeris Time", Journal for the History of Astronomy, vol. 2 (1971), pp. 73–79.
B Guinot and P K Seidelmann (1988), "Time scales – Their history, definition and interpretation", Astronomy and Astrophysics, vol. 194 (nos. 1–2) (April 1988), pp. 304–308.
'ESAA (1992)': P K Seidelmann (ed.), "Explanatory Supplement to the Astronomical Almanac", University Science Books, CA, 1992; .
'ESAE 1961': "Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac" ('prepared jointly by the Nautical Almanac Offices of the United Kingdom and the United States of America', HMSO, London, 1961).
IAU resolutions (1976): Resolutions adopted by the IAU in 1976 at Grenoble.
"Improved Lunar Ephemeris", US Government Printing Office, 1954.
W Markowitz, R G Hall, S Edelson (1955), "Ephemeris time from photographic positions of the moon", Astronomical Journal, vol. 60 (1955), p. 171.
W Markowitz, R G Hall, L Essen, J V L Parry (1958), "Frequency of cesium in terms of ephemeris time", Physical Review Letters, vol. 1 (1958), 105–107.
W Markowitz (1959), "Variations in the Rotation of the Earth, Results Obtained with the Dual-Rate Moon Camera and Photographic Zenith Tubes", Astronomical Journal, vol. 64 (1959), pp. 106–113.
Wm Markowitz (1988), "Comparisons of ET(Solar), ET(Lunar), UT and TDT", in A K Babcock & G A Wilkins (eds.), The Earth's Rotation and Reference Frames for Geodesy and Geophysics, IAU Symposia #128 (1988), pp. 413–418.
Dennis McCarthy & P. Kenneth Seidelmann (2009), TIME From Earth Rotation to Atomic Physics, Wiley-VCH, Weinheim, .
W G Melbourne, J D Mulholland, W L Sjogren, F M Sturms (1968), "Constants and Related Information for Astrodynamic Calculations", NASA Technical Report 32-1306, Jet Propulsion Laboratory, July 15, 1968.
L V Morrison, F R Stephenson (2004), "Historical values of the Earth's clock error ΔT and the calculation of eclipses", Journal for the History of Astronomy (), vol. 35(3) (2004), #120, pp. 327–336 (with addendum at vol. 36, p. 339).
Simon Newcomb (1895), Tables of the Sun ("Tables of the Motion of the Earth on its Axis and Around the Sun", in "Tables of the Four Inner Planets", vol. 6, part 1, of Astronomical Papers prepared for the use of the American Ephemeris and Nautical Almanac (1895), at pages 1–169).
W de Sitter (1927), "On the secular accelerations and the fluctuations of the longitudes of the moon, the sun, Mercury and Venus", Bull. Astron. Inst. Netherlands, vol. 4 (1927), pages 21–38.
H Spencer Jones, "The Rotation of the Earth, and the Secular Accelerations of the Sun, Moon and Planets", in Monthly Notes of the Royal Astronomical Society, vol. 99 (1939), pp. 541–558.
E M Standish, "Time scales in the JPL and CfA ephemerides", Astronomy & Astrophysics, vol. 336 (1998), 381–384.
F R Stephenson, L V Morrison (1984), "Long-term changes in the rotation of the earth – 700 B.C. to A.D. 1980", (Royal Society, Discussion on Rotation in the Solar System, London, England, Mar. 8, 9, 1984) Royal Society (London), Philosophical Transactions, Series A (), vol. 313 (1984), #1524, pp. 47–70.
F R Stephenson, L V Morrison (1995), "Long-Term Fluctuations in the Earth's Rotation: 700 BC to AD 1990", Royal Society (London), Philosophical Transactions, Series A (), vol. 351 (1995), #1695, pp. 165–202.
G M R Winkler and T C van Flandern (1977), "Ephemeris Time, relativity, and the problem of uniform time in astronomy", Astronomical Journal, vol. 82 (Jan. 1977), pp. 84–92.
Time scales
Time in astronomy | Ephemeris time | [
"Physics",
"Astronomy"
] | 5,127 | [
"Time in astronomy",
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
10,008 | https://en.wikipedia.org/wiki/Electrode | An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries that can consist of a variety of materials (chemicals) depending on the type of battery.
Michael Faraday coined the term "electrode" in 1833; the word recalls the Greek ἤλεκτρον (ēlektron, "amber") and ὁδός (hodós, "path, way").
The electrophore, invented by Johan Wilcke in 1762, was an early version of an electrode used to study static electricity.
Anode and cathode in electrochemical cells
Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuation in the voltage provided by the voltaic cell, it was not very practical. The first practical battery was invented in 1839 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then, many more batteries have been developed using various materials. The basis of all these is still using two electrodes, anodes and cathodes.
Anode (-)
'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards', and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current flows towards it; from both it can be concluded that the charge of the anode is negative. The electrons entering the anode come from the oxidation reaction that takes place next to it.
Cathode (+)
The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards', and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place: the electrons arriving from the wire connected to the cathode are absorbed by the oxidizing agent.
Primary cell
A primary cell is a battery designed to be used once and then discarded, because the electrochemical reactions taking place at its electrodes are not reversible. An example of a primary cell is the disposable alkaline battery commonly used in flashlights, consisting of a zinc anode and a manganese dioxide cathode, in which ZnO is formed during discharge.
The half-reactions are:
Zn(s) + 2OH−(aq) → ZnO(s) + H2O(l) + 2e− [E0oxidation = -1.28 V]
2MnO2(s) + H2O(l) + 2e− → Mn2O3(s) + 2OH−(aq) [E0reduction = +0.15 V]
Overall reaction:
Zn(s) + 2MnO2(s) → ZnO(s) + Mn2O3(s) [E0total = +1.43 V]
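As a check on the quoted cell voltage: taking the −1.28 V of the zinc half-reaction as the standard reduction potential of the Zn/ZnO couple, the half-cell potentials combine in the usual way:

E0total = E0reduction(cathode) − E0reduction(anode) = +0.15 V − (−1.28 V) = +1.43 V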
The ZnO is prone to clumping and will give a less efficient discharge if the battery is recharged. It is possible to recharge these batteries, but the manufacturer advises against it due to safety concerns. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide.
Secondary cell
Contrary to the primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still the most widely used in automobiles, among other applications. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion. The last of these is explained more thoroughly in this article due to its importance.
Marcus' theory of electron transfer
Marcus theory, originally developed by Nobel laureate Rudolph A. Marcus, explains the rate at which an electron can move from one chemical species to another; for this article, this can be seen as the electron 'jumping' from the electrode to a species in the solvent, or vice versa.
We can represent the problem as calculating the transfer rate for an electron moving from a donor to an acceptor
D + A → D+ + A−
The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates; the abscissa of the figure represents these. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy are assumed, by finding the point of intersection (Qx). One important thing to note, as Marcus did when he came up with the theory, is that the electron transfer must abide by the law of conservation of energy and the Franck–Condon principle.
Doing this and then rearranging leads to the expression of the free energy of activation, ΔG‡, in terms of the overall free energy of the reaction, ΔG°:
ΔG‡ = (λ + ΔG°)² / 4λ
in which λ is the reorganisation energy.
Filling this result into the classically derived Arrhenius equation
k = A exp(−ΔG‡/kBT)
leads to
kET = A exp(−(λ + ΔG°)²/4λkBT)
with A being the pre-exponential factor, which is usually determined experimentally, although a semi-classical derivation provides more information, as explained below.
This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the condition −ΔG° = λ. For a more extensive mathematical treatment one can read the paper by Newton; for an interpretation of this result and a closer look at its physical meaning, one can read the paper by Marcus.
The situation at hand can be described more accurately using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed to explain why electron transfers still occur even near zero kelvin, in contradiction to the classical theory.
Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. It is possible to look at the overlap in the wavefunctions of both the reactants and the products (the right and the left side of the chemical reaction), and therefore at when their energies are the same and electron transfer is allowed. As touched on before, this must happen because only then is conservation of energy abided by. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula:
kET = (2π/ħ) |HAB|² F
with HAB being the electronic coupling constant describing the interaction between the two states (reactants and products) and F being the line shape function. Taking the classical limit of this expression, meaning ħω ≪ kBT, and making some substitutions, an expression is obtained very similar to the classically derived formula, as expected.
The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor A. One is once again referred to the sources listed below for a more in-depth and rigorous mathematical derivation and interpretation.
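A minimal numerical sketch of the classical Marcus expression, with an arbitrary pre-exponential factor and an illustrative reorganisation energy of 0.75 eV (both assumptions, not values from the text), showing the rate maximum at −ΔG° = λ and the "inverted region" beyond it:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def marcus_rate(dG0, lam, A=1.0, T=298.0):
    """Classical Marcus rate: A * exp(-(lam + dG0)^2 / (4*lam*kB*T))."""
    dG_act = (lam + dG0) ** 2 / (4.0 * lam)
    return A * math.exp(-dG_act / (K_B * T))

lam = 0.75  # illustrative reorganisation energy, eV
for dG0 in (-0.25, -0.50, -0.75, -1.00, -1.25):
    print(f"dG0 = {dG0:+.2f} eV -> relative rate {marcus_rate(dG0, lam):.2e}")
# The rate peaks where dG0 = -lam (-0.75 eV here) and falls again for
# more negative dG0: the Marcus inverted region.
```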
Efficiency
The physical properties of electrodes are mainly determined by the material of the electrode and its topology. The properties required depend on the application, and therefore there are many kinds of electrodes in circulation. The defining requirement for a material to be used as an electrode is that it be conductive. Any conducting material such as a metal, a semiconductor, graphite, or a conductive polymer can therefore be used as an electrode. Electrodes often consist of a combination of materials, each with a specific task. Typical constituents are the active materials, which serve as the particles that are oxidized or reduced; conductive agents, which improve the conductivity of the electrode; and binders, which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties; important quantities are the self-discharge time, the discharge voltage, and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are the electrical resistivity, the specific heat capacity (cp), the electrode potential, and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below.
Surface effects
The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance.
Manufacturing
The production of electrodes for Li-ion batteries is done in various steps as follows:
The various constituents of the electrode are mixed into a solvent. This mixture is designed such that it improves the performance of the electrodes. Common components of this mixture are:
The active electrode particles.
A binder used to contain the active electrode particles.
A conductive agent used to improve the conductivity of the electrode.
The mixture created is known as an ‘electrode slurry’.
The electrode slurry above is coated onto a conductor which acts as the current collector in the electrochemical cell. Typical current collectors are copper for the anode and aluminum for the cathode.
After the slurry has been applied to the conductor it is dried and then pressed to the required thickness.
Structure of the electrode
For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are:
Clustering of the active material and the conductive agent. In order for all the components of the slurry to perform their task, they should all be spread out evenly within the electrode.
An even distribution of the conductive agent over the active material. This makes sure that the conductivity of the electrode is optimal.
The adherence of the electrode to the current collectors. The adherence makes sure that the electrode does not dissolve into the electrolyte.
The density of the active material. A balance should be found between the amount of active material, the conductive agent, and the binder. Since the active material is the key component of the electrode, the slurry should be designed such that the density of the active material is as high as possible, while the conductive agent and the binder still function properly.
These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done.
Electrodes in lithium ion batteries
A modern application of electrodes is in lithium-ion batteries (Li-ion batteries).
Furthermore, a Li-ion battery is an example of a secondary cell since it is rechargeable. It can both act as a galvanic or electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte which are dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of the Li-ion batteries are their anodes and cathodes, therefore much research is being done into increasing the efficiency, safety and reducing the costs of these electrodes specifically.
Cathodes
In Li-ion batteries, the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt; another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages of cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage, and high cycle durability. There are, however, also drawbacks to cobalt-based compounds, such as their high cost and their low thermostability. Manganese has similar advantages and a lower cost, but there are some problems associated with using it. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason, cobalt is still the most common element used in the lithium compounds. Much research is being done into finding new materials which can be used to create cheaper and longer-lasting Li-ion batteries. For example, Chinese and American researchers have demonstrated that ultralong single-wall carbon nanotubes significantly enhance lithium iron phosphate cathodes. By creating a highly efficient conductive network that securely binds lithium iron phosphate particles, adding carbon nanotubes as a conductive additive at a dosage of just 0.5 wt.% helps cathodes to achieve a remarkable rate capacity of 161.5 mAh g−1 at 0.5 C and 130.2 mAh g−1 at 5 C, while maintaining 87.4% capacity retention after 200 cycles at 2 C.
Anodes
The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to their low cost, longevity, and high energy density. However, graphite presents issues of dendrite growth, which risks shorting the battery and poses a safety issue. Li4Ti5O12 has the second-largest market share of anodes, due to its stability and good rate capability, but it faces challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, and silicon became one of the decade's most promising candidates for future lithium-ion battery anodes. Silicon has one of the highest gravimetric capacities compared to graphite and Li4Ti5O12, as well as a high volumetric capacity. Furthermore, silicon has the advantage of operating under a reasonable open-circuit voltage without parasitic lithium reactions. However, silicon anodes have a major issue: a volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem, scientists have looked into varying the dimensionality of the Si; many studies have examined Si nanowires, Si tubes, and Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, the technology began reaching commercial levels, with factories being built for mass production of anodes in the United States. Furthermore, metallic lithium is another possible candidate for the anode. It boasts a higher specific capacity than silicon, but comes with the drawback of working with the highly unstable metallic lithium. As with graphite anodes, dendrite formation is a major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge while being the lightest. In recent years, researchers have conducted several studies on the use of single-wall carbon nanotubes (SWCNTs) as conductive additives. These SWCNTs help to preserve electron conduction, ensure stable electrochemical reactions, and maintain uniform volume changes during cycling, effectively reducing anode pulverization.
Mechanical properties
A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system's container, leading to poor conductivity and electrolyte leakage. However, the relevance of the mechanical properties of electrodes goes beyond resistance to collisions. During standard operation, the incorporation of ions into electrodes leads to a change in volume; this is well exemplified by Si electrodes in lithium-ion batteries, which expand around 300% during lithiation. Such a change may lead to deformations in the lattice and, therefore, stresses in the material. The origin of the stresses may be geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning, as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long-lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is nanoindentation. The method is able to analyze how the stresses evolve during the electrochemical reactions, making it a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry.
More than just affecting the electrode's morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-ion batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress.
In this relation,
μ = μ° + kT ln(γx) − Ωσ,
μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on chemical potential, is impacted by the added stress and therefore changes the battery's performance. Furthermore, mechanical stresses may also impact the electrode's solid-electrolyte-interphase layer, the interface which regulates ion and charge transfer and which can be degraded by stress. Thus, more ions in the solution will be consumed to reform it, diminishing the overall efficiency of the system.
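To get a feel for the size of the stress term Ωσ relative to thermal energy, a small sketch with purely illustrative numbers (a partial molar volume of order 10^-29 m³ per ion and a 1 GPa mean stress; neither value is from the text):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602e-19       # joules per electronvolt

Omega = 1.0e-29      # illustrative partial molar volume, m^3 per ion
sigma = 1.0e9        # illustrative mean stress, Pa (1 GPa)

stress_term = Omega * sigma          # magnitude of the Omega*sigma term
print(f"stress term : {stress_term:.2e} J = {stress_term/EV*1e3:.0f} meV")
print(f"kT at 298 K : {K_B*298:.2e} J = {K_B*298/EV*1e3:.0f} meV")
# ~62 meV vs ~26 meV: a GPa-scale stress shifts the chemical potential
# by a couple of kT, enough to alter diffusion noticeably.
```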
Other anodes and cathodes
In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid.
In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving.
Welding electrodes
In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode.
Alternating current electrodes
For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second.
Chemically modified electrodes
Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation.
Uses
Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include:
Electrodes for fuel cells
Electrodes for medical purposes, such as EEG (for recording brain activity), ECG (recording heart beats), ECT (electrical brain stimulation), defibrillator (recording and delivering cardiac stimulation)
Electrodes for electrophysiology techniques in biomedical research
Electrodes for execution by the electric chair
Electrodes for electroplating
Electrodes for arc welding
Electrodes for cathodic protection
Electrodes for grounding
Electrodes for chemical analysis using electrochemical methods
Nanoelectrodes for high-precision measurements in nanoelectrochemistry
Inert electrodes for electrolysis (made of platinum)
Membrane electrode assembly
Electrodes for Taser electroshock weapon
See also
Reference electrode
Gas diffusion electrode
Cellulose electrode
Anion vs. Cation
Electron versus electron hole
Electron microscope
Tafel equation
Hot cathode
Cold cathode
Reversible charge injection limit
References
Further reading
Electricity | Electrode | [
"Chemistry"
] | 4,605 | [
"Electrochemistry",
"Electrodes"
] |
10,046 | https://en.wikipedia.org/wiki/Erie%20Canal | The Erie Canal is a historic canal in upstate New York that runs east–west between the Hudson River and Lake Erie. Completed in 1825, the canal was the first navigable waterway connecting the Atlantic Ocean to the Great Lakes, vastly reducing the costs of transporting people and goods across the Appalachians. The Erie Canal accelerated the settlement of the Great Lakes region, the westward expansion of the United States, and the economic ascendancy of New York state. It has been called "The Nation's First Superhighway".
A canal from the Hudson River to the Great Lakes was first proposed in the 1780s, but a formal survey was not conducted until 1808. The New York State Legislature authorized construction in 1817. Political opponents of the canal (referencing its lead supporter New York Governor DeWitt Clinton) denigrated the project as "Clinton's Folly" and "Clinton's Big Ditch". Nonetheless, the canal saw quick success upon opening on October 26, 1825, with toll revenue covering the state's construction debt within the first year of operation. The westward connection gave New York City a strong advantage over all other U.S. ports and brought major growth to canal cities such as Albany, Utica, Syracuse, Rochester, and Buffalo.
The construction of the Erie Canal was a landmark civil engineering achievement in the early history of the United States. When built, the canal was the second-longest in the world after the Grand Canal in China. Initially wide and deep, the canal was expanded several times, most notably from 1905 to 1918 when the "Barge Canal" was built and over half the original route was abandoned. The modern Barge Canal measures long, wide, and deep. It has 34 locks, including the Waterford Flight, the steepest locks in the United States. When leaving the canal, boats must also traverse the Black Rock Lock to reach Lake Erie or the Troy Federal Lock to reach the tidal Hudson. The overall elevation difference is about .
The Erie's peak year was 1855, when 33,000 commercial shipments took place. It continued to be competitive with railroads until about 1902, when tolls were abolished. Commercial traffic declined heavily in the latter half of the 20th century due to competition from trucking and the 1959 opening of the larger St. Lawrence Seaway. The canal's last regularly scheduled hauler, the Day Peckinpaugh, ended service in 1994.
Today, the Erie Canal is mainly used by recreational watercraft. It connects the three other canals in the New York State Canal System: the Champlain, Oswego, and Cayuga–Seneca. Some long-distance boaters take the Erie as part of the Great Loop. The canal has also become a tourist attraction in its own right—several parks and museums are dedicated to its history. The New York State Canalway Trail is a popular cycling path that follows the canal across the state. In 2000, Congress designated the Erie Canalway National Heritage Corridor to protect and promote the system.
Ambiguity in name
The waterway today referred to as the Erie Canal is quite different from the nineteenth-century Erie Canal. More than half of the original Erie Canal was destroyed or abandoned during construction of the New York State Barge Canal in the early 20th century. The sections of the original route remaining in use were widened significantly, mostly west of Syracuse, with bridges rebuilt and locks replaced. It was called the Barge Canal at the time, but that name fell into disuse with the disappearance of commercial traffic and the increase of recreational travel in the later 20th century.
History
Background
Before railroads, water transport was the most cost-effective way to ship bulk goods. A mule can carry only a modest load on its back but can draw a barge weighing many times more along a towpath. In total, a canal could cut transport costs by about 95 percent.
In the early years of the United States, transportation of goods between the coastal ports and the interior was slow and difficult. Close to the seacoast, rivers provided easy inland transport up to the fall line, since floating vessels encounter much less friction than land vehicles. However, the Appalachian Mountains were a great obstacle to further transportation or settlement, stretching from Maine to Alabama, with just five places where mule trains or wagon roads could be routed. Passengers and freight bound for the western parts of the country had to travel overland, a journey made more difficult by the rough condition of the roads. In 1800, it typically took 2½ weeks to travel overland from New York to Cleveland, Ohio, and 4 weeks to Detroit.
The principal exportable product of the Ohio Valley was grain, which was a high-volume, low-priced commodity, bolstered by supplies from the coast. Frequently it was not worth the cost of transporting it to far-away population centers. This was a factor leading to farmers in the west turning their grains into whiskey for easier transport and higher sales, and later the Whiskey Rebellion. In the 18th and early 19th centuries, it became clear to coastal residents that the city or state that succeeded in developing a cheap, reliable route to the West would enjoy economic success, and the port at the seaward end of such a route would see business increase greatly. In time, projects were devised in Virginia, Maryland, Pennsylvania, and relatively deep into the coastal states.
Topography
The Mohawk River (a tributary of the Hudson River) rises near Lake Ontario and runs in a glacial meltwater channel just north of the Catskill range of the Appalachian Mountains, separating them from the geologically distinct Adirondacks to the north. The Mohawk and Hudson valleys form the only cut across the Appalachians north of Alabama. A navigable canal through the Mohawk Valley would allow an almost complete water route from New York City in the south to Lake Ontario and Lake Erie in the west. Via the canal and these lakes, other Great Lakes, and to a lesser degree, related rivers, a large part of the continent's interior (and many settlements) would be made well connected to the Eastern seaboard.
Conception
Among the first attempts made by European colonists to improve upon the future state's navigable waterways was the construction in 1702 of the Wood Creek Carry, or Oneida Carry, a short portage road connecting Wood Creek to the Mohawk River near modern-day Rome, New York. However, the first documented instance of the idea of a canal to tie the East Coast to the new western settlements via New York's waterways was discussed as early as 1724: New York provincial official Cadwallader Colden made a passing reference (in a report on fur trading) to improving the natural waterways of western New York. Colden and subsequent figures in the history of the Erie Canal and its development would draw inspiration from other great works of the so-called "canal age," including France's Canal du Midi and the Bridgewater Canal in England. The attempt in the 1780s by George Washington to build a canal from the tidewaters of the Potomac into the fledgling nation's interior was also well known to the planners of the Erie Canal.
Gouverneur Morris and Elkanah Watson were early proponents of a canal along the Mohawk River. Their efforts led to the creation of the "Western and Northern Inland Lock Navigation Companies" in 1792, which took the first steps to improve navigation on the Mohawk and construct a canal between the Mohawk and Lake Ontario, but it was soon discovered that private financing was insufficient. Christopher Colles, who was familiar with the Bridgewater Canal, surveyed the Mohawk Valley, and made a presentation to the New York state legislature in 1784, proposing a shorter canal from Lake Ontario. The proposal drew attention and some action but was never implemented.
Jesse Hawley had envisioned encouraging the growing of large quantities of grain on the western New York plains (then largely unsettled) for sale on the Eastern seaboard. However, he went bankrupt trying to ship grain to the coast. While in Canandaigua debtors' prison, Hawley began pressing for the construction of a canal along the Mohawk River valley with support from Joseph Ellicott (agent for the Holland Land Company in Batavia). Ellicott realized that a canal would add value to the land he was selling in the western part of the state. He later became the first canal commissioner.
New York legislators became interested in the possibility of building a canal across New York in the first decade of the 19th century. Shipping goods west from Albany was a costly and tedious affair; there was no railroad yet, and to cover the distance from Buffalo to New York City by stagecoach took two weeks. The problem was that the land rises about 600 feet (180 m) from the Hudson to Lake Erie. Locks at the time could handle up to 12 feet (3.7 m) of lift, so even with the heftiest cuttings and viaducts, fifty locks would be required along the canal. Such a canal would be expensive to build even with modern technology; in 1800, the expense was barely imaginable. President Thomas Jefferson called it "little short of madness" and rejected it.
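The arithmetic behind that lock estimate, using the figures above, is a one-line check:

$$\frac{600\ \text{ft total rise}}{12\ \text{ft of lift per lock}} = 50\ \text{locks}.$$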
Eventually, Hawley interested New York Governor DeWitt Clinton in the project. There was much opposition, and the project was ridiculed as "Clinton's folly" and "Clinton's ditch". In 1817, though, Clinton received approval from the legislature for $7 million for construction.
Construction
The original canal was 363 miles (584 km) long, from Albany on the Hudson to Buffalo on Lake Erie. The channel was cut 40 feet (12 m) wide and 4 feet (1.2 m) deep, with removed soil piled on the downhill side to form a walkway known as a towpath. Its construction, through limestone and mountains, proved a daunting task. To move earth, animals pulled a "slip scraper" (similar to a bulldozer). The sides of the canal were lined with stone set in clay, and the bottom was also lined with clay. The Canal was built by Irish laborers and German stonemasons. All labor on the canal depended upon human and animal power or the force of water. Engineering techniques developed during its construction included the building of aqueducts to carry the canal over rivers and to redirect water. As the canal progressed, the crews and engineers working on the project developed expertise and became a skilled labor force.
The men who planned and oversaw construction were novices as surveyors and as engineers. There were no civil engineers in the United States. James Geddes and Benjamin Wright, who laid out the route, were judges whose experience in surveying was in settling boundary disputes. Geddes had only used a surveying instrument for a few hours before his work on the Canal. Canvass White was a 27-year-old amateur engineer who persuaded Clinton to let him go to Britain at his own expense to study the canal system there. Nathan Roberts was a mathematics teacher and land speculator. Yet these men "carried the Erie Canal up the Niagara escarpment at Lockport, maneuvered it onto a towering embankment to cross over Irondequoit Creek, spanned the Genesee River on an awesome aqueduct, and carved a route for it out of the solid rock between Little Falls and Schenectady—and all of those venturesome designs worked precisely as planned".
Construction began on July 4, 1817, at Rome, New York. The first 15 miles (24 km), from Rome to Utica, opened in 1819. At that rate, the canal would not be finished for 30 years. The main delays were caused by felling trees to clear a path through virgin forest and moving excavated soil, which took longer than expected, but the builders devised ways to solve these problems. To fell a tree, they threw rope over the top branches and winched it down. They pulled out the stumps with an innovative stump puller. Two huge wheels were mounted loose on the ends of an axle. A third wheel, slightly smaller than the others, was fixed to the center of the axle. A chain was wrapped around the axle and hooked to the stump. A rope was wrapped around the center wheel and hooked to a team of oxen. The mechanical advantage (torque) obtained ripped the stumps out of the soil. Soil to be moved was shoveled into large wheelbarrows that were dumped into mule-pulled carts. Using a scraper and a plow, a three-man team with oxen, horses and mules could build a mile in a year.
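The stump puller's pulling power follows from the wheel-and-axle principle: the oxen's rope acts at the radius of the large center wheel while the chain winds on the much smaller axle. With purely illustrative dimensions (not historical measurements), the force multiplication would be

$$\text{mechanical advantage} = \frac{R_{\text{wheel}}}{r_{\text{axle}}} \approx \frac{6\ \text{ft}}{0.5\ \text{ft}} = 12,$$

so a team pulling on the rope could exert roughly a dozen times its own force on the stump.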
The remaining problem was finding labor; increased immigration helped fill the need. Many of the laborers working on the canal were Irish, who had recently come to the United States as a group of about 5,000. Most of them were Roman Catholic, a religion that raised much suspicion in early America because of its hierarchic structure, and many laborers on the canal suffered violent assault as the result of misjudgment and xenophobia.
Construction continued at an increased rate as new workers arrived. When the canal reached Montezuma Marsh (at the outlet of Cayuga Lake west of Syracuse), it was rumored that over 1,000 workers died of "swamp fever" (malaria), and construction was temporarily stopped. However, recent research has revealed that the death toll was likely much lower, as no contemporary reports mention significant worker mortality, and mass graves from the period have never been found in the area. Work continued on the downhill side towards the Hudson, and the crews worked on the section across the swampland when it froze in winter.
The middle section from Utica to Salina (Syracuse) was completed in 1820, and traffic on that section started up immediately. Expansion to the east and west proceeded simultaneously, and the whole eastern section, from Brockport to Albany, opened on September 10, 1823, to great fanfare. The Champlain Canal, a separate but connected north–south route from Watervliet on the Hudson to Lake Champlain, opened on the same date.
After Montezuma Marsh, the next difficulties were crossing Irondequoit Creek and the Genesee River near Rochester. The former ultimately required building the "Great Embankment", which carried the canal high above the level of the creek, which ran through a culvert underneath. The canal crossed the river on a stone aqueduct 802 feet (244 m) long, supported by 11 arches.
In 1823 construction reached the Niagara Escarpment, a wall of hard dolomitic limestone. The route followed the channel of a creek that had cut a ravine steeply down the escarpment. The construction and operation of two side-by-side sets of five locks soon gave rise to the community of Lockport. The lift-locks had a total lift of about 60 feet (18 m), exiting into a deeply cut channel. The final leg had to be cut through another limestone mass, the Onondaga ridge. Much of that section was blasted with black powder, and the inexperience of the crews often led to accidents, and sometimes to rocks falling on nearby homes.
Two villages competed to be the terminus: Black Rock, on the Niagara River, and Buffalo, at the eastern tip of Lake Erie. Buffalo expended great energy to widen and deepen Buffalo Creek to make it navigable and to create a harbor at its mouth. Buffalo won over Black Rock, and grew into a large city, eventually encompassing its former rival.
Completion
In 1824, before the canal was completed, a detailed Pocket Guide for the Tourist and Traveler, Along the Line of the Canals, and the Interior Commerce of the State of New York, was published for the benefit of travelers and land speculators.
The entire canal was officially completed on October 26, 1825. The event was marked by a statewide "Grand Celebration", culminating in a series of cannon shots along the length of the canal and the Hudson, a 90-minute cannonade from Buffalo to New York City. A flotilla of boats, led by Governor DeWitt Clinton aboard Seneca Chief, sailed from Buffalo to New York City over ten days. Clinton then ceremonially poured Lake Erie water into New York Harbor to mark the "Wedding of the Waters". On its return trip, Seneca Chief brought back a keg of Atlantic Ocean water, which was poured into Lake Erie by Buffalo's Judge Samuel Wilkeson, who would later become mayor.
The Erie Canal was thus completed in eight years at a total length of 363 miles (584 km) and a cost of $7.143 million. It was acclaimed as an engineering marvel that united the country and helped New York City develop as an international trade center.
Problems developed but were quickly solved. Leaks developed along the entire length of the canal, but these were sealed using cement that hardened underwater (hydraulic cement). Erosion on the clay bottom proved to be a problem, and boat speed was limited to 4 miles per hour (6.4 km/h).
Branch canals
Additional feeder canals soon extended the Erie Canal into a system. These included the Cayuga-Seneca Canal south to the Finger Lakes, the Oswego Canal from Three Rivers north to Lake Ontario at Oswego, and the Champlain Canal from Troy north to Lake Champlain. From 1833 to 1877, the short Crooked Lake Canal connected Keuka Lake and Seneca Lake. The Chemung Canal connected the south end of Seneca Lake to Elmira in 1833, and was an important route for Pennsylvania coal and timber into the canal system. The Chenango Canal in 1836 connected the Erie Canal at Utica to Binghamton and caused a business boom in the Chenango River valley. The Chenango and Chemung canals linked the Erie with the Susquehanna River system. The Black River Canal connected the Black River to the Erie Canal at Rome and remained in operation until the 1920s. The Genesee Valley Canal was run along the Genesee River to connect with the Allegheny River at Olean, but the Allegheny section, which would have connected to the Ohio and Mississippi rivers, was never built. The Genesee Valley Canal was later abandoned and became the route of the Genesee Valley Canal Railroad.
First enlargement
The original design planned for an annual tonnage of 1.5 million tons (1.36 million metric tons), but this was exceeded immediately. An ambitious program to improve the canal began in 1834. During this massive series of construction projects, known as the First Enlargement, the canal was widened from 40 to 70 feet (12 to 21 m) and deepened from 4 to 7 feet (1.2 to 2.1 m). Locks were widened and/or rebuilt in new locations, and many new navigable aqueducts were constructed. The canal was straightened and slightly re-routed in some stretches, resulting in the abandonment of short segments of the original 1825 canal. The First Enlargement was completed in 1862, with further minor enlargements in later decades.
Railroad competition
The Mohawk and Hudson Railroad opened in 1837, providing a bypass to the slowest part of the canal between Albany and Schenectady. Other railroads were soon chartered and built to continue the line west to Buffalo, and in 1842 a continuous line (which later became the New York Central Railroad and its Auburn Road in 1853) was open the whole way to Buffalo. As the railroad served the same general route as the canal, but provided for faster travel, passengers soon switched to it. However, as late as 1852, the canal carried thirteen times more freight tonnage than all the railroads in New York State combined. The New York, West Shore and Buffalo Railway was completed in 1884, as a route running closely parallel to both the canal and the New York Central Railroad. However, it went bankrupt and was acquired the next year by the New York Central. The canal continued to compete well with the railroads through 1902, when tolls were abolished.
Barge Canal
In a November 3, 1903 referendum, a majority of New Yorkers authorized an expansion of the canal at a cost of $101,000,000.
In 1905, construction of the New York State Barge Canal began, which was completed in 1918, at a cost of $96.7 million.
This new canal replaced much of the original route, leaving many abandoned sections (most notably between Syracuse and Rome). New digging and flood control technologies allowed engineers to canalize rivers that the original canal had sought to avoid, such as the Mohawk, Seneca, and Clyde rivers, and Oneida Lake. In sections that did not consist of canalized rivers (particularly between Rochester and Buffalo), the original Erie Canal channel was substantially widened and deepened. The expansion allowed far larger barges to use the Canal. This expensive project was politically unpopular in parts of the state not served by the canal, and failed to save it from becoming obsolete for commercial shipping.
Commercial decline
Freight traffic reached a total of 5.2 million short tons (4.7 million metric tons) by 1951. The growth of railroads and highways across the state, and the opening of the St. Lawrence Seaway in 1959, caused commercial traffic on the canal to decline dramatically during the second half of the 20th century. Since the 1990s, the canal system has been used primarily by recreational traffic.
New York State Canal System
In 1992, the New York State Barge Canal was renamed the New York State Canal System (including the Erie, Cayuga-Seneca, Oswego, and Champlain canals) and placed under the newly created New York State Canal Corporation, a subsidiary of the New York State Thruway Authority. While part of the Thruway, the canal system was operated using money generated by Thruway tolls. In 2017, the New York State Canal Corporation was transferred from the New York State Thruway to the New York Power Authority.
In 2000, Congress designated the Erie Canalway National Heritage Corridor, covering 524 miles (843 km) of navigable water from Lake Champlain to the Capital Region and west to Buffalo. The area has a population of 2.7 million; about 75% of Central and Western New York's population lives within 25 miles (40 km) of the Erie Canal.
There were some 42 commercial shipments on the canal in 2008, compared to 15 such shipments in 2007 and more than 33,000 in 1855, the canal's peak year. The new growth in commercial traffic was attributed to the rising cost of diesel fuel: a gallon of fuel moves a short ton of cargo several times farther by barge than by train, and many times farther than by truck. Canal barges can also carry very heavy loads and are used to transport objects that would be too large for road or rail shipment. In 2012, the New York State Canal System as a whole was used to ship 42,000 tons of cargo.
Travel on the canal's middle section (particularly in the Mohawk Valley) was severely hampered by flooding in late June and early July 2006. Flood damage to the canal and its facilities was estimated as at least $15 million.
Route
Original Canal
The Erie made use of the favorable conditions of New York's unique topography, which provided that area with the only break in the Appalachians south of the St. Lawrence River. The Hudson is tidal to Troy, and Albany is west of the Appalachians. It allowed for east–west navigation from the coast to the Great Lakes within US territory.
The canal began on the west side of the Hudson River at Albany, and ran north to Watervliet, where the Champlain Canal branched off. At Cohoes, it climbed the escarpment on the west side of the Hudson River with a flight of 16 locks, and then turned west along the south shore of the Mohawk River, crossing to the north side at Crescent and again to the south at Rexford. The canal continued west near the south shore of the Mohawk River all the way to Rome, where the Mohawk turns north.
At Rome, the canal continued west parallel to Wood Creek, which flows westward into Oneida Lake, and turned southwest and west cross-country to avoid the lake. From Canastota west, it ran roughly along the north (lower) edge of the Onondaga Escarpment, passing through Syracuse, Onondaga Lake, and Rochester. Before reaching Rochester, the canal used a series of natural ridges to cross the deep valley of Irondequoit Creek. At Lockport the canal turned southwest to rise to the top of the Niagara Escarpment, using the ravine of Eighteen Mile Creek.
The canal continued south-southwest to Pendleton, where it turned west and southwest, mainly using the channel of Tonawanda Creek. From the Tonawanda south toward Buffalo, it ran just east of the Niagara River, where it reached its "Western Terminus" at Little Buffalo Creek (later it became the Commercial Slip), which discharged into the Buffalo River just above its confluence with Lake Erie. With Buffalo's re-excavation of the Commercial Slip, completed in 2008, the Canal's original terminus is now re-watered and again accessible by boats. With several miles of the Canal inland of this location still lying under 20th-century fill and urban construction, the effective western navigable terminus of the Erie Canal is found at Tonawanda.
Barge Canal
The new alignment began on the Hudson River at the border between Cohoes and Waterford, where it ran northwest with five locks (the so-called "Waterford Flight"), running into the Mohawk River east of Crescent. The Waterford Flight is claimed to be one of the steepest series of locks in the world.
While the old Canal ran next to the Mohawk all the way to Rome, the new canal ran through the river, which was straightened or widened where necessary. At Ilion, the new canal left the river for good, but continued to run on a new alignment parallel to both the river and the old canal to Rome. From Rome, the new route continued almost due west, merging with Fish Creek just east of its entry into Oneida Lake.
From Oneida Lake, the new canal ran west along the Oneida River, with cutoffs to shorten the route. At Three Rivers, the Oneida River turns northwest, and was deepened for the Oswego Canal to Lake Ontario. The new Erie Canal turned south there along the Seneca River, which turns west near Syracuse and continues west to a point in the Montezuma Marsh. There the Cayuga and Seneca Canal continued south with the Seneca River, and the new Erie Canal again ran parallel to the old canal along the bottom of the Niagara Escarpment, in some places running along the Clyde River, and in some places replacing the old canal. At Pittsford, southeast of Rochester, the canal turned west to run around the south side of Rochester, rather than through downtown. The canal crosses the Genesee River at the Genesee Valley Park, then rejoins the old path near North Gates.
From there it was again roughly an upgrade of the original canal, running west to Lockport. This reach from Henrietta to Lockport is called "the 60‑mile level" since there are no locks and the water level rises only slightly over the entire segment. Diversions from and to adjacent natural streams along the way are used to maintain the canal's level. It runs southwest to Tonawanda, where the new alignment discharges into the Niagara River, which is navigable upstream to the New York Barge Canal's Black Rock Lock and thence to the Canal's original "Western Terminus" at Buffalo's Inner Harbor.
Operations
Freight boats
Canal boats of shallow draft were pulled by horses and mules walking on the towpath. The canal had one towpath, generally on the north side. When canal boats met, the boat with the right of way remained on the towpath side of the canal. The other boat steered toward the berm (or heelpath) side of the canal. The driver (or "hoggee", pronounced HO-gee) of the privileged boat kept his towpath team by the canalside edge of the towpath, while the hoggee of the other boat moved to the outside of the towpath and stopped his team. His towline would be unhitched from the horses, go slack, fall into the water and sink to the bottom, while his boat coasted with its remaining momentum. The privileged boat's team would step over the other boat's towline, with its horses pulling the boat over the sunken towline without stopping. Once clear, the other boat's team would continue on its way.
Pulled by teams of horses, canal boats moved slowly but steadily, shrinking time and distance: the smooth, nonstop method of transportation, moving by day and by night, cut the travel time between Albany and Buffalo nearly in half. Migrants took passage on freight boats, camping on deck or on top of crates.
Passenger boats
Packet boats, serving passengers exclusively, traveled faster and ran at much more frequent intervals than the cramped, bumpy stagecoach wagons. These boats made ingenious use of limited space, accommodating up to 40 passengers at night and up to three times as many in the daytime. The best examples, furnished with carpeted floors, stuffed chairs, and mahogany tables stocked with books and current newspapers, served as sitting rooms during the days. At mealtimes, crews transformed the cabin into a dining room. Drawing a curtain across the width of the room divided the cabin into ladies' and gentlemen's sleeping quarters at night. Pull-down tiered beds folded from the walls, and additional cots could be hung from hooks in the ceiling. Some captains hired musicians and held dances.
Sunday closing debate
In 1858, the New York State Legislature debated closing the locks of the Erie Canal on Sundays. However, George Jeremiah and Dwight Bacheller, two of the bill's opponents, argued that the state had no right to stop canal traffic on the grounds that the Erie Canal and its tributaries had ceased to be wards of the state. The canal at its inception had been imagined as an extension of nature, an artificial river where there had been none. The canal succeeded by sharing more in common with lakes and seas than it had with public roads. Jeremiah and Bacheller argued, successfully, that just as it was unthinkable to halt oceangoing navigation on Sunday, so it was with the canal.
Impact
Economic impact
The Erie Canal greatly lowered the cost of shipping between the Midwest and the Northeast, bringing much lower food costs to Eastern cities and allowing the East to ship machinery and manufactured goods to the Midwest more economically. To give an example, the cost to transport a barrel of flour from Rochester to Albany dropped from $3 (before the canal) to 75¢ on the canal. The canal also made an immense contribution to the wealth and importance of New York City, Buffalo and New York State. Its impact went much further, increasing trade throughout the nation by opening eastern and overseas markets to Midwestern farm products and by enabling migration to the West. The port of New York became essentially the Atlantic home port for all of the Midwest. Because of this vital connection and others to follow, such as the railroads, New York would become known as the "Empire State" or "the great Empire State".
The Erie Canal was an immediate success. Tolls collected on freight had already exceeded the state's construction debt in its first year of official operation. By 1828, import duties collected at the New York Customs House supported federal government operations and provided funds for all the expenses in Washington except the interest on the national debt. Additionally, New York State's initial loan for the original canal had been paid by 1837. Although it had been envisioned as primarily a commercial channel for freight boats, passengers also traveled on the canal's packet boats. In 1825 more than 40,000 passengers took advantage of the convenience and beauty of canal travel. The canal's steady flow of tourists, businessmen and settlers lent it to uses never imagined by its initial sponsors. Evangelical preachers made their circuits of the upstate region, and the canal served as the last leg of the Underground Railroad ferrying freedom seekers to Buffalo near the Canada–US border. Aspiring merchants found that tourists were reliable customers. Vendors moved from boat to boat peddling items such as books, watches and fruit, while less scrupulous "confidence men" sold remedies for foot corns or passed off counterfeit bills. Tourists were carried along the "northern tour," which ultimately led to the popular honeymoon destination Niagara Falls, just north of Buffalo.
As the canal brought travelers to New York City, it took business away from other ports such as Philadelphia and Baltimore. Those cities and their states started projects to compete with the Erie Canal. In Pennsylvania, the Main Line of Public Works was a combined canal and railroad running west from Philadelphia to Pittsburgh on the Ohio River, opened in 1834. In Maryland, the Baltimore and Ohio Railroad ran west to Wheeling, West Virginia, then a part of Virginia, also on the Ohio River, and was completed in 1853.
The canal played a major role in the growth of Standard Oil, as founder John D. Rockefeller used the canal as a cheaper form of transportation – in the summer months when it was not frozen – to get his refined oil from Cleveland to New York City. In the winter months his only options were the three trunk lines: the Erie Railroad, the New York Central Railroad, or the Pennsylvania Railroad.
Migratory impact
New ethnic Irish communities formed in some towns along its route after completion, as Irish immigrants were a large portion of the construction labor force. A plaque honoring the canal's construction is located in Battery Park in southern Manhattan.
Because so many immigrants traveled on the canal, many genealogists have sought copies of canal passenger lists. Apart from the years 1827–1829, canal boat operators were not required to record passenger names or report them to the New York government. Some passenger lists survive today in the New York State Archives, and other sources of traveler information are sometimes available.
The canal allowed Buffalo to grow from just 200 settlers in 1820 to more than 18,000 people by 1840.
Cultural impact
The Canal also helped bind the still-new nation closer to Britain and Europe. Repeal of Britain's Corn Laws resulted in a huge increase in exports of Midwestern wheat to Britain. Trade between the United States and Canada also increased as a result of the repeal and a reciprocity (free-trade) agreement signed in 1854. Much of this trade flowed along the Erie.
Its success also prompted imitation: a rash of canal-building followed. Also, the many technical hurdles that had to be overcome made heroes of those whose innovations made the canal possible. This led to an increased public esteem for practical education. Chicago, among other Great Lakes cities, recognized the importance of the canal to its economy, and two West Loop streets are named "Canal" and "Clinton" (for canal proponent DeWitt Clinton).
Concern that erosion caused by logging in the Adirondacks could silt up the canal contributed to the creation in 1885 of another New York National Historic Landmark, the Adirondack Park.
Many notable authors wrote about the canal, including Herman Melville, Frances Trollope, Nathaniel Hawthorne, Harriet Beecher Stowe, Mark Twain, Samuel Hopkins Adams and the Marquis de Lafayette, and many tales and songs were written about life on the canal. The popular song "Low Bridge, Everybody Down" by Thomas S. Allen was written in 1905 to memorialize the canal's early heyday, when barges were pulled by mules rather than engines.
Consisting of a massive stone aqueduct that carried boats over incredible cascades, Little Falls was one of the most popular stops for American and foreign tourists. This is shown in Scene 4 of William Dunlap's play A Trip to Niagara, where he depicts the general preference of tourists to travel by canal so that they could experience a combination of artificial and natural sights. Canal travel was, for many, an opportunity to take in the sublime and commune with nature. The play also reflects the less enthusiastic view of some who saw movement on the canal as tedious.
The Erie Canal changed property law in New York. Most importantly, it expanded the government's right to take private property. Cases surrounding the newly built Erie Canal expanded condemnation theory to permit canal builders to appropriate private land and broadened the meaning of "public use" in the 5th Amendment to the U.S. Constitution. The canal also had an impact on water access jurisprudence as well as nuisance law.
The canal today
Today, the Erie Canal is used primarily by recreational vessels, though it remains served by several commercial barge-towing companies.
The canal is open to small craft and some larger vessels from May through November each year. During winter, water is drained from parts of the canal for maintenance. The Champlain Canal, Lake Champlain, and the Chambly Canal, and Richelieu River in Canada form the Lakes to Locks Passage, making a tourist attraction of the former waterway linking eastern Canada to the Erie Canal. In 2006 recreational boating fees were suspended to attract more visitors.
The Erie Canal is a destination for tourists from all over the world, and has inspired guidebooks dedicated to exploration of the waterway. An Erie Canal Cruise company, based in Herkimer, operates from mid-May until mid-October with daily cruises. The cruise goes through the history of the canal and also takes passengers through Lock 18.
Aside from transportation, numerous businesses, farms, factories and communities alongside its banks still utilize the canal's waters for other purposes such as irrigation for farmland, hydroelectricity, research, industry, and even drinking. Use of the canal system has an estimated total economic impact of $6.2 billion annually.
Old Erie Canal
Today, the reconfiguration of the canal created during the First Enlargement is commonly referred to as the "Improved Erie Canal" or the "Old Erie Canal", to distinguish it from the canal's modern-day course. Existing remains of the 1825 canal abandoned during the Enlargement are officially referred to today as "Clinton's Ditch" (which was also the popular nickname for the entire Erie Canal project during its original 1817–1825 construction).
Sections of the Old Erie Canal not used after 1918 are owned by New York State, or have been ceded to or purchased by counties or municipalities. Many stretches of the old canal have been filled in to create roads such as Erie Boulevard in Syracuse and Schenectady, and Broad Street and the Rochester Subway in Rochester. A 36‑mile (58 km) stretch of the old canal from the town of DeWitt, New York, east of Syracuse, to just outside Rome, New York, is preserved as the Old Erie Canal State Historic Park. In 1960 the Schoharie Crossing State Historic Site, a section of the canal in Montgomery County, was one of the first sites recognized as a National Historic Landmark.
Some municipalities have preserved sections as town or county canal parks, or have plans to do so. Camillus Erie Canal Park preserves a stretch and has restored Nine Mile Creek Aqueduct, built in 1841 as part of the First Enlargement of the canal. In some communities, the old canal has refilled with overgrowth and debris. Proposals have been made to rehydrate the old canal through downtown Rochester or Syracuse as a tourist attraction. In Syracuse, the location of the old canal is represented by a reflecting pool in downtown's Clinton Square and the downtown hosts a canal barge and weigh lock structure, now dry. Buffalo's Commercial Slip is the restored and re-watered segment of the canal which formed its "Western Terminus".
In 2004, the administration of New York Governor George Pataki was criticized when officials of New York State Canal Corporation attempted to sell private development rights to large stretches of the Old Erie Canal to a single developer for $30,000, far less than the land was worth on the open market. After an investigation by the Syracuse Post-Standard newspaper, the Pataki administration nullified the deal.
Parks and museums
Parks and museums related to the Old Erie Canal include (listed from east to west):
Day Peckinpaugh ship; restoration and conversion to a floating museum was planned for completion in 2012 by the New York State Museum
Watervliet Side Cut Locks, located at Watervliet and listed on the National Register of Historic Places in 1971
Enlarged Erie Canal Historic District (Discontiguous), a national historic district located at Cohoes, New York listed on the National Register of Historic Places in 2004
Cohoes Falls Park, 231 N. Mohawk St., Cohoes, New York, offers, looking away from the river, a dramatic view of abandoned and dry Erie Canal lock 18, high above.
Enlarged Double Lock No. 23, Old Erie Canal, Rotterdam
Schoharie Crossing State Historic Site at Fort Hunter
Old Erie Canal State Historic Park, 36-mile linear park from Rome to DeWitt
Erie Canal Village, near Rome
Canastota Canal Town Museum, Canastota
Chittenango Landing Canal Boat Museum, near Chittenango
Erie Canal Museum in downtown Syracuse
Camillus Erie Canal Park in Camillus
Jordan Canal Park in Jordan, town of Elbridge
Enlarged Double Lock No. 33 Old Erie Canal, St. Johnsville
Erie Canal Lock 52 Complex, a national historic district located within the Old Erie Canal Heritage Park at Port Byron and Mentz in Cayuga County; listed on the National Register of Historic Places in 1998
Seneca River Crossing Canals Historic District, a national historic district located at Montezuma and Tyre in Cayuga County; listed on the National Register of Historic Places in 2005
Centerport Aqueduct Park near Weedsport; listed on the National Register of Historic Places in 2000
Lock Berlin Park near Clyde
Macedon Aqueduct Park near Palmyra
Old Erie Canal Lock 60 Park in Macedon
Perinton Park in Perinton near Fairport
Genesee Valley Park in the city of Rochester
Spencerport Depot & Canal Museum, Spencerport
Niagara Escarpment "Flight of Five" locks at Lockport
Erie Canal Discovery Center, 24 Church Street, Lockport (Locks 34 and 35)
Canalside Buffalo at the Canal's "Western Terminus"
Erie Canalway Trail
Records and research
Records of the planning, funding, design, construction, and administration of the Erie Canal are vast and can be found in the New York State Archives. Except for two years (1827–1829), the State of New York did not require canal boat operators to maintain or submit passenger lists.
Locks
The following list of locks is provided for the current canal, from east to west. There are a total of 36 (35 numbered) locks on the Erie Canal.
All locks on the New York State Canal System are single-chamber; the chambers are 328 feet (100 m) long and 45 feet (13.7 m) wide, with a minimum depth of 12 feet (3.7 m) of water over the miter sills at the upstream gates upon lift. They can accommodate a vessel up to 300 feet (91 m) long and 43.5 feet (13.3 m) wide. Overall sidewall height varies by lock depending on the lift and navigable stages; Lock E17 at Little Falls, whose lift of about 40.5 feet (12.3 m) is the highest on the canal, has the tallest sidewalls.
Distance is based on position markers from an interactive canal map provided online by the New York State Canal Corporation and may not exactly match specifications on signs posted along the canal. Mean surface elevations are compiled from a combination of older canal profiles and history books as well as specifications on signs posted along the canal. The margin of error should normally be within a few feet.
The Waterford Flight series of locks (comprising Locks E2 through E6) is one of the steepest in the world, lifting boats about 170 feet (52 m) within a span of less than 2 miles (3.2 km).
All surface elevations are approximate.
Denotes federally managed locks.
There is a natural rise between locks E33 and E34 as well as a natural rise between Lock E35 and the Niagara River.
There is no Lock E1 or Lock E31 on the Erie Canal. The place of "Lock E1" on the passage from the lower Hudson River to Lake Erie is taken by the Troy Federal Lock, located just north of Troy, New York, which is not part of the Erie Canal System proper; it is operated by the United States Army Corps of Engineers. The Erie Canal officially begins at the confluence of the Hudson and Mohawk rivers at Waterford, New York.
Although the original alignment of the Erie Canal through Buffalo has been filled in, travel by water is still possible from Buffalo via the Black Rock Lock in the Niagara River to the canal's modern western terminus in Tonawanda, and eastward to Albany. The Black Rock Lock is operated by the United States Army Corps of Engineers.
Oneida Lake lies between locks E22 and E23, and has a mean surface elevation of about 370 feet (113 m). Lake Erie has a mean surface elevation of 571 feet (174 m).
See also
Robert C. Dorn
List of canals in New York
List of canals in the United States
"Low Bridge", a song written by Thomas S. Allen, also known as "The Erie Canal Song"
John C. Mather (New York politician)
Ohio and Erie Canal, connecting Lake Erie with the Ohio River
Welland Canal, opened in 1829, bypasses the Niagara River between Lake Erie and Lake Ontario
References
Further reading
External links
Erie & Barge Canal: A bibliography by the Buffalo History Museum.
Listing and index of maps, plans, profiles, pictures, and photographs of canals of New York State in annual reports of State Engineer and Surveyor through 1905
Erie Canal case study in Transition Times. Archived at Ghostarchive.
Information and Boater's Guide to the Erie Canal
Canalway Trail Information
Historical information (with photos) of the Erie Canal
Video showing the operations of Lock 22E in 2016.
New York State Canal Corporation Site
The Opening of the Erie Canal – An Online Exhibition by CUNY
New York Heritage online exhibit - Two Hundred Years on the Erie Canal
The Canal Society of New York State
Digging Clinton's Ditch: The Impact of the Erie Canal on America 1807–1860 Multimedia
A Glimpse at Clinton's Ditch, 1819–1820 by Richard F. Palmer
Guide to Canal Records in the New York State Archives
The Erie Canal Mapping Project
New York Heritage – Working on the Erie Canal
Photographs of the Erie Canal Relating to Fort Hunter, N.Y. Ca. 1910 (finding aid) at the New York State Library, accessed May 18, 2016.
William Jaeger's photography of the Canal remains. Archived at the Wayback Machine
American Society of Civil Engineers site- The Erie Canal was the world's longest canal and one of America's great engineering feats.
Newspaper articles and clippings about the Building of the Erie Canal at Newspapers.com
1821 establishments in New York (state)
Canals in New York (state)
Geography of Buffalo, New York
Historic American Buildings Survey in New York (state)
Historic American Engineering Record in New York (state)
Historic Civil Engineering Landmarks
Canals opened in 1825 | Erie Canal | [
"Engineering"
] | 9,401 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
10,065 | https://en.wikipedia.org/wiki/Empirical%20formula | In chemistry, the empirical formula of a chemical compound is the simplest whole number ratio of atoms present in a compound. A simple example of this concept is that the empirical formula of sulfur monoxide, or SO, is simply SO, as is the empirical formula of disulfur dioxide, S2O2. Thus, sulfur monoxide and disulfur dioxide, both compounds of sulfur and oxygen, have the same empirical formula. However, their molecular formulas, which express the number of atoms in each molecule of a chemical compound, are not the same.
An empirical formula makes no mention of the arrangement or number of atoms. It is standard for many ionic compounds, like calcium chloride (CaCl2), and for macromolecules, such as silicon dioxide (SiO2).
The molecular formula, on the other hand, shows the number of each type of atom in a molecule. The structural formula shows the arrangement of the molecule. It is also possible for different types of compounds to have equal empirical formulas.
In the early days of chemistry, information regarding the composition of compounds came from elemental analysis, which gives information about the relative amounts of elements present in a compound, which can be written as percentages or mole ratios. However, chemists were not able to determine the exact amounts of these elements and were only able to know their ratios, hence the name "empirical formula". Since ionic compounds are extended networks of anions and cations, all formulas of ionic compounds are empirical.
Examples
Glucose (C6H12O6), ribose (C5H10O5), acetic acid (C2H4O2), and formaldehyde (CH2O) all have different molecular formulas but the same empirical formula: CH2O. This is the actual molecular formula for formaldehyde, but acetic acid has double the number of atoms, ribose has five times the number of atoms, and glucose has six times the number of atoms.
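Reducing a molecular formula to its empirical formula is a greatest-common-divisor computation on the atom counts. The following minimal Python sketch (the function name and dictionary representation are choices made for this illustration, not a standard API) makes the reduction explicit:

```python
from functools import reduce
from math import gcd

def empirical_formula(atom_counts):
    """Divide every atom count by the GCD of all counts,
    e.g. {'C': 6, 'H': 12, 'O': 6} -> {'C': 1, 'H': 2, 'O': 1}, i.e. CH2O."""
    divisor = reduce(gcd, atom_counts.values())
    return {element: count // divisor for element, count in atom_counts.items()}

print(empirical_formula({'C': 6, 'H': 12, 'O': 6}))  # glucose      -> CH2O
print(empirical_formula({'C': 2, 'H': 4, 'O': 2}))   # acetic acid  -> CH2O
print(empirical_formula({'C': 1, 'H': 2, 'O': 1}))   # formaldehyde -> CH2O (already reduced)
```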
Calculation example
A chemical analysis of a sample of methyl acetate provides the following elemental data: 48.64% carbon (C), 8.16% hydrogen (H), and 43.20% oxygen (O). For the purposes of determining empirical formulas, it's assumed that we have 100 grams of the compound. If this is the case, the percentages will be equal to the mass of each element in grams.
Step 1: Change each percentage to an expression of the mass of each element in grams. That is, 48.64% C becomes 48.64 g C, 8.16% H becomes 8.16 g H, and 43.20% O becomes 43.20 g O.
Step 2: Convert the amount of each element in grams to its amount in moles by dividing by its molar mass: 48.64 g C ÷ 12.011 g/mol ≈ 4.05 mol C, 8.16 g H ÷ 1.008 g/mol ≈ 8.10 mol H, and 43.20 g O ÷ 15.999 g/mol ≈ 2.70 mol O
Step 3: Divide each of the resulting values by the smallest of these values (2.70): 4.05 ÷ 2.70 ≈ 1.5 for C, 8.10 ÷ 2.70 ≈ 3 for H, and 2.70 ÷ 2.70 = 1 for O
Step 4: If necessary, multiply these numbers by integers in order to get whole numbers; if an operation is done to one of the numbers, it must be done to all of them. Here, multiplying 1.5, 3, and 1 by 2 gives the whole numbers 3, 6, and 2.
Thus, the empirical formula of methyl acetate is C3H6O2. This formula also happens to be methyl acetate's molecular formula.
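The four steps above can likewise be scripted for analysis data given as mass percentages. This sketch makes the same assumptions as the worked example (a 100 g sample, so percentages equal grams); the rounding tolerance and multiplier cap are arbitrary choices for this illustration:

```python
from math import isclose

MOLAR_MASS = {'C': 12.011, 'H': 1.008, 'O': 15.999}  # g/mol

def empirical_from_percent(percents, max_multiplier=10):
    """percents: e.g. {'C': 48.64, 'H': 8.16, 'O': 43.20} for methyl acetate."""
    moles = {el: grams / MOLAR_MASS[el] for el, grams in percents.items()}  # step 2
    smallest = min(moles.values())
    ratios = {el: n / smallest for el, n in moles.items()}                  # step 3
    for k in range(1, max_multiplier + 1):                                  # step 4
        if all(isclose(k * r, round(k * r), abs_tol=0.05) for r in ratios.values()):
            return {el: round(k * r) for el, r in ratios.items()}
    raise ValueError("no small whole-number ratio found")

print(empirical_from_percent({'C': 48.64, 'H': 8.16, 'O': 43.20}))
# -> {'C': 3, 'H': 6, 'O': 2}, i.e. C3H6O2
```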
References
Chemical formulas
Analytical chemistry | Empirical formula | [
"Chemistry"
] | 632 | [
"Chemical formulas",
"Chemical structures",
"nan"
] |
10,103 | https://en.wikipedia.org/wiki/Electroweak%20interaction | In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10^15 K – then the electromagnetic force and weak force merge into a combined electroweak force.
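The quoted temperature is simply the unification energy scale divided by the Boltzmann constant ($k_B \approx 8.62 \times 10^{-5}\ \mathrm{eV/K}$):

$$T \sim \frac{E}{k_B} \approx \frac{246 \times 10^{9}\ \mathrm{eV}}{8.62 \times 10^{-5}\ \mathrm{eV/K}} \approx 3 \times 10^{15}\ \mathrm{K}.$$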
During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak force. It is thought that the required temperature of 10^15 K has not been seen widely throughout the universe since before the quark epoch, and currently the highest human-made temperature in thermal equilibrium is around 5.5×10^12 K (reached in heavy-ion collisions at the Large Hadron Collider).
Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel prize for showing that the electroweak theory is renormalizable.
History
After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding.
In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerard 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons.
Formulation
Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields $W_1$, $W_2$, and $W_3$, and the weak hypercharge field $B$.
This invariance is known as electroweak symmetry.
The generators of SU(2) and U(1) are given the name weak isospin (labeled $T$) and weak hypercharge (labeled $Y$) respectively. These then give rise to the gauge bosons that mediate the electroweak interactions – the three $W$ bosons of weak isospin ($W_1$, $W_2$, and $W_3$), and the $B$ boson of weak hypercharge, respectively, all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism.
In the Standard Model, the observed physical particles, the $W^\pm$ and $Z^0$ bosons, and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1)$_Y$ to U(1)$_{em}$, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom.
The electric charge arises as the particular linear combination (nontrivial) of $Y_W$ (weak hypercharge) and the component of weak isospin ($T_3$) that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other, at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is the specific combination $Q = T_3 + \tfrac{1}{2} Y_W$.
U(1)$_{em}$ (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)$_{em}$ group is unbroken, since it does not directly interact with the Higgs.
The above spontaneous symmetry breaking makes the $W_3$ and $B$ bosons coalesce into two different physical bosons with different masses – the $Z^0$ boson, and the photon ($\gamma$),

$$\begin{pmatrix} \gamma \\ Z^0 \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W_3 \end{pmatrix},$$
where $\theta_W$ is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the ($W_3$, $B$) plane, by the angle $\theta_W$. This also introduces a mismatch between the mass of the $Z^0$ and the mass of the $W^\pm$ particles (denoted as $m_Z$ and $m_W$, respectively),

$$m_Z = \frac{m_W}{\cos\theta_W}.$$
The $W_1$ and $W_2$ bosons, in turn, combine to produce the charged massive bosons $W^\pm$:

$$W^\pm = \frac{1}{\sqrt{2}}\bigl(W_1 \mp i\,W_2\bigr).$$
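As a rough numerical check of the tree-level mass relation above, using the measured values $m_W \approx 80.4\ \mathrm{GeV}$ and $\sin^2\theta_W \approx 0.23$ (radiative corrections shift the result slightly):

$$m_Z \approx \frac{80.4\ \mathrm{GeV}}{\sqrt{1 - 0.23}} \approx 91.6\ \mathrm{GeV},$$

in good agreement with the observed $m_Z \approx 91.2\ \mathrm{GeV}$.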
Lagrangian
Before electroweak symmetry breaking
The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest,

$$\mathcal{L}_{EW} = \mathcal{L}_g + \mathcal{L}_f + \mathcal{L}_h + \mathcal{L}_y.$$
The term $\mathcal{L}_g$ describes the interaction between the three $W$ vector bosons and the $B$ vector boson,

$$\mathcal{L}_g = -\frac{1}{4} W_{\mu\nu}^{a} W^{a\,\mu\nu} - \frac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where $W^{a\,\mu\nu}$ ($a = 1, 2, 3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge gauge fields.
$\mathcal{L}_f$ is the kinetic term for the Standard Model fermions. The interaction of the gauge bosons and the fermions is through the gauge covariant derivative,

$$\mathcal{L}_f = \bar{Q}_i\, i\slashed{D}\, Q_i + \bar{u}_i\, i\slashed{D}\, u_i + \bar{d}_i\, i\slashed{D}\, d_i + \bar{L}_i\, i\slashed{D}\, L_i + \bar{e}_i\, i\slashed{D}\, e_i,$$

where the subscript $i$ sums over the three generations of fermions; $Q$, $u$, and $d$ are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and $L$ and $e$ are the left-handed doublet and right-handed singlet electron fields.
The Feynman slash means the contraction of the 4-gradient with the Dirac matrices, defined as

$$\slashed{D} \equiv \gamma^{\mu} D_{\mu},$$

and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as

$$D_{\mu} \equiv \partial_{\mu} - i\,\frac{g'}{2}\, Y\, B_{\mu} - i\,\frac{g}{2}\, T_j\, W_{\mu}^{j}.$$

Here $Y$ is the weak hypercharge and the $T_j$ are the components of the weak isospin.
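As a concrete instance, in the convention $Q = T_3 + Y/2$ used above, a right-handed up quark is an SU(2) singlet ($T_j = 0$) with $Y = 4/3$, so the covariant derivative reduces to

$$D_{\mu} u_R = \left(\partial_{\mu} - i\,\tfrac{2}{3}\, g'\, B_{\mu}\right) u_R,$$

showing that right-handed fermions couple only to the hypercharge field.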
The term $\mathcal{L}_h$ describes the Higgs field $h$ and its interactions with itself and the gauge bosons,

$$\mathcal{L}_h = |D_{\mu} h|^{2} - \lambda\left(|h|^{2} - \frac{v^{2}}{2}\right)^{2},$$

where $v$ is the vacuum expectation value.
The term $\mathcal{L}_y$ describes the Yukawa interaction with the fermions,

$$\mathcal{L}_y = -\,y^{u}_{ij}\,\bar{Q}_i\,\tilde{h}\,u_j \;-\; y^{d}_{ij}\,\bar{Q}_i\,h\,d_j \;-\; y^{e}_{ij}\,\bar{L}_i\,h\,e_j \;+\; \mathrm{h.c.}$$

(with $\tilde{h} = i\sigma_2 h^{*}$ the conjugate Higgs doublet), and generates the fermion masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The $y^{f}_{ij}$, for $f = u, d, e$, are matrices of Yukawa couplings.
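After symmetry breaking, each Yukawa coupling turns into a fermion mass. Schematically, for a diagonal coupling $y_f$ and vacuum expectation value $v \approx 246\ \mathrm{GeV}$ (a standard textbook relation):

$$m_f = \frac{y_f\, v}{\sqrt{2}},$$

so, for example, the top quark mass $m_t \approx 173\ \mathrm{GeV}$ corresponds to a Yukawa coupling $y_t \approx 1$.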
After electroweak symmetry breaking
The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe was at a temperature of approximately 10^15 K (assuming the Standard Model of particle physics).
Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows.
The kinetic term contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking),

$$\mathcal{L}_K = \sum_f \bar{f}\left(i\slashed{\partial} - m_f\right) f - \frac{1}{4} A_{\mu\nu} A^{\mu\nu} - \frac{1}{2} W^{+}_{\mu\nu} W^{-\,\mu\nu} + m_W^{2}\, W^{+}_{\mu} W^{-\,\mu} - \frac{1}{4} Z_{\mu\nu} Z^{\mu\nu} + \frac{1}{2} m_Z^{2}\, Z_{\mu} Z^{\mu} + \frac{1}{2}\left(\partial_{\mu} H\right)\left(\partial^{\mu} H\right) - \frac{1}{2} m_H^{2} H^{2},$$

where the sum runs over all the fermions of the theory (quarks and leptons), and the fields $A_{\mu\nu}$, $Z_{\mu\nu}$, and $W^{\pm}_{\mu\nu}$ are given as

$$X_{\mu\nu} = \partial_{\mu} X_{\nu} - \partial_{\nu} X_{\mu} + g f^{abc} X^{b}_{\mu} X^{c}_{\nu},$$

with $X$ to be replaced by the relevant field ($A$, $Z$, $W^{\pm}$) and $f^{abc}$ by the structure constants of the appropriate gauge group.
The neutral current $\mathcal{L}_N$ and charged current $\mathcal{L}_C$ components of the Lagrangian contain the interactions between the fermions and gauge bosons,

$$\mathcal{L}_N = e\, J_{\mu}^{em} A^{\mu} + \frac{g}{\cos\theta_W}\left(J_{\mu}^{3} - \sin^{2}\theta_W\, J_{\mu}^{em}\right) Z^{\mu},$$

where $e = g\sin\theta_W = g'\cos\theta_W$ is the electromagnetic coupling. The electromagnetic current is

$$J_{\mu}^{em} = \sum_f q_f\, \bar{f}\gamma_{\mu} f,$$

where $q_f$ is the fermions' electric charges. The neutral weak current is

$$J_{\mu}^{3} = \sum_f T^{3}_{f}\, \bar{f}\gamma_{\mu} \frac{1 - \gamma^{5}}{2} f,$$

where $T^{3}_{f}$ is the fermions' weak isospin.
The charged current part of the Lagrangian is given by

$$\mathcal{L}_C = -\frac{g}{\sqrt{2}}\left[\bar{u}_i\, \gamma^{\mu} \frac{1 - \gamma^{5}}{2}\, M^{CKM}_{ij}\, d_j + \bar{\nu}_i\, \gamma^{\mu} \frac{1 - \gamma^{5}}{2}\, e_i\right] W^{+}_{\mu} + \mathrm{h.c.},$$

where $\nu$ is the right-handed singlet neutrino field, and the CKM matrix $M^{CKM}_{ij}$ determines the mixing between mass and weak eigenstates of the quarks.
The term $\mathcal{L}_H$ contains the Higgs three-point and four-point self interaction terms,

$$\mathcal{L}_H = -\frac{g\, m_H^{2}}{4\, m_W}\, H^{3} - \frac{g^{2}\, m_H^{2}}{32\, m_W^{2}}\, H^{4}.$$
The term $\mathcal{L}_{HV}$ contains the Higgs interactions with gauge vector bosons,

$$\mathcal{L}_{HV} = \left(g\, m_W\, H + \frac{g^{2}}{4}\, H^{2}\right)\left(W_{\mu}^{+} W^{-\,\mu} + \frac{1}{2\cos^{2}\theta_W}\, Z_{\mu} Z^{\mu}\right).$$
Further terms contain the gauge three-point and four-point self interactions among the $W^{\pm}$, $Z$, and photon fields.
Finally, the term $\mathcal{L}_{Y}$ contains the Yukawa interactions between the fermions and the Higgs field,

$$\mathcal{L}_{Y} = -\sum_f \frac{g\, m_f}{2\, m_W}\, \bar{f} f H.$$
See also
Electroweak star
Fundamental forces
History of quantum field theory
Standard Model (mathematical formulation)
Unitarity gauge
Weinberg angle
Yang–Mills theory
Notes
References
Further reading
General readers
Texts
Articles | Electroweak interaction | [
"Physics"
] | 1,943 | [
"Physical phenomena",
"Fundamental interactions",
"Electroweak theory"
] |
10,134 | https://en.wikipedia.org/wiki/Electromagnetic%20spectrum | The electromagnetic spectrum is the full range of electromagnetic radiation, organized by frequency or wavelength. The spectrum is divided into separate bands, with different names for the electromagnetic waves within each band. From low to high frequency these are: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The electromagnetic waves in each of these bands have different characteristics, such as how they are produced, how they interact with matter, and their practical applications.
Radio waves, at the low-frequency end of the spectrum, have the lowest photon energy and the longest wavelengths—thousands of kilometers, or more. They can be emitted and received by antennas, and pass through the atmosphere, foliage, and most building materials.
Gamma rays, at the high-frequency end of the spectrum, have the highest photon energies and the shortest wavelengths—much smaller than an atomic nucleus. Gamma rays, X-rays, and extreme ultraviolet rays are called ionizing radiation because their high photon energy is able to ionize atoms, causing chemical reactions. Longer-wavelength radiation such as visible light is nonionizing; the photons do not have sufficient energy to ionize atoms.
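The boundary between ionizing and nonionizing radiation can be estimated from the Planck relation. Taking roughly 10 eV as a typical ionization energy (an order-of-magnitude figure rather than a sharp universal threshold), the corresponding photon wavelength is

$$E = h\nu = \frac{hc}{\lambda} \quad\Rightarrow\quad \lambda \approx \frac{1240\ \mathrm{eV \cdot nm}}{10\ \mathrm{eV}} \approx 124\ \mathrm{nm},$$

which lies in the extreme ultraviolet, consistent with the division described above.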
Throughout most of the electromagnetic spectrum, spectroscopy can be used to separate waves of different frequencies, so that the intensity of the radiation can be measured as a function of frequency or wavelength. Spectroscopy is used to study the interactions of electromagnetic waves with matter.
History and discovery
Humans have always been aware of visible light and radiant heat but for most of history it was not known that these phenomena were connected or were representatives of a more extensive principle. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Light was intensively studied from the beginning of the 17th century leading to the invention of important instruments like the telescope and microscope. Isaac Newton was the first to use the term spectrum for the range of colours that white light could be split into with a prism. Starting in 1666, Newton showed that these colours were intrinsic to light and could be recombined into white light. A debate arose over whether light had a wave nature or a particle nature with René Descartes, Robert Hooke and Christiaan Huygens favouring a wave description and Newton favouring a particle description. Huygens in particular had a well developed theory from which he was able to derive the laws of reflection and refraction. Around 1801, Thomas Young measured the wavelength of a light beam with his two-slit experiment thus conclusively demonstrating that light was a wave.
In 1800, William Herschel discovered infrared radiation. He was studying the temperature of different colours by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions). These behaved similarly to visible violet light rays, but were beyond them in the spectrum. They were later renamed ultraviolet radiation.
The study of electromagnetism began in 1820 when Hans Christian Ørsted discovered that electric currents produce magnetic fields (Oersted's law). Light was first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s, James Clerk Maxwell developed four partial differential equations (Maxwell's equations) for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite range of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum.
Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio.
In 1895, Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called this radiation "x-rays" and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for this radiography.
The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900, Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he at first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta particles) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths.
The wave-particle debate was rekindled in 1901 when Max Planck discovered that light is absorbed only in discrete "quanta", now called photons, implying that light has a particle nature. This idea was made explicit by Albert Einstein in 1905, but never accepted by Planck and many other contemporaries. The modern position of science is that electromagnetic radiation has both a wave and a particle nature, the wave-particle duality. The contradictions arising from this position are still being debated by scientists and philosophers.
Range
Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from $2.4\times10^{23}\ \mathrm{Hz}$ (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency, so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be indefinitely long. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations (a short numerical check follows the definitions below):
$f = \frac{c}{\lambda}, \qquad E = hf = \frac{hc}{\lambda},$
where:
c is the speed of light in vacuum
h is the Planck constant.
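A minimal numerical check of these relations in Python; the constants are rounded CODATA values and the function name is illustrative, not from any source:

# Check f = c / lambda and E = h * f for a given vacuum wavelength.
c = 2.998e8        # speed of light in vacuum, m/s
h = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19     # joules per electronvolt

def photon_properties(wavelength_m):
    """Return (frequency in Hz, photon energy in eV) for a vacuum wavelength."""
    f = c / wavelength_m
    E_eV = h * f / eV
    return f, E_eV

print(photon_properties(0.2112))   # 21 cm hydrogen line: ~1.42e9 Hz, ~5.9e-6 eV
print(photon_properties(1e-9))     # 1 nm X-ray: ~3.0e17 Hz, ~1.2e3 eV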
Whenever electromagnetic waves travel in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, whatever medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated.
Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.
Spectroscopy can detect a much wider region of the EM spectrum than the visible wavelength range of 400 nm to 700 nm in a vacuum. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by and are important in the study of certain stellar nebulae, and frequencies as high as $2.9\times10^{27}\ \mathrm{Hz}$ have been detected from astrophysical sources.
Regions
The types of electromagnetic radiation are broadly classified into the following classes (regions, bands or types):
Gamma radiation
X-ray radiation
Ultraviolet radiation
Visible light (light that humans can see)
Infrared radiation
Microwave radiation
Radio waves
This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation.
There are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow. Radiation of each frequency and wavelength (or in each band) has a mix of properties of the two regions of the spectrum that bound it. For example, red light resembles infrared radiation, in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system.
In atomic and nuclear physics, the distinction between X-rays and gamma rays is based on sources: the photons generated from nuclear decay or other nuclear and subnuclear/particle processes are termed gamma rays, whereas X-rays are generated by electronic transitions involving energetically deep inner atomic electrons. Electronic transitions in muonic atoms are also said to produce X-rays. In astrophysics, energies below 100 keV are called X-rays and higher energies are gamma rays.
The region of the spectrum where electromagnetic radiation is observed may differ from the region it was emitted in due to relative velocity of the source and observer (the Doppler shift), relative gravitational potential (gravitational redshift), or expansion of the universe (cosmological redshift). For example, the cosmic microwave background, relic blackbody radiation from the era of recombination, started out at energies around 1 eV, but has since undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers on Earth; a rough numerical check follows.
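A rough Python check of the CMB example, assuming a recombination redshift of about z = 1100 (a standard value, but an assumption not stated in the text):

# Photon energy scales as 1/(1+z) under cosmological redshift.
z = 1100                                 # approximate redshift of recombination (assumed)
E_emitted_eV = 1.0                       # typical photon energy at recombination, eV
E_observed_eV = E_emitted_eV / (1 + z)   # ~9e-4 eV today
wavelength_m = 1240e-9 / E_observed_eV   # lambda(nm) ~ 1240 / E(eV), converted to m
print(E_observed_eV, wavelength_m)       # ~9e-4 eV, ~1.4e-3 m: the microwave band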
Rationale for names
Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons arising from these qualitative interaction differences.
Types of radiation
Radio waves
Radio waves are emitted and received by antennas, which consist of conductors such as metal rod resonators. In artificial generation of radio waves, an electronic device called a transmitter generates an alternating electric current which is applied to an antenna. The oscillating electrons in the antenna generate oscillating electric and magnetic fields that radiate away from the antenna as radio waves. In reception of radio waves, the oscillating electric and magnetic fields of a radio wave couple to the electrons in an antenna, pushing them back and forth, creating oscillating currents which are applied to a radio receiver. Earth's atmosphere is mainly transparent to radio waves, except for layers of charged particles in the ionosphere which can reflect certain frequencies.
Radio waves are extremely widely used to transmit information across distances in radio communication systems such as radio broadcasting, television, two way radios, mobile phones, communication satellites, and wireless networking. In a radio communication system, a radio frequency current is modulated with an information-bearing signal in a transmitter by varying either the amplitude, frequency or phase, and applied to an antenna. The radio waves carry the information across space to a receiver, where they are received by an antenna and the information extracted by demodulation in the receiver. Radio waves are also used for navigation in systems like Global Positioning System (GPS) and navigational beacons, and locating distant objects in radiolocation and radar. They are also used for remote control, and for industrial heating.
The use of the radio spectrum is strictly regulated by governments, coordinated by the International Telecommunication Union (ITU) which allocates frequencies to different users for different uses.
Microwaves
Microwaves are radio waves of short wavelength, from about 10 centimeters to one millimeter, in the SHF and EHF frequency bands. Microwave energy is produced with klystron and magnetron tubes, and with solid state devices such as Gunn and IMPATT diodes. Although they are emitted and absorbed by short antennas, they are also absorbed by polar molecules, coupling to vibrational and rotational modes, resulting in bulk heating. Unlike higher frequency waves such as infrared and visible light which are absorbed mainly at surfaces, microwaves can penetrate into materials and deposit their energy below the surface. This effect is used to heat food in microwave ovens, and for industrial heating and medical diathermy. Microwaves are the main wavelengths used in radar, and are used for satellite communication, and wireless networking technologies such as Wi-Fi. The copper cables (transmission lines) which are used to carry lower-frequency radio waves to antennas have excessive power losses at microwave frequencies, and metal pipes called waveguides are used to carry them. Although at the low end of the band the atmosphere is mainly transparent, at the upper end of the band absorption of microwaves by atmospheric gases limits practical propagation distances to a few kilometers.
Terahertz radiation or sub-millimeter radiation is a region of the spectrum from about 100 GHz to 30 terahertz (THz) between microwaves and far infrared which can be regarded as belonging to either band. Until recently, the range was rarely studied and few sources existed for microwave energy in the so-called terahertz gap, but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment. Terahertz radiation is strongly absorbed by atmospheric gases, making this frequency range useless for long-distance communication.
Infrared radiation
The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz to 400 THz (1 mm – 750 nm). It can be divided into three parts:
Far-infrared, from 300 GHz to 30 THz (1 mm – 10 μm). The lower part of this range may also be called microwaves or terahertz waves. This radiation is typically absorbed by so-called rotational modes in gas-phase molecules, by molecular motions in liquids, and by phonons in solids. The water in Earth's atmosphere absorbs so strongly in this range that it renders the atmosphere in effect opaque. However, there are certain wavelength ranges ("windows") within the opaque range that allow partial transmission, and can be used for astronomy. The wavelength range from approximately 200 μm up to a few mm is often referred to as Submillimetre astronomy, reserving far infrared for wavelengths below 200 μm.
Mid-infrared, from 30 THz to 120 THz (10–2.5 μm). Hot objects (black-body radiators) can radiate strongly in this range, and human skin at normal body temperature radiates strongly at the lower end of this region. This radiation is absorbed by molecular vibrations, where the different atoms in a molecule vibrate around their equilibrium positions. This range is sometimes called the fingerprint region, since the mid-infrared absorption spectrum of a compound is very specific for that compound.
Near-infrared, from 120 THz to 400 THz (2,500–750 nm). Physical processes that are relevant for this range are similar to those for visible light. The highest frequencies in this region can be detected directly by some types of photographic film, and by many types of solid state image sensors for infrared photography and videography.
Visible light
Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light. By definition, visible light is the part of the EM spectrum the human eye is the most sensitive to. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light that excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow whilst ultraviolet would appear just beyond the opposite violet end.
Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400–790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm) are also sometimes referred to as light, especially when the visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it up into the several colours of light observed in the visible spectrum between 400 nm and 780 nm.
If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently understood psychophysical phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves.
Ultraviolet radiation
Next in frequency comes ultraviolet (UV). In frequency (and thus energy), UV rays sit between the violet end of the visible spectrum and the X-ray range. The UV wavelength spectrum ranges from 399 nm to 10 nm and is divided into 3 sections: UVA, UVB, and UVC.
UV is the lowest energy range energetic enough to ionize atoms, separating electrons from them, and thus causing chemical reactions. UV, X-rays, and gamma rays are thus collectively called ionizing radiation; exposure to them can damage living tissue. UV can also cause substances to glow with visible light; this is called fluorescence. UV fluorescence is used in forensics to detect evidence such as blood and urine at a crime scene. UV fluorescence is also used to detect counterfeit money and IDs, as they are laced with material that glows under UV.
At the middle range of UV, UV rays cannot ionize but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells; this damage is the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, making UV a very potent mutagen. The risk of UV-induced skin damage and cancer drove the development of the sunscreen industry. Mid-UV wavelengths are called UVB, and UVB lights such as germicidal lamps are used to kill germs and to sterilize water.
The Sun emits UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface. The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower energy part of which is too long for ordinary dioxygen in air to absorb. This leaves less than 3% of sunlight at sea level in UV, with all of this remainder at the lower energies. The remainder is UV-A, along with some UV-B. The very lowest energy range of UV between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless and does create oxygen radicals, mutations and skin damage.
X-rays
After UV come X-rays, which, like the upper ranges of UV are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays and as they can pass through many substances with little absorption, they can be used to 'see through' objects with 'thicknesses' less than that equivalent to a few meters of water. One notable use is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena. X-rays are also emitted by stellar corona and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of Earth's atmosphere, with an areal density of 1000 g/cm² (equivalent to a 10 meter thickness of water), is opaque to X-rays. This is an amount sufficient to block almost all astronomical X-rays (and also astronomical gamma rays—see below).
Gamma rays
After hard X-rays come gamma rays, which were discovered by Paul Ulrich Villard in 1900. These are the most energetic photons, having no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions, however as with X-rays this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes. They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy. More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering.
See also
Notes and references
External links
Australian Radiofrequency Spectrum Allocations Chart (from Australian Communications and Media Authority)
Canadian Table of Frequency Allocations (from Industry Canada)
U.S. Frequency Allocation Chart – Covering the range 3 kHz to 300 GHz (from Department of Commerce)
UK frequency allocation table (from Ofcom, which inherited the Radiocommunications Agency's duties, pdf format)
Flash EM Spectrum Presentation / Tool – Very complete and customizable.
Poster "Electromagnetic Radiation Spectrum" (992 kB)
Waves | Electromagnetic spectrum | [
"Physics"
] | 4,876 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Motion (physics)"
] |
10,201 | https://en.wikipedia.org/wiki/Exothermic%20process | In thermodynamics, an exothermic process () is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in a form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot.
The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat).
Two types of chemical reactions
Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows:
Exothermic
An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH° is negative". Some examples of exothermic processes are fuel combustion, condensation, and nuclear fission, which is used in nuclear power plants to release large amounts of energy.
Endothermic
In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them.
Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy).
Energy release
Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by
$Q > 0.$
When the transformation occurs at constant pressure and without exchange of electrical energy, the heat $Q$ is equal to the enthalpy change, i.e.
$Q = -\Delta H,$
while at constant volume, according to the first law of thermodynamics it equals the internal energy ($U$) change, i.e.
$Q = -\Delta U.$
In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system.
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This light that is released can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy; a small worked example follows.
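A minimal sketch of that energy bookkeeping in Python, with illustrative (assumed) numbers; the −890 kJ/mol figure is the commonly cited heat of combustion of methane, and the barrier value is made up for illustration:

# For an exothermic reaction, the energy released after the transition state
# exceeds the activation energy put in; the difference is the net release.
activation_energy = 150.0          # kJ/mol, assumed illustrative barrier
reaction_enthalpy = -890.0         # kJ/mol, roughly methane combustion
released_after_barrier = activation_energy - reaction_enthalpy   # 1040 kJ/mol downhill
net_release = released_after_barrier - activation_energy         # 890 kJ/mol
print(net_release)                 # equals -reaction_enthalpy: net energy leaves the system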
Examples
Some examples of exothermic processes are:
Combustion of fuels such as wood, coal and oil/petroleum
The thermite reaction
The reaction of alkali metals and other highly electropositive metals with water
Condensation of rain from water vapor
Mixing water and strong acids or strong bases
The reaction of acids and bases
Dehydration of carbohydrates by sulfuric acid
The setting of cement and concrete
Some polymerization reactions such as the setting of epoxy resin
The reaction of most metals with halogens or oxygen
Nuclear fusion in hydrogen bombs and in stellar cores (to iron)
Nuclear fission of heavy elements
The reaction between zinc and hydrochloric acid
Respiration (breaking down of glucose to release energy in cells)
Implications for chemical reactions
Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions.
In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.
See also
Calorimetry
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Endergonic reaction
Exergonic
Exergonic reaction
Endothermic reaction
References
External links
Observe exothermic reactions in a simple experiment
Thermodynamic processes
Chemical thermodynamics
da:Exoterm | Exothermic process | [
"Physics",
"Chemistry"
] | 989 | [
"Chemical thermodynamics",
"Thermodynamic processes",
"Thermodynamics"
] |
10,274 | https://en.wikipedia.org/wiki/Enthalpy | Enthalpy is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work that was done against constant external pressure to establish the system's physical dimensions from an initial volume of zero to some final volume $V$ (as $pV$), i.e. to make room for it by displacing its surroundings.
The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it.
In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU).
The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat.
In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states ($p = 1\ \text{bar}$; usually $T = 298\ \text{K}$) change to products in their standard states.
This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function.
Enthalpies of chemical substances are usually listed for 1 bar (100 kPa) pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature,
but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change is a positive value; for exothermic (heat-releasing) processes it is negative.
The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis.
The word "enthalpy" is derived from the Greek word enthalpein, which means "to heat".
Definition
The enthalpy of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:
$H = U + pV,$
where $U$ is the internal energy, $p$ is pressure, and $V$ is the volume of the system; $pV$ is sometimes referred to as the pressure energy.
Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the specific enthalpy, $h = H/m$, is referenced to a unit of mass $m$ of the system, and the molar enthalpy is $H_m = H/n$, where $n$ is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems (a numerical sketch of the definition follows the list below):
$H = \sum_k H_k,$
where
$H$ is the total enthalpy of all the subsystems,
$k$ refers to the various subsystems,
$H_k$ refers to the enthalpy of each subsystem.
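A minimal numerical sketch of the definition in Python, assuming an ideal gas for the $pV$ term; the function name and the value of $U$ are illustrative, not taken from any table:

# H = U + p*V; for an ideal gas the pV term may be replaced by n*R*T.
R = 8.314          # gas constant, J/(mol*K)

def enthalpy(U, p=None, V=None, n=None, T=None):
    """Return H = U + pV, or H = U + nRT when only (n, T) are given."""
    if p is not None and V is not None:
        return U + p * V
    return U + n * R * T

# One mole of an ideal gas at 298 K adds pV = RT ~ 2478 J to U:
print(enthalpy(U=3716.0, n=1, T=298.0))   # ~6194 J (U value assumed)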
A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure varies continuously with altitude, while, because of the equilibrium requirement, its temperature is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:
$H = \int \rho h \, dV,$
where
$\rho$ ("rho") is density (mass per unit volume),
$h$ is the specific enthalpy (enthalpy per unit mass),
$\rho h$ represents the enthalpy density (enthalpy per unit volume),
$dV$ denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer.
The integral therefore represents the sum of the enthalpies of all the elements of the volume.
The enthalpy of a closed homogeneous system is its energy function $H(S,p)$, with its entropy $S$ and its pressure $p$ as natural state variables which provide a differential relation for $dH$ of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:
$dU = \delta Q - \delta W,$
where
$\delta Q$ is a small amount of heat added to the system,
$\delta W$ is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives $\delta Q = T\,dS$, with $T$ the absolute temperature and $dS$ the infinitesimal change in entropy $S$ of the system. Furthermore, if only $pV$ work is done, $\delta W = p\,dV$. As a result,
$dU = T\,dS - p\,dV.$
Adding $d(pV)$ to both sides of this expression gives
$dU + d(pV) = T\,dS - p\,dV + d(pV),$
or
$d(U + pV) = T\,dS + V\,dp.$
So
$dH(S,p) = T\,dS + V\,dp,$
and the coefficients of the natural variable differentials $dS$ and $dp$ are just the single variables $T$ and $V$.
Other expressions
The above expression of $dH$ in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure:
$dH = C_p\,dT + V(1 - \alpha T)\,dp.$
Here $C_p$ is the heat capacity at constant pressure and $\alpha$ is the coefficient of (cubic) thermal expansion:
$\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p.$
With this expression one can, in principle, determine the enthalpy if $C_p$ and $V$ are known as functions of $T$ and $p$. However the expression is more complicated than $dH = T\,dS + V\,dp$ because $T$ is not a natural variable for the enthalpy $H$.
At constant pressure, $dp = 0$, so that $dH = C_p\,dT$. For an ideal gas, $dH$ reduces to this form even if the process involves a pressure change, because $\alpha T = 1$ and the $V(1-\alpha T)\,dp$ term vanishes.
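A short numerical illustration of $dH = C_p\,dT$ in Python, assuming an ideal gas and an approximate constant $C_p$ for nitrogen; real-gas tables, such as the one later in this article, will differ somewhat:

# For an ideal gas, dH = Cp*dT even when the pressure changes.
Cp = 29.1                          # J/(mol*K), approximate molar Cp of nitrogen gas
T1, T2 = 300.0, 380.0              # K, an arbitrary heating interval
dH_molar = Cp * (T2 - T1)          # J/mol; no pressure term for an ideal gas
print(dH_molar, dH_molar / 0.028)  # ~2328 J/mol, ~83 kJ/kg (M = 28 g/mol)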
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for $dH$ then becomes
$dH = T\,dS + V\,dp + \sum_i \mu_i\,dN_i,$
where $\mu_i$ is the chemical potential per particle for a type $i$ particle, and $N_i$ is the number of such particles. The last term can also be written as $\mu_i\,dn_i$ (with $n_i$ the number of moles of component $i$ added to the system and, in this case, $\mu_i$ the molar chemical potential) or as $\mu_i\,dm_i$ (with $m_i$ the mass of component $i$ added to the system and, in this case, $\mu_i$ the specific chemical potential).
Characteristic functions and natural state variables
The enthalpy, $H(S, p, \{N_i\})$, expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments include both one intensive and several extensive state variables. The state variables $S$, $p$, and $\{N_i\}$ are said to be the natural state variables in this representation. They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.
Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function $S(H, p, \{N_i\})$ of the same list of variables of state, except that the entropy, $S$, is replaced in the list by the enthalpy, $H$. It expresses the entropy representation. The state variables $H$, $p$, and $\{N_i\}$ are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, $H$ and $p$ can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system.
Physical interpretation
The term $U$ is the energy of the system, and the term $pV$ can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, $n$ moles of a gas of volume $V$ at pressure $p$ and temperature $T$, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy $U$ plus $pV$, where $pV$ is the work done in pushing against the ambient (atmospheric) pressure.
In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system and therefore the internal energy is used.
In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that $\Delta H$ is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal.
Relationship to heat
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: $dU = \delta Q - \delta W$, where the heat $\delta Q$ is supplied by conduction, radiation, or Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by $p\,dV$ (where $p$ is the pressure at the surface, $dV$ is the increase of the volume of the system). Cases of long range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads:
$dU = \delta Q - p\,dV.$
Now,
$dH = dU + d(pV).$
So
$dH = \delta Q + V\,dp.$
If the system is under constant pressure, $dp = 0$ and consequently, the increase in enthalpy of the system is equal to the heat added:
$dH = \delta Q.$
This is why the now-obsolete term heat content was used for enthalpy in the 19th century.
Applications
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, $pV$, differs based upon the conditions that obtain during the creation of the thermodynamic system.
Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure $p$ remains constant; this is the $pV$ term. The supplied energy must also provide the change in internal energy, $U$, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy $U + pV$. For systems at constant pressure, with no external work done other than the $pV$ work, the change in enthalpy is the heat received by the system.
For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process.
Heat of reaction
The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:
$\Delta H = H_f - H_i,$
where
$\Delta H$ is the "enthalpy change",
$H_f$ is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium),
$H_i$ is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy, $\Delta H$, is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat.
Conversely, for a constant-pressure endothermic reaction, $\Delta H$ is positive and equal to the heat absorbed in the reaction.
From the definition of enthalpy as $H = U + pV$, the enthalpy change at constant pressure is $\Delta H = \Delta U + p\,\Delta V$. However, for most chemical reactions, the work term $p\,\Delta V$ is much smaller than the internal energy change $\Delta U$, which is approximately equal to $\Delta H$. As an example, for the combustion of carbon monoxide, $2\,\mathrm{CO(g)} + \mathrm{O_2(g)} \to 2\,\mathrm{CO_2(g)}$, $\Delta H = -566.0\ \mathrm{kJ}$ and $\Delta U = -563.5\ \mathrm{kJ}$.
Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies.
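The gap between $\Delta H$ and $\Delta U$ can be estimated from the change in moles of gas via the ideal-gas relation $\Delta H - \Delta U \approx \Delta n_{\text{gas}}\, RT$; a short Python check for the carbon monoxide example above:

# Delta_H - Delta_U ~ Delta_n_gas * R * T for reactions of (near-)ideal gases.
R, T = 8.314, 298.0            # J/(mol*K), K
dn_gas = 2 - 3                 # 2 CO + O2 -> 2 CO2: moles of gas change by -1
correction_kJ = dn_gas * R * T / 1000.0
print(correction_kJ)           # ~ -2.5 kJ: the gap between -566.0 and -563.5 kJ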
Specific enthalpy
The specific enthalpy of a uniform system is defined as $h = H/m$, where $m$ is the mass of the system. Its SI unit is joule per kilogram. It can be expressed in other specific quantities by $h = u + pv$, where $u$ is the specific internal energy, $p$ is the pressure, and $v$ is specific volume, which is equal to $1/\rho$, where $\rho$ is the density.
Enthalpy changes
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process.
A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.
When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
A pressure of one atmosphere (1 atm or 1013.25 hPa) or 1 bar
A temperature of 25 °C or 298.15 K
A concentration of 1.0 M when the element or compound is present in solution
Elements or compounds in their normal physical states, i.e. standard state
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.
Chemical properties
Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely.
Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions.
Physical properties
Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid.
Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas.
Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas.
Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.
Open systems
In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system:
$dU = \delta Q + dU_{\mathrm{in}} - dU_{\mathrm{out}} - \delta W,$
where $U_{\mathrm{in}}$ is the average internal energy entering the system, and $U_{\mathrm{out}}$ is the average internal energy leaving the system.
The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called $pV$ work), and mechanical work (shaft work), which may be performed on some mechanical device such as a turbine or pump.
These two types of work are expressed in the equation
$\delta W = d(p_{\mathrm{out}} V_{\mathrm{out}}) - d(p_{\mathrm{in}} V_{\mathrm{in}}) + \delta W_{\mathrm{shaft}}.$
Substitution into the equation above for the control volume (cv) yields:
$dU_{\mathrm{cv}} = \delta Q + dU_{\mathrm{in}} + d(p_{\mathrm{in}} V_{\mathrm{in}}) - dU_{\mathrm{out}} - d(p_{\mathrm{out}} V_{\mathrm{out}}) - \delta W_{\mathrm{shaft}}.$
The definition of enthalpy, $H = U + pV$, permits us to use this thermodynamic potential to account for both internal energy and $pV$ work in fluids for open systems:
$dU_{\mathrm{cv}} = \delta Q + dH_{\mathrm{in}} - dH_{\mathrm{out}} - \delta W_{\mathrm{shaft}}.$
If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.
In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:
$\frac{dU}{dt} = \sum_k \dot Q_k + \sum_k \dot H_k - \sum_k p_k \frac{dV_k}{dt} - P,$
with sums over the various places $k$ where heat is supplied, mass flows into the system, and boundaries are moving. The $\dot H_k$ terms represent enthalpy flows, which can be written as
$\dot H_k = h_k \dot m_k = H_{m,k}\,\dot n_k,$
with $\dot m_k$ the mass flow and $\dot n_k$ the molar flow at position $k$ respectively. The term $p_k\,\frac{dV_k}{dt}$ represents the rate of change of the system volume at position $k$ that results in $pV$ power done by the system. The parameter $P$ represents all other forms of power done by the system such as shaft power, but it can also be, say, electric power produced by an electrical power plant.
Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average $\frac{dU}{dt}$ may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:
$P = \sum_k \left\langle \dot Q_k \right\rangle + \sum_k \left\langle \dot H_k \right\rangle - \sum_k \left\langle p_k \frac{dV_k}{dt} \right\rangle,$
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
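A minimal Python sketch of this power balance, assuming a single inlet and outlet, negligible kinetic-energy change, and illustrative steam-turbine numbers (not from any table in this article):

# Steady state, one inlet and one outlet:  P = Q_dot + m_dot * (h_in - h_out)
m_dot = 1.0                      # kg/s, mass flow (assumed)
h_in, h_out = 3230.0, 2675.0     # kJ/kg, assumed inlet/outlet specific enthalpies
Q_dot = -10.0                    # kJ/s, heat lost to the surroundings (negative = outflow)
P = Q_dot + m_dot * (h_in - h_out)
print(P)                         # ~545 kJ/s of shaft power delivered by the device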
Diagrams
The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as $h$–$T$ diagrams, which give the specific enthalpy as function of temperature for various pressures, and $h$–$p$ diagrams, which give $h$ as function of $p$ for various $T$. One of the most common diagrams is the temperature–specific entropy diagram ($T$–$s$ diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Some basic applications
The points a through h in the figure play a role in the discussion in this section.
{| class="wikitable" style="text-align:center"
|-
|Point
! T !! p !! s !! h
|- style="background:#EEEEEE;"
| Unit || K || bar || kJ/(kg·K) || kJ/kg
|-
| a || 300 || 1 || 6.85 || 461
|-
| b || 380 || 2 || 6.85 || 530
|-
| c || 300 || 200 || 5.16 || 430
|-
| d || 270 || 1 || 6.79 || 430
|-
| e || 108 || 13 || 3.55 || 100
|-
| f || 77.2 || 1 || 3.75 || 100
|-
| g || 77.2 || 1 || 2.83 || 28
|-
| h || 77.2 || 1 || 5.41 || 230
|}
Points e and g are saturated liquids, and point h is a saturated gas.
Throttling
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.
For a steady state flow regime, the enthalpy of the system (dotted rectangle) has to be constant. Hence
$\dot m h_1 = \dot m h_2.$
Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same:
$h_1 = h_2,$
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the $T$–$s$ diagram above.
Example 1
Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value.
Example 2
Point e is chosen so that it is on the saturated liquid line with $h = 100\ \mathrm{kJ/kg}$. It corresponds roughly with $p = 13\ \mathrm{bar}$ and $T = 108\ \mathrm{K}$. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f ($h_f$) is equal to the enthalpy in g ($h_g$) multiplied by the liquid fraction in f ($x_f$) plus the enthalpy in h ($h_h$) multiplied by the gas fraction in f ($1 - x_f$). So
$h_f = x_f h_g + (1 - x_f) h_h.$
With numbers:
$100 = x_f \times 28 + (1 - x_f) \times 230,$
so
$x_f = 0.64.$
This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
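The same lever-rule arithmetic as a short Python check, using the specific enthalpy values from the table above:

# Liquid fraction after throttling:  h_f = x * h_g + (1 - x) * h_h
h_e, h_g, h_h = 100.0, 28.0, 230.0   # kJ/kg, points e (= f), g, h from the table
x = (h_h - h_e) / (h_h - h_g)
print(round(x, 2))                    # 0.64 -> 64% liquid leaves the throttling valve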
Compressors
A power $P$ is applied e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the $T$–$s$ diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature $T_a$, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is $\dot Q$. Since the system is in the steady state the first law gives
$0 = -\dot Q + \dot m h_1 - \dot m h_2 + P.$
The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
$0 = -\frac{\dot Q}{T_a} + \dot m s_1 - \dot m s_2.$
Eliminating $\dot Q$ gives for the minimal power
$\frac{P_{\min}}{\dot m} = h_2 - h_1 - T_a (s_2 - s_1).$
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least $(h_c - h_a) - T_a (s_c - s_a)$:
With the data, obtained with the $T$–$s$ diagram, we find a value of $(430 - 461) - 300 \times (5.16 - 6.85) = 476\ \mathrm{kJ/kg}$.
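The same minimal-work evaluation as a short Python check against the table values (point labels as in the table above):

# Minimal (reversible, isothermal) compression work per kg of nitrogen:
#   w_min = (h_c - h_a) - T_a * (s_c - s_a)
T_a = 300.0                      # K, ambient temperature
h_a, s_a = 461.0, 6.85           # point a: 300 K, 1 bar (kJ/kg, kJ/(kg*K))
h_c, s_c = 430.0, 5.16           # point c: 300 K, 200 bar
w_min = (h_c - h_a) - T_a * (s_c - s_a)
print(round(w_min))              # 476 kJ/kg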
The relation for the power can be further simplified by writing it as
$\frac{P_{\min}}{\dot m} = \int_1^2 \left( dh - T_a\,ds \right).$
With
$dh = T\,ds + v\,dp,$
this results in the final relation
$\frac{P_{\min}}{\dot m} = \int_1^2 \left[ v\,dp + (T - T_a)\,ds \right].$
History and etymology
The term enthalpy was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy was introduced by Rudolf Clausius in 1865. Energy uses the root of the Greek word ἔργον (ergon), meaning "work", to express the idea of capacity to perform work. Entropy uses the Greek word τροπή (tropē) meaning transformation or turning. Enthalpy uses the root of the Greek word θάλπος (thalpos) "warmth, heat".
The term expresses the obsolete concept of heat content, as $dH$ refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. Gibbs used the term "a heat function for constant pressure" for clarity.
Introduction of the concept of "heat content" is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850).
The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams'', published in 1927.
Until the 1920s, the symbol $H$ was used, somewhat inconsistently, for "heat" in general. The definition of $H$ as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922.
Notes
See also
Calorimetry
Calorimeter
Departure function
Hess's law
Isenthalpic process
Laws of thermodynamics
Stagnation enthalpy
Standard enthalpy of formation
Thermodynamic databases
Thermodynamics
References
Bibliography
External links
State functions
Energy (physics)
Physical quantities | Enthalpy | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,449 | [
"State functions",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Energy (physics)",
"Enthalpy",
"Wikipedia categories named after physical quantities",
"Physical properties"
] |
10,290 | https://en.wikipedia.org/wiki/Emulsion | An emulsion is a mixture of two or more liquids that are normally immiscible (unmixable or unblendable) owing to liquid-liquid phase separation. Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion should be used when both phases, dispersed and continuous, are liquids. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, homogenized milk, liquid biomolecular condensates, and some cutting fluids for metal working.
Two liquids can form different types of emulsions. As an example, oil and water can form, first, an oil-in-water emulsion, in which the oil is the dispersed phase, and water is the continuous phase. Second, they can form a water-in-oil emulsion, in which water is the dispersed phase and oil is the continuous phase. Multiple emulsions are also possible, including a "water-in-oil-in-water" emulsion and an "oil-in-water-in-oil" emulsion.
Emulsions, being liquids, do not exhibit a static internal structure. The droplets dispersed in the continuous phase (sometimes referred to as the "dispersion medium") are usually assumed to be statistically distributed to produce roughly spherical droplets.
The term "emulsion" is also used to refer to the photo-sensitive side of photographic film. Such a photographic emulsion consists of silver halide colloidal particles dispersed in a gelatin matrix. Nuclear emulsions are similar to photographic emulsions, except that they are used in particle physics to detect high-energy elementary particles.
Etymology
The word "emulsion" comes from the Latin emulgere "to milk out", from ex "out" + mulgere "to milk", as milk is an emulsion of fat and water, along with other components, including colloidal casein micelles (a type of secreted biomolecular condensate).
Appearance and properties
Emulsions contain both a dispersed and a continuous phase, with the boundary between the phases called the "interface". Emulsions tend to have a cloudy appearance because the many phase interfaces scatter light as it passes through the emulsion. Emulsions appear white when all light is scattered equally. If the emulsion is dilute enough, higher-frequency (shorter-wavelength) light will be scattered more, and the emulsion will appear bluer – this is called the "Tyndall effect". If the emulsion is concentrated enough, the color will be distorted toward comparatively longer wavelengths, and will appear more yellow. This phenomenon is easily observable when comparing skimmed milk, which contains little fat, to cream, which contains a much higher concentration of milk fat. One example would be a mixture of water and oil.
Two special classes of emulsions – microemulsions and nanoemulsions, with droplet sizes below 100 nm – appear translucent. This property is due to the fact that light waves are scattered by the droplets only if their sizes exceed about one-quarter of the wavelength of the incident light. Since the visible spectrum of light is composed of wavelengths between 390 and 750 nanometers (nm), if the droplet sizes in the emulsion are below about 100 nm, the light can penetrate through the emulsion without being scattered. Due to their similarity in appearance, translucent nanoemulsions and microemulsions are frequently confused. Unlike translucent nanoemulsions, which require specialized equipment to be produced, microemulsions are spontaneously formed by "solubilizing" oil molecules with a mixture of surfactants, co-surfactants, and co-solvents. The required surfactant concentration in a microemulsion is, however, several times higher than that in a translucent nanoemulsion, and significantly exceeds the concentration of the dispersed phase. Because of many undesirable side-effects caused by surfactants, their presence is disadvantageous or prohibitive in many applications. In addition, the stability of a microemulsion is often easily compromised by dilution, by heating, or by changing pH levels.
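As a quick arithmetic check of the quarter-wavelength rule quoted above, a minimal sketch in Python (the 80 nm droplet size is an illustrative assumption):

```python
# Quarter-wavelength scattering thresholds across the visible spectrum (nm).
visible_nm = (390, 750)
thresholds = [wavelength / 4.0 for wavelength in visible_nm]
print(thresholds)  # [97.5, 187.5]

# A droplet below ~100 nm is smaller than lambda/4 even for violet light,
# so an emulsion of such droplets transmits visible light largely unscattered.
droplet_nm = 80
print(all(droplet_nm < t for t in thresholds))  # True
```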
Common emulsions are inherently unstable and, thus, do not tend to form spontaneously. Energy input – through shaking, stirring, homogenizing, or exposure to power ultrasound – is needed to form an emulsion. Over time, emulsions tend to revert to the stable state of the phases comprising the emulsion. An example of this is seen in the separation of the oil and vinegar components of vinaigrette, an unstable emulsion that will quickly separate unless shaken almost continuously. There are important exceptions to this rule – microemulsions are thermodynamically stable, while translucent nanoemulsions are kinetically stable.
Whether an emulsion of oil and water turns into a "water-in-oil" emulsion or an "oil-in-water" emulsion depends on the volume fraction of both phases and the type of emulsifier (surfactant) (see Emulsifier, below) present.
Instability
Emulsion stability refers to the ability of an emulsion to resist change in its properties over time. There are four types of instability in emulsions: flocculation, coalescence, creaming/sedimentation, and Ostwald ripening. Flocculation occurs when there is an attractive force between the droplets, so they form flocs, like bunches of grapes. This process can be desired, if controlled in its extent, to tune physical properties of emulsions such as their flow behaviour. Coalescence occurs when droplets bump into each other and combine to form a larger droplet, so the average droplet size increases over time. Emulsions can also undergo creaming, where the droplets rise to the top of the emulsion under the influence of buoyancy, or under the influence of the centripetal force induced when a centrifuge is used. Creaming is a common phenomenon in dairy and non-dairy beverages (e.g., milk, coffee milk, almond milk, soy milk) and usually does not change the droplet size. Sedimentation is the opposite phenomenon of creaming and is normally observed in water-in-oil emulsions. Sedimentation happens when the dispersed phase is denser than the continuous phase and the gravitational forces pull the denser globules towards the bottom of the emulsion. Similar to creaming, sedimentation follows Stokes' law.
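Both creaming and sedimentation rates can be estimated from Stokes' law for a small sphere in a viscous fluid. A minimal sketch in Python, where the droplet and fluid properties are rough illustrative assumptions:

```python
def stokes_velocity(radius, rho_droplet, rho_fluid, viscosity, g=9.81):
    """Terminal velocity (m/s) of a small sphere by Stokes' law.

    Positive = sedimentation (droplet denser than the continuous phase);
    negative = creaming (droplet rises).
    """
    return 2.0 * radius**2 * (rho_droplet - rho_fluid) * g / (9.0 * viscosity)

# A 1-micron milk-fat droplet (~915 kg/m^3) in water (~1000 kg/m^3, ~1e-3 Pa*s):
print(stokes_velocity(0.5e-6, 915.0, 1000.0, 1e-3))  # ~ -4.6e-8 m/s, i.e. creaming
```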
An appropriate surface active agent (or surfactant) can increase the kinetic stability of an emulsion so that the size of the droplets does not change significantly with time. The stability of an emulsion, like a suspension, can be studied in terms of zeta potential, which indicates the repulsion between droplets or particles. If the size and dispersion of the droplets do not change over time, the emulsion is said to be stable. For example, oil-in-water emulsions containing mono- and diglycerides and milk protein as surfactant showed stable oil droplet sizes over 28 days of storage at 25 °C.
Monitoring physical stability
The stability of emulsions can be characterized using techniques such as light scattering, focused beam reflectance measurement, centrifugation, and rheology. Each method has advantages and disadvantages.
Accelerating methods for shelf life prediction
The kinetic process of destabilization can be rather long – up to several months, or even years for some products. Often the formulator must accelerate this process in order to test products in a reasonable time during product design. Thermal methods are the most commonly used – these consist of increasing the emulsion temperature to accelerate destabilization (if below critical temperatures for phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants or, on a broader scope, interactions between droplets within the system. Storing an emulsion at high temperatures enables the simulation of realistic conditions for a product (e.g., a tube of sunscreen emulsion in a car in the summer heat), but also accelerates destabilization processes up to 200 times.
Mechanical methods of acceleration, including vibration, centrifugation, and agitation, can also be used.
These methods are almost always empirical, without a sound scientific basis.
Emulsifiers
An emulsifier is a substance that stabilizes an emulsion by reducing the oil-water interface tension. Emulsifiers are a part of a broader group of compounds known as surfactants, or "surface-active agents". Surfactants are compounds that are typically amphiphilic, meaning they have a polar or hydrophilic (i.e., water-soluble) part and a non-polar (i.e., hydrophobic or lipophilic) part. Emulsifiers that are more soluble in water (and, conversely, less soluble in oil) will generally form oil-in-water emulsions, while emulsifiers that are more soluble in oil will form water-in-oil emulsions.
Examples of food emulsifiers are:
Egg yolk – in which the main emulsifying and thickening agent is lecithin.
Mustard – where a variety of chemicals in the mucilage surrounding the seed hull act as emulsifiers
Soy lecithin is another emulsifier and thickener
Pickering stabilization – uses particles under certain circumstances
Mono- and diglycerides – a common emulsifier found in many food products (coffee creamers, ice creams, spreads, breads, cakes)
Sodium stearoyl lactylate
DATEM (diacetyl tartaric acid esters of mono- and diglycerides) – an emulsifier used primarily in baking
Proteins – those with both hydrophilic and hydrophobic regions, e.g. sodium caseinate, as in meltable cheese product
In food emulsions, the type of emulsifier greatly affects how emulsions are structured in the stomach and how accessible the oil is for gastric lipases, thereby influencing how fast emulsions are digested and trigger a satiety inducing hormone response.
Detergents are another class of surfactant, and will interact physically with both oil and water, thus stabilizing the interface between the oil and water droplets in suspension. This principle is exploited in soap, to remove grease for the purpose of cleaning. Many different emulsifiers are used in pharmacy to prepare emulsions such as creams and lotions. Common examples include emulsifying wax, polysorbate 20, and ceteareth 20.
Sometimes the inner phase itself can act as an emulsifier, and the result is a nanoemulsion, where the inner phase disperses into "nano-size" droplets within the outer phase. A well-known example of this phenomenon, the "ouzo effect", happens when water is poured into a strong alcoholic anise-based beverage, such as ouzo, pastis, absinthe, arak, or raki. The anisolic compounds, which are soluble in ethanol, then form nano-size droplets and emulsify within the water. The resulting color of the drink is opaque and milky white.
Mechanisms of emulsification
A number of different chemical and physical processes and mechanisms can be involved in the process of emulsification:
Surface tension theory – according to this theory, emulsification takes place by reduction of interfacial tension between two phases
Repulsion theory – according to this theory, the emulsifier creates a film over one phase that forms globules, which repel each other. This repulsive force causes them to remain suspended in the dispersion medium
Viscosity modification – emulgents like acacia and tragacanth, which are hydrocolloids, as well as PEG (polyethylene glycol), glycerine, and other polymers like CMC (carboxymethyl cellulose), all increase the viscosity of the medium, which helps create and maintain the suspension of globules of dispersed phase
Uses
In food
Oil-in-water emulsions are common in food products:
Mayonnaise and Hollandaise sauces – these are oil-in-water emulsions stabilized with egg yolk lecithin, or with other types of food additives, such as sodium stearoyl lactylate
Homogenized milk – an emulsion of milk fat in water, with milk proteins as the emulsifier
Vinaigrette – an emulsion of vegetable oil in vinegar; if this is prepared using only oil and vinegar (i.e., without an emulsifier), an unstable emulsion results
Water-in-oil emulsions are less common in food, but still exist:
Butter – an emulsion of water in butterfat
Margarine
Other foods can be turned into products similar to emulsions; for example, meat emulsion is a suspension of meat in liquid that is similar to true emulsions.
In health care
In pharmaceutics, hairstyling, personal hygiene, and cosmetics, emulsions are frequently used. These are usually oil and water emulsions, but which phase is dispersed and which is continuous depends in many cases on the pharmaceutical formulation. These emulsions may be called creams, ointments, liniments (balms), pastes, films, or liquids, depending mostly on their oil-to-water ratios, other additives, and their intended route of administration. The first five are topical dosage forms, and may be used on the surface of the skin, transdermally, ophthalmically, rectally, or vaginally. A highly liquid emulsion may also be used orally, or may be injected in some cases.
Microemulsions are used to deliver vaccines and kill microbes. Typical emulsions used in these techniques are nanoemulsions of soybean oil, with particles that are 400–600 nm in diameter. The process is not chemical, as with other types of antimicrobial treatments, but mechanical. The smaller the droplet the greater the surface tension and thus the greater the force required to merge with other lipids. The oil is emulsified with detergents using a high-shear mixer to stabilize the emulsion so, when they encounter the lipids in the cell membrane or envelope of bacteria or viruses, they force the lipids to merge with themselves. On a mass scale, in effect this disintegrates the membrane and kills the pathogen. The soybean oil emulsion does not harm normal human cells, or the cells of most other higher organisms, with the exceptions of sperm cells and blood cells, which are vulnerable to nanoemulsions due to the peculiarities of their membrane structures. For this reason, these nanoemulsions are not currently used intravenously (IV). The most effective application of this type of nanoemulsion is for the disinfection of surfaces. Some types of nanoemulsions have been shown to effectively destroy HIV-1 and tuberculosis pathogens on non-porous surfaces.
Applications in Pharmaceutical industry
Oral drug delivery: Emulsions may provide an efficient means of administering drugs that are poorly soluble or have low bioavailability. By increasing the surface area available for dissolution, an emulsion increases both the dissolution rate and the absorption of a drug, thereby improving its bioavailability.
Topical formulations: Emulsions are widely utilized as bases for topical drug delivery formulations such as creams, lotions and ointments. Their incorporation allows lipophilic as well as hydrophilic drugs to be mixed together for maximum skin penetration and permeation of active ingredients.
Parenteral drug delivery: Emulsions serve as carriers for intravenous or intramuscular administration of drugs, solubilizing lipophilic ones while protecting from degradation and decreasing injection site irritation. Examples include propofol as a widely used anesthetic and lipid-based solutions used for total parenteral nutrition delivery.
Ocular Drug Delivery: Emulsions can be used to formulate eye drops and other ocular drug delivery systems, increasing drug retention time in the eye and permeating through corneal barriers more easily while providing sustained release of active ingredients and thus increasing therapeutic efficacy.
Nasal and Pulmonary Drug Delivery: Emulsions can be an ideal vehicle for creating nasal sprays and inhalable drug products, enhancing drug absorption through nasal and pulmonary mucosa while providing sustained release with reduced local irritation.
Vaccine Adjuvants: Emulsions can serve as vaccine adjuvants by strengthening immune responses against specific antigens. Emulsions can enhance antigen solubility and uptake by immune cells while simultaneously providing controlled release, thereby amplifying the immunological response.
Taste Masking: Emulsions can be used to encase bitter or otherwise unpleasant-tasting drugs, masking their taste and increasing patient compliance, particularly with pediatric formulations.
Cosmeceuticals: Emulsions are widely utilized in cosmeceuticals products that combine cosmetic and pharmaceutical properties. These emulsions act as carriers for active ingredients like vitamins, antioxidants and skin lightening agents to provide improved skin penetration and increased stability.
In firefighting
Emulsifying agents are effective at extinguishing fires on small, thin-layer spills of flammable liquids (class B fires). Such agents encapsulate the fuel in a fuel-water emulsion, thereby trapping the flammable vapors in the water phase. This emulsion is achieved by applying an aqueous surfactant solution to the fuel through a high-pressure nozzle. Emulsifiers are not effective at extinguishing large fires involving bulk/deep liquid fuels, because the amount of emulsifier agent needed for extinguishment is a function of the volume of the fuel, whereas other agents such as aqueous film-forming foam need cover only the surface of the fuel to achieve vapor mitigation.
Chemical synthesis
Emulsions are used to manufacture polymer dispersions – polymer production in an emulsion 'phase' has a number of process advantages, including prevention of coagulation of product. Products produced by such polymerisations may be used as the emulsions – products including primary components for glues and paints. Synthetic latexes (rubbers) are also produced by this process.
See also
References
Other sources
Handbook of Nanostructured Materials and Nanotechnology; Nalwa, H.S., Ed.; Academic Press: New York, NY, USA, 2000; Volume 5, pp. 501–575
Chemical mixtures
Colloidal chemistry
Colloids
Dosage forms
Drug delivery devices
Soft matter | Emulsion | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,956 | [
"Pharmacology",
"Colloidal chemistry",
"Soft matter",
"Drug delivery devices",
"Colloids",
"Surface science",
"Chemical mixtures",
"Condensed matter physics",
"nan"
] |
10,303 | https://en.wikipedia.org/wiki/Evaporation | Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase. A high concentration of the evaporating substance in the surrounding gas significantly slows down evaporation, such as when humidity affects the rate of evaporation of water. When the molecules of the liquid collide, they transfer energy to each other based on how they collide. When a molecule near the surface absorbs enough energy to overcome the vapor pressure, it will escape and enter the surrounding air as a gas. When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling.
On average, only a fraction of the molecules in a liquid have enough heat energy to escape from the liquid. The evaporation will continue until an equilibrium is reached when the evaporation of the liquid is equal to its condensation. In an enclosed environment, a liquid will evaporate until the surrounding air is saturated.
Evaporation is an essential part of the water cycle. The sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. In hydrology, evaporation and transpiration (which involves evaporation within plant stomata) are collectively termed evapotranspiration. Evaporation of water occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds. With sufficient energy, the liquid will turn into vapor.
Theory
For molecules of a liquid to evaporate, they must be located near the surface, they have to be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces. When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body.
Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key parts to evaporation are heat, atmospheric pressure (determines the percent humidity), and air movement.
On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen.
Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids are evaporating. It is just that the process is much slower and thus significantly less visible.
Evaporative equilibrium
If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium, the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. For a system consisting of vapor and liquid of a pure substance, this equilibrium state is directly related to the vapor pressure of the substance, as given by the Clausius–Clapeyron relation:

ln(P2/P1) = −(ΔHvap/R) (1/T2 − 1/T1)

where P1, P2 are the vapor pressures at temperatures T1, T2 respectively, ΔHvap is the enthalpy of vaporization, and R is the universal gas constant. The rate of evaporation in an open system is related to the vapor pressure found in a closed system. If a liquid is heated, when the vapor pressure reaches the ambient pressure the liquid will boil.
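Assuming ΔHvap is constant over the temperature interval, the relation can be rearranged to estimate the vapor pressure at a second temperature. A minimal sketch in Python (the water values are illustrative):

```python
import math

def vapor_pressure(p1, t1, t2, dh_vap, r=8.314):
    """Clausius-Clapeyron estimate of the vapor pressure at t2.

    p1     : known vapor pressure at t1 (Pa)
    t1, t2 : absolute temperatures (K)
    dh_vap : enthalpy of vaporization (J/mol), assumed constant over [t1, t2]
    """
    return p1 * math.exp(-dh_vap / r * (1.0 / t2 - 1.0 / t1))

# Water boils at 373.15 K under 101325 Pa; dH_vap ~ 40660 J/mol.
# Estimated vapor pressure at 353.15 K (80 degrees C):
print(vapor_pressure(101325.0, 373.15, 353.15, 40660.0))  # ~48 kPa
```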
The ability for a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization.
Factors influencing the rate of evaporation
Note: Air is used here as a common example of the surrounding gas; however, other gases may hold that role.
Concentration of the substance evaporating in the air If the air already has a high concentration of the substance evaporating, then the given substance will evaporate more slowly.
Flow rate of air This is in part related to the concentration points above. If "fresh" air (i.e., air which is neither already saturated with the substance nor with other substances) is moving over the substance all the time, then the concentration of the substance in the air is less likely to go up with time, thus encouraging faster evaporation. This is the result of the boundary layer at the evaporation surface decreasing with flow velocity, decreasing the diffusion distance in the stagnant layer.
The amount of minerals dissolved in the liquid
Inter-molecular forces The stronger the forces keeping the molecules together in the liquid state, the more energy one must get to escape. This is characterized by the enthalpy of vaporization.
Pressure Evaporation happens faster if there is less exertion on the surface keeping the molecules from launching themselves.
Surface area A substance that has a larger surface area will evaporate faster, as there are more surface molecules per unit of volume that are potentially able to escape.
Temperature of the substance The higher the temperature of the substance, the greater the kinetic energy of the molecules at its surface and therefore the faster the rate of their evaporation.
Photomolecular effect The amount of light affects evaporation. When photons hit the surface of the liquid, they can make individual molecules break free and escape into the air without any need for additional heat.
In the US, the National Weather Service measures, at various outdoor locations nationwide, the actual rate of evaporation from a standardized "pan" open water surface. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over per year.
Because it typically takes place in a complex environment, where 'evaporation is an extremely rare event', the mechanism for the evaporation of water is not completely understood. Theoretical calculations require prohibitively long and large computer simulations. 'The rate of evaporation of liquid water is one of the principal uncertainties in modern climate modeling.'
Thermodynamics
Evaporation is an endothermic process, since heat is absorbed during evaporation.
Applications
Industrial applications include many printing and coating processes; recovering salts from solutions; and drying a variety of materials such as lumber, paper, cloth and chemicals.
The use of evaporation to dry or concentrate samples is a common preparatory step for many laboratory analyses such as spectroscopy and chromatography. Systems used for this purpose include rotary evaporators and centrifugal evaporators.
When clothes are hung on a laundry line, even though the ambient temperature is below the boiling point of water, water evaporates. This is accelerated by factors such as low humidity, heat (from the sun), and wind. In a clothes dryer, hot air is blown through the clothes, allowing water to evaporate very rapidly.
The matki/matka, a traditional Indian porous clay container used for storing and cooling water and other liquids.
The botijo, a traditional Spanish porous clay container designed to cool the contained water by evaporation.
Evaporative coolers, which can significantly cool a building by simply blowing dry air over a filter saturated with water.
Combustion vaporization
Fuel droplets vaporize as they receive heat by mixing with the hot gases in the combustion chamber. Heat (energy) can also be received by radiation from any hot refractory wall of the combustion chamber.
Pre-combustion vaporization
Internal combustion engines rely upon the vaporization of the fuel in the cylinders to form a fuel/air mixture in order to burn well.
The chemically correct air/fuel mixture for total burning of gasoline has been determined to be about 15 parts air to one part gasoline or 15/1 by weight. Changing this to a volume ratio yields 8000 parts air to one part gasoline or 8,000/1 by volume.
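The weight-to-volume conversion follows directly from the densities of the two substances. A quick check in Python, where the density values are rough assumptions for illustration:

```python
rho_air = 1.29        # kg/m^3, cool air at sea level (assumed)
rho_gasoline = 690.0  # kg/m^3, a light gasoline (assumed)

mass_ratio = 15.0     # parts air per part gasoline, by weight
volume_ratio = mass_ratio * rho_gasoline / rho_air
print(round(volume_ratio))  # ~8000, matching the quoted 8,000/1 by volume
```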
Film deposition
Thin films may be deposited by evaporating a substance and condensing it onto a substrate, or by dissolving the substance in a solvent, spreading the resulting solution thinly over a substrate, and evaporating the solvent. The Hertz–Knudsen equation is often used to estimate the rate of evaporation in these instances.
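The Hertz–Knudsen estimate can be evaluated directly. A minimal sketch in Python, where the evaporation coefficient and the water example numbers are assumptions for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hertz_knudsen_flux(p_eq, p_ambient, molecule_mass, temperature, alpha=1.0):
    """Net evaporation flux in molecules per m^2 per second.

    p_eq          : equilibrium (saturation) vapor pressure, Pa
    p_ambient     : actual partial pressure of the vapor above the surface, Pa
    molecule_mass : mass of one molecule, kg
    temperature   : surface temperature, K
    alpha         : evaporation coefficient (assumed 1 here; often much lower)
    """
    return alpha * (p_eq - p_ambient) / math.sqrt(
        2.0 * math.pi * molecule_mass * K_B * temperature)

# Water at 298 K: p_eq ~ 3170 Pa, one molecule ~ 2.99e-26 kg, dry gas above:
print(hertz_knudsen_flux(3170.0, 0.0, 2.99e-26, 298.15))  # ~1e26 molecules/m^2/s
```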
See also
Atmometer (evaporimeter)
Cryophorus
Crystallisation
Desalination
Distillation
Eddy covariance flux (a.k.a. eddy correlation, eddy flux)
Evaporator
Evapotranspiration
Flash evaporation
Heat of vaporization
Hertz–Knudsen equation
Hydrology (agriculture)
Latent heat
Latent heat flux
Pan evaporation
Sublimation (phase transition) (phase transfer from solid directly to gas)
Transpiration
References
Further reading
Has an especially detailed discussion of film deposition by evaporation.
External links
Atmospheric thermodynamics
Meteorological phenomena
Materials science
Phase transitions
Thin film deposition
Gases | Evaporation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,020 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Earth phenomena",
"Gases",
"Thin film deposition",
"Planes (geometry)",
"Coatings",
"Phases of matter",
"Materials science",
"Critical phenomena",
"Thin films",
"Meteorological phenomena",
"nan",
"Statis... |
10,340 | https://en.wikipedia.org/wiki/Ecdysis | Ecdysis is the moulting of the cuticle in many invertebrates of the clade Ecdysozoa. Since the cuticle of these animals typically forms a largely inelastic exoskeleton, it is shed during growth and a new, larger covering is formed. The remnants of the old, empty exoskeleton are called exuviae.
After moulting, an arthropod is described as teneral, a callow; it is "fresh", pale and soft-bodied. Within one or two hours, the cuticle hardens and darkens following a tanning process analogous to the production of leather. During this short phase the animal expands, since growth is otherwise constrained by the rigidity of the exoskeleton. Growth of the limbs and other parts normally covered by the hard exoskeleton is achieved by transfer of body fluids from soft parts before the new skin hardens. A spider with a small abdomen may be undernourished but more probably has recently undergone ecdysis. Some arthropods, especially large insects with tracheal respiration, expand their new exoskeleton by swallowing or otherwise taking in air. The maturation of the structure and colouration of the new exoskeleton might take days or weeks in a long-lived insect; this can make it difficult to identify an individual if it has recently undergone ecdysis.
Ecdysis allows damaged tissue and missing limbs to be regenerated or substantially re-formed. Complete regeneration may require a series of moults, the stump becoming a little larger with each moult until the limb is a normal, or near normal, size.
Etymology
The term ecdysis comes from Ancient Greek ἐκδύω (ekdúō) 'to take off, strip off'.
Process
In preparation for ecdysis, the arthropod becomes inactive for a period of time, undergoing apolysis or separation of the old exoskeleton from the underlying epidermal cells. For most organisms, the resting period is a stage of preparation during which the secretion of fluid from the moulting glands of the epidermal layer and the loosening of the underpart of the cuticle occurs.
Once the old cuticle has separated from the epidermis, a digesting fluid is secreted into the space between them. However, this fluid remains inactive until the upper part of the new cuticle has been formed. Then, by crawling movements, the organism pushes forward in the old integumentary shell, which splits down the back allowing the animal to emerge. Often, this initial crack is caused by a combination of movement and increase in pressure of hemolymph within the body, forcing an expansion across its exoskeleton, leading to an eventual crack that allows for certain organisms such as spiders to extricate themselves.
While the old cuticle is being digested, the new layer is secreted. All cuticular structures are shed at ecdysis, including the inner parts of the exoskeleton, which includes terminal linings of the alimentary tract and of the tracheae if they are present.
Insects
Each stage of development between moults for insects in the taxon Endopterygota is called an instar, or stadium, and each stage between moults of insects in the Exopterygota is called a nymph: there may be up to 15 nymphal stages. Endopterygota tend to have only four or five instars. Endopterygotes have more alternatives to moulting, such as expansion of the cuticle and collapse of air sacs to allow growth of internal organs.
The process of moulting in insects begins with the separation of the cuticle from the underlying epidermal cells (apolysis) and ends with the shedding of the old cuticle (ecdysis). In many species it is initiated by an increase in the hormone ecdysone. This hormone causes:
apolysis – the separation of the cuticle from the epidermis
secretion of new cuticle materials beneath the old
degradation of the old cuticle
After apolysis the insect is known as a pharate. Moulting fluid is then secreted into the exuvial space between the old cuticle and the epidermis, this contains inactive enzymes which are activated only after the new epicuticle is secreted. This prevents the new procuticle from getting digested as it is laid down. The lower regions of the old cuticle, the endocuticle and mesocuticle, are then digested by the enzymes and subsequently absorbed. The exocuticle and epicuticle resist digestion and are hence shed at ecdysis.
Spiders
Spiders generally change their skin for the first time while still inside the egg sac, and the spiderling that emerges broadly resembles the adult. The number of moults varies, both between species and sexes, but generally will be between five times and nine times before the spider reaches maturity. Not surprisingly, since males are generally smaller than females, the males of many species mature faster and do not undergo ecdysis as many times as the females before maturing.
Members of the Mygalomorphae are very long-lived, sometimes 20 years or more; they moult annually even after they mature.
Spiders stop feeding at some time before moulting, usually for several days. The physiological processes of releasing the old exoskeleton from the tissues beneath typically cause various colour changes, such as darkening. If the old exoskeleton is not too thick it may be possible to see new structures, such as setae, from the outside. However, contact between the nerves and the old exoskeleton is maintained until a very late stage in the process.
The new, teneral exoskeleton has to accommodate a larger frame than the previous instar, while the spider has had to fit into the previous exoskeleton until it has been shed. This means the spider does not fill out the new exoskeleton completely, so it commonly appears somewhat wrinkled.
Most species of spiders hang from silk during the entire process, either dangling from a drop line, or fastening their claws into webbed fibres attached to a suitable base. The discarded, dried exoskeleton typically remains hanging where it was abandoned once the spider has left.
To open the old exoskeleton, the spider generally contracts its abdomen (opisthosoma) to supply enough fluid to pump into the prosoma with sufficient pressure to crack it open along its lines of weakness. The carapace lifts off from the front, like a helmet, as its surrounding skin ruptures, but it remains attached at the back. Now the spider works its limbs free and typically winds up dangling by a new thread of silk attached to its own exuviae, which in turn hang from the original silk attachment.
At this point the spider is a callow; it is teneral and vulnerable. As it dangles, its exoskeleton hardens and takes shape. The process may take minutes in small spiders, or some hours in the larger Mygalomorphs. Some spiders, such as some Synema species, members of the Thomisidae (crab spiders), mate while the female is still callow, during which time she is unable to eat the male.
Eurypterids
Eurypterids are a group of chelicerates that became extinct in the Late Permian. They underwent ecdysis similarly to extant chelicerates, and most fossils are thought to be of exuviae, rather than cadavers.
See also
Ecdysteroid
References
External links
Animal developmental biology
Protostome anatomy
Ethology | Ecdysis | [
"Biology"
] | 1,623 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
10,356 | https://en.wikipedia.org/wiki/Endothermic%20process | An endothermic process is a chemical or physical process that absorbs heat from its surroundings. In terms of thermodynamics, it is a thermodynamic process with an increase in the enthalpy (or internal energy ) of the system. In an endothermic process, the heat that a system absorbs is thermal energy transfer into the system. Thus, an endothermic reaction generally leads to an increase in the temperature of the system and a decrease in that of the surroundings.
The term was coined by 19th-century French chemist Marcellin Berthelot. The term endothermic comes from the Greek ἔνδον (endon) meaning 'within' and θερμ- (therm) meaning 'hot' or 'warm'.
An endothermic process may be a chemical process, such as dissolving ammonium nitrate (NH4NO3) in water (H2O), or a physical process, such as the melting of ice cubes.
The opposite of an endothermic process is an exothermic process, one that releases or "gives out" energy, usually in the form of heat and sometimes as electrical energy. Thus, endo in endothermic refers to energy or heat going in, and exo in exothermic refers to energy or heat going out. In each term (endothermic and exothermic) the prefix refers to where heat (or electrical energy) goes as the process occurs.
In chemistry
Due to bonds breaking and forming during various processes (changes in state, chemical reactions), there is usually a change in energy. If the energy of the forming bonds is greater than the energy of the breaking bonds, then energy is released. This is known as an exothermic reaction. However, if more energy is needed to break the bonds than the energy being released, energy is taken up. Therefore, it is an endothermic reaction.
Details
Whether a process can occur spontaneously depends not only on the enthalpy change ΔH but also on the entropy change ΔS and absolute temperature T. If a process is a spontaneous process at a certain temperature, the products have a lower Gibbs free energy G = H − TS than the reactants (an exergonic process), even if the enthalpy of the products is higher. Thus, an endothermic process usually requires a favorable entropy increase (ΔS > 0) in the system that overcomes the unfavorable increase in enthalpy so that still ΔG = ΔH − TΔS < 0. While endothermic phase transitions into more disordered states of higher entropy, e.g. melting and vaporization, are common, spontaneous chemical processes at moderate temperatures are rarely endothermic. The enthalpy increase in a hypothetical strongly endothermic process usually results in ΔG = ΔH − TΔS > 0, which means that the process will not occur (unless driven by electrical or photon energy). An example of an endothermic and exergonic process is
C6H12O6 + 6 H2O -> 12 H2 + 6 CO2.
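A numeric illustration of the ΔG = ΔH − TΔS criterion, using approximate literature values for the melting of ice (the numbers are rounded and purely illustrative):

```python
def gibbs_free_energy_change(dh, ds, t):
    """Delta G = Delta H - T * Delta S; dh in J/mol, ds in J/(mol K), t in K."""
    return dh - t * ds

# Melting of ice is endothermic (dH > 0) but entropy-driven (dS > 0):
dh_fus, ds_fus = 6010.0, 22.0
for t in (263.15, 273.15, 283.15):
    dg = gibbs_free_energy_change(dh_fus, ds_fus, t)
    print(t, round(dg, 1), "spontaneous" if dg < 0 else "non-spontaneous")
# Below ~273 K, dG > 0 and ice persists; above it, dG < 0 and melting proceeds.
```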
Examples
Evaporation
Sublimation
Cracking of alkanes
Thermal decomposition
Hydrolysis
Nucleosynthesis of elements heavier than nickel in stellar cores
High-energy neutrons can produce tritium from lithium-7 in an endothermic process, consuming 2.466 MeV. This was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield.
Nuclear fusion of elements heavier than iron in supernovae
Dissolving together barium hydroxide and ammonium chloride
Distinction between endothermic and endotherm
The terms "endothermic" and "endotherm" are both derived from Greek "within" and "heat", but depending on context, they can have very different meanings.
In physics, thermodynamics applies to processes involving a system and its surroundings, and the term "endothermic" is used to describe a reaction where energy is taken "(with)in" by the system (vs. an "exothermic" reaction, which releases energy "outwards").
In biology, thermoregulation is the ability of an organism to maintain its body temperature, and the term "endotherm" refers to an organism that can do so from "within" by using the heat released by its internal bodily functions (vs. an "ectotherm", which relies on external, environmental heat sources) to maintain an adequate temperature.
References
External links
Exothermic and Endothermic – MSDS Hyper-Glossary at Interactive Learning Paradigms, Incorporated
Thermochemistry
Thermodynamic processes
Chemical thermodynamics | Endothermic process | [
"Physics",
"Chemistry"
] | 961 | [
"Chemical thermodynamics",
"Thermochemistry",
"Thermodynamic processes",
"Thermodynamics"
] |
1,023,353 | https://en.wikipedia.org/wiki/Burgers%27%20equation | Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948. For a given field u(x, t) and diffusion coefficient (or kinematic viscosity, as in the original fluid mechanical context) ν, the general form of Burgers' equation (also known as viscous Burgers' equation) in one space dimension is the dissipative system:

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²

The term u ∂u/∂x can also be rewritten as ∂(u²/2)/∂x. When the diffusion term is absent (i.e. ν = 0), Burgers' equation becomes the inviscid Burgers' equation:

∂u/∂t + u ∂u/∂x = 0
which is a prototype for conservation equations that can develop discontinuities (shock waves).
The reason for the formation of sharp gradients for small values of ν becomes intuitively clear when one examines the left-hand side of the equation. The term ∂/∂t + u ∂/∂x is evidently a wave operator describing a wave propagating in the positive x-direction with a speed u. Since the wave speed is u, regions exhibiting large values of u will be propagated rightwards quicker than regions exhibiting smaller values of u; in other words, if u is decreasing in the x-direction initially, then larger u's that lie in the backside will catch up with smaller u's on the front side. The role of the right-side diffusive term is essentially to stop the gradient becoming infinite.
Inviscid Burgers' equation
The inviscid Burgers' equation is a conservation equation, more generally a first order quasilinear hyperbolic equation. The solution to the equation, along with the initial condition

u(x, 0) = f(x),

can be constructed by the method of characteristics. Let t be the parameter characterising any given characteristic in the x-t plane, then the characteristic equations are given by

dx/dt = u,   du/dt = 0.

Integration of the second equation tells us that u is constant along the characteristic and integration of the first equation shows that the characteristics are straight lines, i.e.,

u = c,   x = ut + ξ,

where ξ is the point (or parameter) on the x-axis (t = 0) of the x-t plane from which the characteristic curve is drawn. Since u at the x-axis is known from the initial condition and the fact that u is unchanged as we move along the characteristic emanating from each point x = ξ, we write u = c = f(ξ) on each characteristic. Therefore, the family of trajectories of characteristics parametrized by ξ is

x = f(ξ) t + ξ.

Thus, the solution is given by

u(x, t) = f(ξ) = f(x − ut),   with ξ = x − ut.

This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, then a classical solution to the PDE does not exist and leads to the formation of a shock wave. Whether characteristics can intersect or not depends on the initial condition. In fact, the breaking time before a shock wave can be formed is given by

t_b = −1 / min f′(x),   provided f′(x) is negative somewhere.
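Before the breaking time, the implicit relation u = f(x − ut) can be evaluated numerically. A minimal sketch in Python; the initial profile and the grid used to estimate the breaking time are illustrative assumptions:

```python
import numpy as np

def inviscid_burgers(x, t, f, fprime, newton_steps=50):
    """Solve the implicit relation u = f(x - u*t) by Newton iteration.

    Valid only before the breaking time, while characteristics do not cross.
    """
    u = f(x)                                 # initial guess: the t = 0 profile
    for _ in range(newton_steps):
        residual = u - f(x - u * t)
        slope = 1.0 + t * fprime(x - u * t)  # d(residual)/du
        u -= residual / slope
    return u

f = lambda s: np.exp(-s**2)                  # Gaussian hump initial condition
fp = lambda s: -2.0 * s * np.exp(-s**2)

t_b = -1.0 / np.min(fp(np.linspace(-3.0, 3.0, 10001)))  # breaking time
print(t_b)                                   # ~1.166 for this hump
print(inviscid_burgers(0.5, 0.5 * t_b, f, fp))
```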
Complete integral of the inviscid Burgers' equation
The implicit solution described above containing an arbitrary function is called the general integral. However, the inviscid Burgers' equation, being a first-order partial differential equation, also has a complete integral which contains two arbitrary constants (for the two independent variables). Subrahmanyan Chandrasekhar provided the complete integral in 1943, which is given by

u(x, t) = (ax + b) / (at + 1),

where a and b are arbitrary constants. The complete integral satisfies a linear initial condition, i.e., f(x) = ax + b. One can also construct the general integral using the above complete integral.
Viscous Burgers' equation
The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation,

u = −2ν (1/φ) ∂φ/∂x,

which turns it into the equation

∂/∂x [ (1/φ) (∂φ/∂t − ν ∂²φ/∂x²) ] = 0,

which can be integrated with respect to x to obtain

∂φ/∂t − ν ∂²φ/∂x² = φ g(t),

where g(t) is an arbitrary function of time. Introducing the transformation φ → φ exp(∫ g(t) dt) (which does not affect the function u), the required equation reduces to that of the heat equation

∂φ/∂t = ν ∂²φ/∂x².

The diffusion equation can be solved. That is, if φ(x, 0) = φ₀(x), then

φ(x, t) = (4πνt)^(−1/2) ∫ φ₀(x′) exp(−(x − x′)²/(4νt)) dx′.

The initial function φ₀ is related to the initial function u(x, 0) = f(x) by

φ₀(x) = exp( −(1/(2ν)) ∫₀ˣ f(x′) dx′ ),

where the lower limit is chosen arbitrarily. Inverting the Cole–Hopf transformation, we have

u(x, t) = −2ν ∂/∂x ln φ(x, t),

which simplifies, by getting rid of the time-dependent prefactor in the argument of the logarithm, to

u(x, t) = −2ν ∂/∂x ln { ∫ exp( −(x − x′)²/(4νt) − (1/(2ν)) ∫₀^(x′) f(x″) dx″ ) dx′ }.

This solution is derived from the solution of the heat equation for φ that decays to zero as x → ±∞; other solutions for u can be obtained starting from solutions of φ that satisfy different boundary conditions.
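The closed-form expression above can be evaluated by direct numerical quadrature. A rough sketch in Python; the grid bounds, resolution, and finite-difference step are untuned illustrative assumptions:

```python
import numpy as np

def burgers_cole_hopf(x, t, f, nu):
    """Viscous Burgers' solution u(x, t) via the Cole-Hopf quadrature formula."""
    xs = np.linspace(x - 20.0, x + 20.0, 4001)   # integration grid
    dxp = xs[1] - xs[0]
    # F(x') = integral of f up to x'; the arbitrary lower limit only adds a
    # constant, which cancels in the x-derivative of log(phi).
    F = np.cumsum(f(xs)) * dxp

    def log_phi(xv):
        kernel = -((xv - xs) ** 2) / (4.0 * nu * t) - F / (2.0 * nu)
        m = kernel.max()                         # log-sum-exp for stability
        return m + np.log(np.sum(np.exp(kernel - m)) * dxp)

    h = 1e-4                                     # step for d/dx log(phi)
    return -2.0 * nu * (log_phi(x + h) - log_phi(x - h)) / (2.0 * h)

# Smoothed-step initial condition f(s) = -tanh(s), viscosity 0.1:
print(burgers_cole_hopf(0.5, 1.0, lambda s: -np.tanh(s), 0.1))
```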
Some explicit solutions of the viscous Burgers' equation
Explicit expressions for the viscous Burgers' equation are available. Some of the physically relevant solutions are given below:
Steadily propagating traveling wave
If f(x) is such that f(−∞) = f₊ and f(+∞) = f₋ with f₊ > f₋, then we have a traveling-wave solution (with a constant speed c = (f₊ + f₋)/2) given by

u(x, t) = c − ((f₊ − f₋)/2) tanh( (f₊ − f₋)(x − ct)/(4ν) ).

This solution, that was originally derived by Harry Bateman in 1915, is used to describe the variation of pressure across a weak shock wave. As ν → 0 the profile steepens toward a discontinuous shock connecting f₊ to f₋, with thickness proportional to ν/(f₊ − f₋).
Delta function as an initial condition
If u(x, 0) = 2νRe δ(x), where Re (say, the Reynolds number) is a constant, then we have

u(x, t) = √(ν/(πt)) · (e^Re − 1) e^(−x²/(4νt)) / ( 1 + ((e^Re − 1)/2) erfc(x/(2√(νt))) ).

In the limit Re → 0, the limiting behaviour is a diffusional spreading of a source and therefore is given by

u(x, t) = 2νRe e^(−x²/(4νt)) / √(4πνt).

On the other hand, in the limit Re → ∞, the solution approaches that of the aforementioned Chandrasekhar's shock-wave solution of the inviscid Burgers' equation and is given by

u(x, t) = x/t for 0 < x < √(4νRe t), and 0 otherwise.

The shock wave location and its speed are given by x = √(4νRe t) and √(νRe/t).
N-wave solution
The N-wave solution comprises a compression wave followed by a rarefaction wave. A solution of this type is given by
where Re₀ may be regarded as an initial Reynolds number at time t = t₀, and Re(t) may be regarded as the time-varying Reynolds number.
Other forms
Multi-dimensional Burgers' equation
In two or more dimensions, the Burgers' equation becomes

∂u/∂t + u · ∇u = ν ∇²u.

One can also extend the equation for the vector field u, as in

∂u/∂t + (u · ∇) u = ν ∇²u.
Generalized Burgers' equation
The generalized Burgers' equation extends the quasilinear convective term to a more generalized form, i.e.,

∂u/∂t + c(u) ∂u/∂x = ν ∂²u/∂x²,

where c(u) is any arbitrary function of u. The inviscid equation (ν = 0) is still a quasilinear hyperbolic equation for c(u) > 0 and its solution can be constructed using method of characteristics as before.
Stochastic Burgers' equation
Added space-time noise η(x, t) = ∂W/∂t, where W is an L²(ℝ) Wiener process, forms a stochastic Burgers' equation

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x² − λ ∂η/∂x.

This stochastic PDE is the one-dimensional version of the Kardar–Parisi–Zhang equation in a field h(x, t) upon substituting u(x, t) = −λ ∂h/∂x.
See also
Chaplygin's equation
Conservation equation
Euler–Tricomi equation
Fokker–Planck equation
KdV-Burgers equation
References
External links
Burgers' Equation at EqWorld: The World of Mathematical Equations.
Burgers' Equation at NEQwiki, the nonlinear equations encyclopedia.
Conservation equations
Equations of fluid dynamics
Fluid dynamics | Burgers' equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,372 | [
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Conservation laws",
"Mathematical objects",
"Equations",
"Piping",
"Fluid dynamics",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
1,023,468 | https://en.wikipedia.org/wiki/Chabazite | Chabazite () is a tectosilicate mineral of the zeolite group, closely related to gmelinite, with the chemical formula . Recognized varieties include Chabazite-Ca, Chabazite-K, Chabazite-Na, and Chabazite-Sr, depending on the prominence of the indicated cation.
Chabazite crystallizes in the triclinic crystal system with typically rhombohedral shaped crystals that are pseudo-cubic. The crystals are typically twinned, and both contact twinning and penetration twinning may be observed. They may be colorless, white, orange, brown, pink, green, or yellow. The hardness ranges from 3 to 5 and the specific gravity from 2.0 to 2.2. The luster is vitreous.
It was named chabasie in 1792 by Bosc d'Antic and later changed to the current spelling.
Chabazite occurs most commonly in voids and amygdules in basaltic rocks.
Chabazite is found in India, Iceland, the Faroe Islands, the Giants Causeway in Northern Ireland, Bohemia, Italy, Germany, along the Bay of Fundy in Nova Scotia, Oregon, Arizona, and New Jersey.
Synthetic chabazite
Many different materials that are isostructural with the chabazite mineral have been synthesized in laboratories. SSZ-13 is a CHA type zeolite with an Si/Al ratio of 14. This is a composition not found in nature.
References
External links
Webmineral
Mineral Galleries
Mindat.org
Tectosilicates
Calcium minerals
Sodium minerals
Potassium minerals
Magnesium minerals
Aluminium minerals
12
Zeolites
Trigonal minerals
Minerals in space group 166
Minerals described in 1792 | Chabazite | [
"Chemistry"
] | 362 | [
"Hydrate minerals",
"Hydrates"
] |
1,023,575 | https://en.wikipedia.org/wiki/Glioblastoma | Glioblastoma, previously known as glioblastoma multiforme (GBM), is the most aggressive and most common type of cancer that originates in the brain, and has a very poor prognosis for survival. Initial signs and symptoms of glioblastoma are nonspecific. They may include headaches, personality changes, nausea, and symptoms similar to those of a stroke. Symptoms often worsen rapidly and may progress to unconsciousness.
The cause of most cases of glioblastoma is not known. Uncommon risk factors include genetic disorders, such as neurofibromatosis and Li–Fraumeni syndrome, and previous radiation therapy. Glioblastomas represent 15% of all brain tumors. They are thought to arise from astrocytes. The diagnosis typically is made by a combination of a CT scan, MRI scan, and tissue biopsy.
There is no known method of preventing the cancer. Treatment usually involves surgery, after which chemotherapy and radiation therapy are used. The medication temozolomide is frequently used as part of chemotherapy. High-dose steroids may be used to help reduce swelling and decrease symptoms. Surgical removal (decompression) of the tumor is linked to increased survival, but only by some months.
Despite maximum treatment, the cancer almost always recurs. The typical duration of survival following diagnosis is 10–13 months, with fewer than 5–10% of people surviving longer than five years. Without treatment, survival is typically three months. It is the most common cancer that begins within the brain and the second-most common brain tumor, after meningioma, which is benign in most cases. About 3 in 100,000 people develop the disease per year. The average age at diagnosis is 64, and the disease occurs more commonly in males than females.
Tumors of the central nervous system are the 10th leading cause of death worldwide, with up to 90% being brain tumors. Glioblastoma multiforme (GBM) is derived from astrocytes and accounts for 49% of all malignant central nervous system tumors, making it the most common form of central nervous system cancer. Despite countless efforts to develop new therapies for GBM over the years, the median survival of GBM patients worldwide is 8 months; standard-of-care radiation and chemotherapy beginning shortly after diagnosis improve median survival to around 14 months and yield a five-year survival rate of 5–10%. The five-year survival rate for individuals with any form of primary malignant brain tumor is 20%. Even when all detectable traces of the tumor are removed through surgery, most patients with GBM experience recurrence of their cancer.
Signs and symptoms
Common symptoms include seizures, headaches, nausea and vomiting, memory loss, changes to personality, mood or concentration, and localized neurological problems. The kinds of symptoms produced depend more on the location of the tumor than on its pathological properties. The tumor can start producing symptoms quickly, but occasionally is an asymptomatic condition until it reaches an enormous size.
Risk factors
The cause of most cases is unclear. The best known risk factor is exposure to ionizing radiation, and CT scan radiation is an important cause. About 5% develop from certain hereditary syndromes.
Genetics
Uncommon risk factors include genetic disorders such as neurofibromatosis, Li–Fraumeni syndrome, tuberous sclerosis, or Turcot syndrome. Previous radiation therapy is also a risk. For unknown reasons, it occurs more commonly in males.
Environmental
Other associations include exposure to smoking, pesticides, and working in petroleum refining or rubber manufacturing.
Glioblastoma has been associated with the viruses SV40, HHV-6, and cytomegalovirus (CMV). Infection with an oncogenic CMV may even be necessary for the development of glioblastoma.
Other
Research has been done to see if consumption of cured meat is a risk factor. No risk had been confirmed as of 2003. Similarly, exposure to formaldehyde, and residential electromagnetic fields, such as from cell phones and electrical wiring within homes, have been studied as risk factors. As of 2015, they had not been shown to cause GBM.
Pathogenesis
The cellular origin of glioblastoma is unknown. Because of the similarities in immunostaining of glial cells and glioblastoma, gliomas such as glioblastoma have long been assumed to originate from glial-type stem cells found in the subventricular zone. More recent studies suggest that astrocytes, oligodendrocyte progenitor cells, and neural stem cells could all serve as the cell of origin.
GBMs usually form in the cerebral white matter, grow quickly, and can become very large before producing symptoms. Since the function of glial cells in the brain is to support neurons, they have the ability to divide, to enlarge, and to extend cellular projections along neurons and blood vessels. Once cancerous, these cells are predisposed to spread along existing paths in the brain, typically along white-matter tracts, blood vessels and the perivascular space. The tumor may extend into the meninges or ventricular wall, leading to high protein content in the cerebrospinal fluid (CSF) (> 100 mg/dl), as well as an occasional pleocytosis of 10 to 100 cells, mostly lymphocytes. Malignant cells carried in the CSF may spread (rarely) to the spinal cord or cause meningeal gliomatosis. However, metastasis of GBM beyond the central nervous system is extremely unusual. About 50% of GBMs occupy more than one lobe of a hemisphere or are bilateral. Tumors of this type usually arise from the cerebrum and may exhibit the classic infiltration across the corpus callosum, producing a butterfly (bilateral) glioma.
Glioblastoma classification
Brain tumor classification has been traditionally based on histopathology at macroscopic level, measured in hematoxylin-eosin sections. The World Health Organization published the first standard classification in 1979 and has updated it periodically since. The 2007 WHO Classification of Tumors of the Central Nervous System was the last classification mainly based on microscopy features. The new 2016 WHO Classification of Tumors of the Central Nervous System was a paradigm shift: some of the tumors were defined also by their genetic composition as well as their cell morphology.
In 2021, the fifth edition of the WHO Classification of Tumors of the Central Nervous System was released. This update eliminated the classification of secondary glioblastoma and reclassified those tumors as Astrocytoma, IDH mutant, grade 4. Only tumors that are IDH wild type are now classified as glioblastoma.
Molecular alterations
There are currently three molecular subtypes of glioblastoma that were identified based on gene expression:
Classical: Around 97% of tumors in this subtype carry extra copies of the epidermal growth factor receptor (EGFR) gene, and most have higher than normal expression of EGFR, whereas the gene TP53 (p53), which is often mutated in glioblastoma, is rarely mutated in this subtype. Loss of heterozygosity in chromosome 10 is also frequently seen in the classical subtype alongside chromosome 7 amplification.
The proneural subtype often has high rates of alterations in TP53 (p53), and in PDGFRA the gene encoding a-type platelet-derived growth factor receptor.
The mesenchymal subtype is characterized by high rates of mutations or other alterations in NF1, the gene encoding neurofibromin 1 and fewer alterations in the EGFR gene and less expression of EGFR than other types.
Initial analyses of gene expression had revealed a fourth, neural subtype. However, further analyses revealed that this subtype is not tumor-specific and potentially reflects contamination by normal cells.
Many other genetic alterations have been described in glioblastoma, and the majority of them are clustered in two pathways, the RB and the PI3K/AKT. 68–78% and 88% of Glioblastomas have alterations in these pathways, respectively.
Another important alteration is methylation of MGMT, a "suicide" DNA repair enzyme. Methylation impairs DNA transcription and expression of the MGMT gene. Since the MGMT enzyme can repair only one DNA alkylation due to its suicide repair mechanism, reserve capacity is low and methylation of the MGMT gene promoter greatly affects DNA-repair capacity. MGMT methylation is associated with an improved response to treatment with DNA-damaging chemotherapeutics, such as temozolomide.
Studies using genome-wide profiling have revealed glioblastomas to have a remarkable genetic variety.
At least three distinct paths in the development of Glioblastomas have been identified with the aid of molecular investigations.
The first pathway involves the amplification and mutational activation of receptor tyrosine kinase (RTK) genes, leading to the dysregulation of growth factor signaling. Epidermal growth factor (EGF), vascular endothelial growth factor (VEGF), and platelet-derived growth factor (PDGF) are all recognized by transmembrane proteins called RTKs. Additionally, they can function as receptors for hormones, cytokines, and other signaling pathways.
The second method involves activating the intracellular signaling system known as phosphatidylinositol-3-OH kinase (PI3K)/AKT/mTOR, which is crucial for controlling cell survival.
The third pathway is defined by p53 and retinoblastoma (Rb) tumor suppressor pathway inactivation.
Cancer stem cells
Glioblastoma cells with properties similar to progenitor cells (glioblastoma cancer stem cells) have been found in glioblastomas. Their presence, coupled with the glioblastoma's diffuse nature results in difficulty in removing them completely by surgery, and is therefore believed to be the possible cause behind resistance to conventional treatments, and the high recurrence rate. Glioblastoma cancer stem cells share some resemblance with neural progenitor cells, both expressing the surface receptor CD133. CD44 can also be used as a cancer stem cell marker in a subset of glioblastoma tumour cells. Glioblastoma cancer stem cells appear to exhibit enhanced resistance to radiotherapy and chemotherapy mediated, at least in part, by up-regulation of the DNA damage response.
Metabolism
The IDH1 gene encodes the enzyme isocitrate dehydrogenase 1 and is not mutated in glioblastoma. As such, these tumors behave more aggressively compared to IDH1-mutated astrocytomas.
Ion channels
Furthermore, GBM exhibits numerous alterations in genes that encode for ion channels, including upregulation of gBK potassium channels and ClC-3 chloride channels. By upregulating these ion channels, glioblastoma tumor cells are hypothesized to facilitate increased ion movement over the cell membrane, thereby increasing H2O movement through osmosis, which aids glioblastoma cells in changing cellular volume very rapidly. This is helpful in their extremely aggressive invasive behavior because quick adaptations in cellular volume can facilitate movement through the sinuous extracellular matrix of the brain.
MicroRNA
As of 2012, RNA interference, usually microRNA, was under investigation in tissue culture, pathology specimens, and preclinical animal models of glioblastoma. Additionally, experimental observations suggest that microRNA-451 is a key regulator of LKB1/AMPK signaling in cultured glioma cells and that miRNA clustering controls epigenetic pathways in the disease.
Tumor vasculature
GBM is characterized by abnormal vessels that present disrupted morphology and functionality. The high permeability and poor perfusion of the vasculature result in a disorganized blood flow within the tumor and can lead to increased hypoxia, which in turn facilitates cancer progression by promoting processes such as immunosuppression.
Diagnosis
When viewed with MRI, glioblastomas often appear as ring-enhancing lesions. The appearance is not specific, however, as other lesions such as abscess, metastasis, tumefactive multiple sclerosis, and other entities may have a similar appearance. Definitive diagnosis of a suspected GBM on CT or MRI requires a stereotactic biopsy or a craniotomy with tumor resection and pathologic confirmation. Because the tumor grade is based upon the most malignant portion of the tumor, biopsy or subtotal tumor resection can result in undergrading of the lesion. Imaging of tumor blood flow using perfusion MRI and measuring tumor metabolite concentration with MR spectroscopy may add diagnostic value to standard MRI in select cases by showing increased relative cerebral blood volume and increased choline peak, respectively, but pathology remains the gold standard for diagnosis and molecular characterization.
Distinguishing glioblastoma from high-grade astrocytoma is important. Glioblastomas occur spontaneously (de novo) and have not progressed from a lower-grade glioma, as high-grade astrocytomas have; they also have a worse prognosis and a different tumor biology, and may have a different response to therapy, which makes this a critical evaluation to determine patient prognosis and therapy. Astrocytomas carry a mutation in IDH1 or IDH2, whereas this mutation is not present in glioblastoma. Thus, IDH1 and IDH2 mutations are a useful tool to distinguish glioblastomas from astrocytomas, since histopathologically they are similar and the distinction without molecular biomarkers is unreliable. IDH-wildtype glioblastomas usually have lower OLIG2 expression compared with IDH-mutant lower-grade astrocytomas. In patients aged over 55 years with a histologically typical glioblastoma, without a pre-existing lower-grade glioma, with a non-midline tumor location and with retained nuclear ATRX expression, immunohistochemical negativity for IDH1 R132H suffices for the classification as IDH-wildtype glioblastoma. In all other instances of diffuse gliomas, a lack of IDH1 R132H immunopositivity should be followed by IDH1 and IDH2 DNA sequencing to detect or exclude the presence of non-canonical mutations. IDH-wildtype diffuse astrocytic gliomas without microvascular proliferation or necrosis should be tested for EGFR amplification, TERT promoter mutation and a +7/–10 cytogenetic signature as molecular characteristics of IDH-wildtype glioblastomas.
Prevention
There are no known methods to prevent glioblastoma. As is the case for most gliomas, and unlike some other forms of cancer, it occurs without previous warning and there are no known ways to prevent it.
Treatment
Treating glioblastoma is difficult due to several complicating factors:
The tumor cells are resistant to conventional therapies.
The brain is susceptible to damage from conventional therapy.
The brain has a limited capacity to repair itself.
Many drugs cannot cross the blood–brain barrier to act on the tumor.
Treatment of primary brain tumors consists of palliative (symptomatic) care and therapies intended to improve survival.
Symptomatic therapy
Supportive treatment focuses on relieving symptoms and improving the patient's neurologic function. The primary supportive agents are anticonvulsants and corticosteroids.
Historically, around 90% of patients with glioblastoma underwent anticonvulsant treatment, although only an estimated 40% of patients required this treatment. Neurosurgeons have recommended that anticonvulsants not be administered prophylactically, but rather only once a seizure has occurred. Those receiving phenytoin concurrent with radiation may have serious skin reactions such as erythema multiforme and Stevens–Johnson syndrome.
Corticosteroids, usually dexamethasone, can reduce peritumoral edema (through rearrangement of the blood–brain barrier), diminishing mass effect and lowering intracranial pressure, with a decrease in headache or drowsiness.
Surgery
Surgery is the first stage of treatment of glioblastoma. An average GBM tumor contains 10¹¹ cells, which is on average reduced to 10⁹ cells after surgery (a reduction of 99%). Benefits of surgery include resection for a pathological diagnosis, alleviation of symptoms related to mass effect, and potentially removing disease before secondary resistance to radiotherapy and chemotherapy occurs.
The greater the extent of tumor removal, the better. In retrospective analyses, removal of 98% or more of the tumor has been associated with significantly longer, healthier survival than removal of less than 98% of the tumor. The chances of near-complete initial removal of the tumor may be increased if the surgery is guided by a fluorescent dye known as 5-aminolevulinic acid. GBM cells are widely infiltrative through the brain at diagnosis, and despite a "total resection" of all obvious tumor, most people with GBM later develop recurrent tumors either near the original site or at more distant locations within the brain. Other modalities, typically radiation and chemotherapy, are used after surgery in an effort to suppress and slow recurrent disease through damaging the DNA of rapidly proliferating GBM cells.
Between 60 and 85% of glioblastoma patients report cancer-related cognitive impairment following surgery, that is, problems with executive functioning, verbal fluency, attention, and processing speed. These symptoms may be managed with cognitive behavioral therapy, physical exercise, yoga, and meditation.
Radiotherapy
Subsequent to surgery, radiotherapy becomes the mainstay of treatment for people with glioblastoma. It is typically performed together with temozolomide. A pivotal clinical trial carried out in the early 1970s showed that among 303 GBM patients randomized to radiation or best medical therapy, those who received radiation had a median survival more than double those who did not. Subsequent clinical research has attempted to build on the backbone of surgery followed by radiation. Whole-brain radiotherapy does not improve outcomes when compared to the more precise and targeted three-dimensional conformal radiotherapy. A total radiation dose of 60–65 Gy has been found to be optimal for treatment.
GBM tumors are well known to contain zones of tissue exhibiting hypoxia, which are highly resistant to radiotherapy. Various approaches to chemotherapy radiosensitizers have been pursued with limited success. Newer research approaches have included preclinical and clinical investigations into the use of oxygen diffusion-enhancing compounds, such as trans sodium crocetinate, as radiosensitizers, and a clinical trial was underway. Boron neutron capture therapy has been tested as an alternative treatment for glioblastoma, but is not in common use.
Chemotherapy
Most studies show no benefit from the addition of chemotherapy. However, a large clinical trial of 575 participants randomized to standard radiation versus radiation plus temozolomide chemotherapy showed that the group receiving temozolomide survived a median of 14.6 months as opposed to 12.1 months for the group receiving radiation alone. This treatment regimen is now standard for most cases of glioblastoma where the person is not enrolled in a clinical trial. Temozolomide seems to work by sensitizing the tumor cells to radiation, and appears more effective for tumors with MGMT promoter methylation. High doses of temozolomide in high-grade gliomas yield low toxicity, but the results are comparable to those of standard doses. Antiangiogenic therapy with medications such as bevacizumab controls symptoms but does not appear to affect overall survival in those with glioblastoma. A 2018 systematic review found that the overall benefit of anti-angiogenic therapies was unclear. In elderly people with newly diagnosed glioblastoma who are reasonably fit, concurrent and adjuvant chemoradiotherapy gives the best overall survival but is associated with a greater risk of haematological adverse events than radiotherapy alone.
Immunotherapy
Phase 3 clinical trials of immunotherapy treatments for glioblastoma have largely failed.
Other procedures
Alternating electric field therapy is an FDA-approved therapy for newly diagnosed and recurrent glioblastoma. In 2015, initial results from a phase-III randomized clinical trial of alternating electric field therapy plus temozolomide in newly diagnosed glioblastoma reported a three-month improvement in progression-free survival, and a five-month improvement in overall survival compared to temozolomide therapy alone, representing the first large trial in a decade to show a survival improvement in this setting. Despite these results, the efficacy of this approach remains controversial among medical experts. However, increasing understanding of the mechanistic basis through which alternating electric field therapy exerts anti-cancer effects and results from ongoing phase-III clinical trials in extracranial cancers may help facilitate increased clinical acceptance to treat glioblastoma in the future.
Prognosis
The most common length of survival following diagnosis is 10 to 13 months (although recent research points to a median survival of 15 months), with fewer than 1–3% of people surviving longer than five years. In the United States between 2012 and 2016, five-year survival was 6.8%. Without treatment, survival is typically three months. Complete cures are extremely rare, but have been reported.
Increasing age (> 60 years) carries a worse prognostic risk. Death is usually due to widespread tumor infiltration with cerebral edema and increased intracranial pressure.
A good initial Karnofsky performance score (KPS) and MGMT methylation are associated with longer survival. A DNA test can be conducted on glioblastomas to determine whether or not the promoter of the MGMT gene is methylated. Patients with a methylated MGMT promoter have longer survival than those with an unmethylated MGMT promoter, due in part to increased sensitivity to temozolomide.
Long-term benefits have also been associated with those patients who receive surgery, radiotherapy, and temozolomide chemotherapy. However, much remains unknown about why some patients survive longer with glioblastoma. Age under 50 is linked to longer survival in GBM, as are 98%+ resection, use of temozolomide chemotherapy, and better KPS. A recent study confirms that younger age is associated with a much better prognosis, with a small fraction of patients under 40 years of age achieving a population-based cure. Cure is thought to occur when a person's risk of death returns to that of the normal population, and in GBM, this is thought to occur after 10 years.
UCLA Neuro-oncology publishes real-time survival data for patients with this diagnosis.
According to a 2003 study, GBM prognosis can be divided into three subgroups dependent on KPS, the age of the patient, and treatment.
Epidemiology
About three per 100,000 people develop the disease a year, although regional frequency may be much higher. The frequency in England doubled between 1995 and 2015.
It is the second-most common central nervous system tumor after meningioma. It occurs more commonly in males than females. Although the median age at diagnosis is 64, in 2014, the broad category of brain cancers was second only to leukemia in people in the United States under 20 years of age.
History
The term glioblastoma multiforme was introduced in 1926 by Percival Bailey and Harvey Cushing, based on the idea that the tumor originates from primitive precursors of glial cells (glioblasts), and the highly variable appearance due to the presence of necrosis, hemorrhage, and cysts (multiform).
Research
Gene therapy
Gene therapy has been explored as a method to treat glioblastoma, and while animal models and early-phase clinical trials have been successful, as of 2017, all gene-therapy drugs that had been tested in phase-III clinical trials for glioblastoma had failed. Scientists have developed core–shell nanostructured long persistent luminescence nanoparticles, LPLNP-PPT (PPT refers to a polyetherimide, PEG and trans-activator of transcription coating), for effective gene delivery and tracking, with positive results. The particles deliver TRAIL, the human tumor necrosis factor-related apoptosis-inducing ligand, encoded to induce apoptosis of cancer cells, more specifically glioblastomas. Although this study was still in clinical trials in 2017, it has shown diagnostic and therapeutic functionalities, and has raised great interest for clinical applications in stem-cell-based therapy.
Other gene therapy approaches have also been explored in the context of glioblastoma, including suicide gene therapy. Suicide gene therapy is a two-step approach: a gene encoding a foreign enzyme is delivered to the cancer cells and then activated with a prodrug, generating a toxic product in the cancer cells that induces cell death. This approach has had success in animal models and small clinical studies but could not show a survival benefit in larger clinical studies. Using newer, more efficient delivery vectors and suicide gene–prodrug systems could improve the clinical benefit of these types of therapies.
Oncolytic virotherapy
Oncolytic virotherapy is an emerging novel treatment that is under investigation both at preclinical and clinical stages. Several viruses, including herpes simplex virus, adenovirus, poliovirus, and reovirus, are currently being tested in phase I and II clinical trials for glioblastoma therapy and have been shown to improve overall survival.
Intranasal drug delivery
Direct nose-to-brain drug delivery is being explored as a means to achieve higher, and hopefully more effective, drug concentrations in the brain. A clinical phase-I/II study with glioblastoma patients in Brazil investigated the natural compound perillyl alcohol for intranasal delivery as an aerosol. The results were encouraging and, as of 2016, a similar trial has been initiated in the United States.
See also
Adegramotide
Asunercept
Glioblastoma Foundation
Lomustine
List of people with brain tumors
References
External links
Information about glioblastoma from the American Brain Tumor Association
Aging-associated diseases
Brain tumor
Cancer
Oncology
Wikipedia medicine articles ready to translate | Glioblastoma | [
"Biology"
] | 5,572 | [
"Senescence",
"Aging-associated diseases"
] |
1,023,624 | https://en.wikipedia.org/wiki/Volumetric%20flask | A volumetric flask (measuring flask or graduated flask) is a piece of laboratory apparatus, a type of laboratory flask, calibrated to contain a precise volume at a certain temperature. Volumetric flasks are used for precise dilutions and preparation of standard solutions. These flasks are usually pear-shaped, with a flat bottom, and made of glass or plastic. The flask's mouth is either furnished with a plastic snap/screw cap or fitted with a joint to accommodate a PTFE or glass stopper. The neck of volumetric flasks is elongated and narrow with an etched ring graduation marking. The marking indicates the volume of liquid contained when filled up to that point. The marking is typically calibrated "to contain" (marked "TC" or "IN") at 20 °C and indicated correspondingly on a label. The flask's label also indicates the nominal volume, tolerance, precision class, relevant manufacturing standard and the manufacturer's logo. Volumetric flasks are of various sizes, containing from a fraction of a milliliter to hundreds of liters of liquid.
Classes
Calibration and toleration standards for volumetric flasks are defined in the following standard specifications and practices: ASTM E288, E542, E694, ISO 1042, and GOST 1770-74. According to these specifications, volumetric flasks come in two different classes. The higher standard flasks (Class A, Class 1, USP or equivalent depending on the country) are made with a more accurately placed graduation mark, and have a unique serial number for traceability. Where this is not required, a lower standard (Class B or equivalent) is used for qualitative or educational work.
Modifications
Volumetric flasks are generally colourless but may be amber-coloured for the handling of light-sensitive compounds such as silver nitrate or vitamin A.
A modification of the volumetric flask exists for dealing with large quantities of solids that are to be transferred into a volumetric vessel for dissolution. Such a flask has a wide mouth and is known as a Kohlrausch volumetric flask. This kind of volumetric flask is commonly used in analysis of the sugar content in sugar beets.
While conventional volumetric flasks have a single mark, industrial volumetric tests in analytical chemistry and food chemistry may employ specialized volumetric flasks with multiple marks to combine several accurately measured volumes.
A highly specialized kind of volumetric flask is the Le Chatelier flask, used with the volumetric procedure in specific gravity determination.
See also
Graduated cylinder
Babcock bottle
Burette
References
Further reading
Laboratory glassware
Volumetric instruments | Volumetric flask | [
"Technology",
"Engineering"
] | 562 | [
"Volumetric instruments",
"Measuring instruments"
] |
1,023,906 | https://en.wikipedia.org/wiki/Phytoalexin | Phytoalexins are antimicrobial substances, some of which are antioxidative as well. They are defined not by their having any particular chemical structure or character, but by the fact that they are defensively synthesized de novo by plants that produce the compounds rapidly at sites of pathogen infection. In general phytoalexins are broad spectrum inhibitors; they are chemically diverse, and different chemical classes of compounds are characteristic of particular plant taxa. Phytoalexins tend to fall into several chemical classes, including terpenoids, glycosteroids, and alkaloids; however, the term applies to any phytochemicals that are induced by microbial infection.
Function
Phytoalexins are produced in plants to act as toxins to the attacking organism. They may puncture the cell wall, delay maturation, disrupt metabolism or prevent reproduction of the pathogen in question. Their importance in plant defense is indicated by an increase in susceptibility of plant tissue to infection when phytoalexin biosynthesis is inhibited. Mutants incapable of phytoalexin production exhibit more extensive pathogen colonization as compared to wild types. As such, host-specific pathogens capable of degrading phytoalexins are more virulent than those unable to do so.
When a plant cell recognizes particles from damaged cells or particles from the pathogen, the plant launches a two-pronged resistance: a general short-term response and a delayed long-term specific response.
As part of the induced resistance, the short-term response, the plant deploys reactive oxygen species such as superoxide and hydrogen peroxide to kill invading cells. In pathogen interactions, the common short-term response is the hypersensitive response, in which cells surrounding the site of infection are signaled to undergo apoptosis, or programmed cell death, in order to prevent the spread of the pathogen to the rest of the plant.
Long-term resistance, or systemic acquired resistance (SAR), involves communication of the damaged tissue with the rest of the plant using plant hormones such as jasmonic acid, ethylene, abscisic acid, or salicylic acid. The reception of the signal leads to global changes within the plant, which induce expression of genes that protect from further pathogen intrusion, including enzymes involved in the production of phytoalexins. Often, if jasmonates or ethylene (both gaseous hormones) are released from the wounded tissue, neighboring plants also manufacture phytoalexins in response. For herbivores, common vectors for plant diseases, these and other wound response aromatics seem to act as a warning that the plant is no longer edible. Also, in accordance with the old adage, "an enemy of my enemy is my friend", the aromatics may alert natural enemies of the plant invaders to the presence thereof.
Recent research
Allixin (3-hydroxy-5-methoxy-6-methyl-2-pentyl-4H-pyran-4-one), a non-sulfur-containing compound with a γ-pyrone skeleton, was the first compound isolated from garlic as a phytoalexin, a product induced in plants by continuous stress. This compound has been shown to have unique biological properties, such as anti-oxidative effects, anti-microbial effects, anti-tumor-promoting effects, inhibition of aflatoxin B2 binding to DNA, and neurotrophic effects. Allixin showed an anti-tumor-promoting effect in vivo, inhibiting skin tumor formation by TPA in DMBA-initiated mice. Allixin and/or its analogs may therefore be expected to be useful compounds for cancer prevention or as chemotherapy agents for other diseases.
Role of natural phenols in the plant defense against fungal pathogens
Polyphenols, especially isoflavonoids and related substances, play a role in the plant defense against fungal and other microbial pathogens.
In Vitis vinifera grape, trans-resveratrol is a phytoalexin produced against the growth of fungal pathogens such as Botrytis cinerea and delta-viniferin is another grapevine phytoalexin produced following fungal infection by Plasmopara viticola. Pinosylvin is a pre-infectious stilbenoid toxin (i.e. synthesized prior to infection), contrary to phytoalexins which are synthesized during infection. It is present in the heartwood of Pinaceae. It is a fungitoxin protecting the wood from fungal infection.
Sakuranetin is a flavanone, a type of flavonoid. It can be found in Polymnia fruticosa and rice, where it acts as a phytoalexin against spore germination of Pyricularia oryzae. In Sorghum, the SbF3'H2 gene, encoding a flavonoid 3'-hydroxylase, seems to be expressed in pathogen-specific 3-deoxyanthocyanidin phytoalexin synthesis, for example in Sorghum-Colletotrichum interactions.
6-Methoxymellein is a dihydroisocoumarin and a phytoalexin induced in carrot slices by UV-C, that allows resistance to Botrytis cinerea and other microorganisms.
Danielone is a phytoalexin found in the papaya fruit. This compound showed high antifungal activity against Colletotrichum gloesporioides, a pathogenic fungus of papaya.
Stilbenes are produced in Eucalyptus sideroxylon in case of pathogen attacks. Such compounds can be implied in the hypersensitive response of plants. High levels of polyphenols in some woods can explain their natural preservation against rot.
Avenanthramides are phytoalexins produced by Avena sativa in its response to Puccinia coronata var. avenae f. sp. avenae, the oat crown rust. (Avenanthramides were formerly called avenalumins.)
See also
Alexin (humoral immunity)
Allicin
Garlic
Plant defense against herbivory
Pterostilbene
Salvestrol
References
Further reading
External links
Signals Regulating Multiple Responses to Wounding and Herbivores Guy L. de Bruxelles and Michael R Roberts
The Myriad Plant Responses to Herbivores Linda L. Walling
The Chemical Defenses of Higher Plants Gerald A. Rosenthal
Induced Systemic Resistance (ISR) Against Pathogens in the Context of Induced Plant Defences Martin Heil
Notes from the Underground Donald R. Strong and Donald A. Phillips
Relationships Among Plants, Insect Herbivores, Pathogens, and Parasitoids Expressed by Secondary Metabolites Loretta L. Mannix | Phytoalexin | [
"Chemistry"
] | 1,422 | [
"Phytoalexins",
"Chemical ecology"
] |
1,024,323 | https://en.wikipedia.org/wiki/Pyrgeometer | A pyrgeometer is a device that measures near-surface infra-red (IR) radiation, approximately from 4.5 μm to 100 μm on the electromagnetic spectrum (thereby excluding solar radiation).
It measures the resistance/voltage changes in a material that is sensitive to the net energy transfer by radiation that occurs between itself and its surroundings (which can be either in or out). By also measuring its own temperature and making some assumptions about the nature of its surroundings it can infer a temperature of the local atmosphere with which it is exchanging radiation.
Since the mean free path of IR radiation in the atmosphere is ~25 meters, this device typically measures IR flux in the nearest 25 meter layer.
Pyrgeometer components
A pyrgeometer consists of the following major components:
A thermopile sensor which is sensitive to radiation in a broad range from 200 nm to 100 μm
A silicon dome or window with a solar blind filter coating. It has a transmittance between 4.5 μm and 50 μm that eliminates solar shortwave radiation.
A temperature sensor to measure the body temperature of the instrument.
A sun shield to minimize heating of the instrument due to solar radiation.
Measurement of long wave downward radiation
The atmosphere and the pyrgeometer (in effect its sensor surface) exchange long wave IR radiation. This results in a net radiation balance according to:

E_net = E_in − E_out

Where (in SI units):

E_net = net radiation at sensor surface [W/m²]
E_in = long-wave radiation received from the atmosphere [W/m²]
E_out = long-wave radiation emitted by the sensor surface [W/m²]
The pyrgeometer's thermopile detects the net radiation balance between the incoming and outgoing long wave radiation flux and converts it to a voltage according to the equation below:

E_net = U_emf / S

Where (in SI units):

E_net = net radiation at sensor surface [W/m²]
U_emf = thermopile output voltage [V]
S = sensitivity/calibration factor of instrument [V/(W/m²)]

The value for S is determined during calibration of the instrument. The calibration is performed at the production factory with a reference instrument traceable to a regional calibration center.
To derive the absolute downward long wave flux, the temperature of the pyrgeometer has to be taken into account. It is measured using a temperature sensor inside the instrument, near the cold junctions of the thermopile. The pyrgeometer is considered to approximate a black body. Due to this it emits long wave radiation according to:

E_out = σT⁴

Where (in SI units):

E_out = long-wave radiation emitted by the pyrgeometer sensor surface [W/m²]
σ = Stefan–Boltzmann constant [W/(m²·K⁴)]
T = absolute temperature of pyrgeometer detector [K]
From the calculations above, the incoming long wave radiation can be derived. This is usually done by rearranging the equations above to yield the so-called pyrgeometer equation by Albrecht and Cox:

E_in = U_emf / S + σT⁴

where all the variables have the same meaning as before.

As a result, the detected voltage and instrument temperature yield the total long wave downward radiation.
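Put together, the pyrgeometer equation is a one-line computation from the two measured quantities. A minimal sketch in Python (the sensitivity and readings below are invented illustrative values, not from any particular instrument):

```python
# Minimal sketch of the pyrgeometer equation E_in = U_emf / S + sigma * T^4.
# The sensitivity and readings are invented illustrative values.
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant [W/(m^2*K^4)]

def longwave_in(u_emf: float, sensitivity: float, t_body: float) -> float:
    """Downward long-wave flux E_in [W/m^2].

    u_emf       -- thermopile output voltage U_emf [V]
    sensitivity -- calibration factor S [V/(W/m^2)]
    t_body      -- instrument body temperature T [K]
    """
    e_net = u_emf / sensitivity     # net radiation at the sensor surface
    e_out = SIGMA * t_body**4       # emission of the (black-body) detector
    return e_net + e_out

# A hypothetical reading: S ~ 10 uV per W/m^2, a slightly negative voltage
# (sky colder than the instrument), and a body temperature of 290 K.
print(longwave_in(-4.0e-4, 10e-6, 290.0))  # ~361 W/m^2
```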
Usage
Pyrgeometers are frequently used in meteorology and climatology studies. The atmospheric long-wave downward radiation is of interest for research into long-term climate change.
The signals are generally detected using a data logging system, capable of taking high resolution samples in the millivolt range.
See also
Pyranometer
Radiometer
References
Electromagnetic radiation meters
Radiometry | Pyrgeometer | [
"Physics",
"Technology",
"Engineering"
] | 714 | [
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments",
"Radiometry"
] |
1,024,614 | https://en.wikipedia.org/wiki/Taylor%E2%80%93Proudman%20theorem | In fluid mechanics, the Taylor–Proudman theorem (after Geoffrey Ingram Taylor and Joseph Proudman) states that when a solid body is moved slowly within a fluid that is steadily rotated with a high angular velocity , the fluid velocity will be uniform along any line parallel to the axis of rotation. must be large compared to the movement of the solid body in order to make the Coriolis force large compared to the acceleration terms.
Derivation
The Navier–Stokes equations for steady flow, with zero viscosity and a body force corresponding to the Coriolis force, are

ρ(u·∇)u = F − ∇p,

where u is the fluid velocity, ρ is the fluid density, p the pressure, and F = −2ρΩ×u is the Coriolis body force. If we assume that any additional body force derives from a scalar potential (and so can be absorbed into the pressure gradient), that the advective term on the left may be neglected (reasonable if the Rossby number is much less than unity), and that the flow is incompressible (density is constant), the equations become:

2ρΩ×u = −∇p,

where Ω is the angular velocity vector. If the curl of this equation is taken, the result is the Taylor–Proudman theorem:

(Ω·∇)u = 0.
To derive this, one needs the vector identities

∇×(A×B) = A(∇·B) − B(∇·A) + (B·∇)A − (A·∇)B

and

∇×(∇φ) = 0

(because the curl of the gradient is always equal to zero).

Note that ∇·Ω = 0 is also needed (angular velocity is divergence-free), as is the incompressibility condition ∇·u = 0.
The vector form of the Taylor–Proudman theorem is perhaps better understood by expanding the dot product:

Ω_x ∂u/∂x + Ω_y ∂u/∂y + Ω_z ∂u/∂z = 0.

In coordinates for which Ω_x = Ω_y = 0, the equations reduce to

∂u/∂z = 0,

if Ω_z ≠ 0. Thus, all three components of the velocity vector u are uniform along any line parallel to the z-axis.
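The vector algebra in this derivation can be checked symbolically. The sketch below (assuming SymPy's vector module; field and symbol names are illustrative) verifies the identity ∇×(Ω×u) = Ω(∇·u) − (Ω·∇)u for a constant rotation vector Ω, which is the step that turns the curl of the Coriolis term into the theorem once ∇·u = 0 is imposed:

```python
# Symbolic check (SymPy) of the vector identity used above: for a constant
# rotation vector Omega and any smooth field u,
#   curl(Omega x u) = Omega*(div u) - (Omega . grad) u.
from sympy import Function, symbols, simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
Ox, Oy, Oz = symbols('Omega_x Omega_y Omega_z', real=True)  # constants

u = (Function('u1')(x, y, z) * N.i
     + Function('u2')(x, y, z) * N.j
     + Function('u3')(x, y, z) * N.k)
Omega = Ox * N.i + Oy * N.j + Oz * N.k

lhs = curl(Omega.cross(u))
comps = [u.dot(e) for e in (N.i, N.j, N.k)]
adv = sum(((Ox * c.diff(x) + Oy * c.diff(y) + Oz * c.diff(z)) * e
           for c, e in zip(comps, (N.i, N.j, N.k))), start=0 * N.i)
rhs = Omega * divergence(u) - adv   # (Omega . grad) u built component-wise

print(all(simplify((lhs - rhs).dot(e)) == 0 for e in (N.i, N.j, N.k)))  # True
```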
Taylor column
The Taylor column is an imaginary cylinder projected above and below a real cylinder that has been placed parallel to the rotation axis (anywhere in the flow, not necessarily in the center). The flow will curve around the imaginary cylinders just like the real one, due to the Taylor–Proudman theorem, which states that the flow in a rotating, homogeneous, inviscid fluid is 2-dimensional in the plane orthogonal to the rotation axis, and thus there is no variation in the flow along the rotation axis, often taken to be the z axis.
The Taylor column is a simplified, experimentally observed effect of what transpires in the Earth's atmosphere and oceans.
History
The result known as the Taylor–Proudman theorem was first derived by Sydney Samuel Hough (1870–1923), a mathematician at Cambridge University, in 1897. Proudman published another derivation in 1916 and Taylor in 1917; the effect was then demonstrated experimentally by Taylor in 1923.
References
Eponymous theorems of physics
Fluid dynamics | Taylor–Proudman theorem | [
"Physics",
"Chemistry",
"Engineering"
] | 501 | [
"Equations of physics",
"Chemical engineering",
"Eponymous theorems of physics",
"Piping",
"Physics theorems",
"Fluid dynamics"
] |
1,024,667 | https://en.wikipedia.org/wiki/Ensemble%20%28fluid%20mechanics%29 | In continuum mechanics, an ensemble is an imaginary collection of notionally identical experiments.
Each member of the ensemble will have nominally identical boundary conditions and fluid properties. If the flow is turbulent, the details of the fluid motion will differ from member to member because the experimental setup will be microscopically different, and these slight differences become magnified as time progresses. Members of an ensemble are, by definition, statistically independent of one another. The concept of ensemble is useful in thought experiments and to improve theoretical understanding of turbulence.
A good image to have in mind is a typical fluid mechanics experiment such as a mixing box. Imagine a million mixing boxes, distributed over the earth; at a predetermined time, a million fluid mechanics engineers each start one experiment, and monitor the flow. Each engineer then sends his or her results to a central database. Such a process would give results that are close to the theoretical ideal of an ensemble.
It is common to speak of ensemble average or ensemble averaging when considering a fluid mechanical ensemble.
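As a toy numerical illustration (plain NumPy; every number here is invented), the sketch below builds an ensemble of nominally identical noisy "experiments" and contrasts the ensemble average at each instant, which converges to the underlying mean behaviour, with a single realization, which does not:

```python
# Toy ensemble average: many nominally identical "experiments" differing only
# in random microscopic detail. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps = 10_000, 200
t = np.linspace(0.0, 5.0, n_steps)

# Each member: the same mean decay plus member-specific noise.
signals = np.exp(-t) + 0.3 * rng.standard_normal((n_members, n_steps))

ensemble_mean = signals.mean(axis=0)   # average across members at each time
single_member = signals[0]             # one individual "experiment"

print(np.max(np.abs(ensemble_mean - np.exp(-t))))  # ~1e-2: noise averaged out
print(np.max(np.abs(single_member - np.exp(-t))))  # ~1: noise still present
```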
For a completely unrelated type of averaging, see Reynolds-averaged Navier–Stokes equations (the two types of averaging are often confused).
See also
Statistical ensemble (mathematical physics)
References
Continuum mechanics | Ensemble (fluid mechanics) | [
"Physics"
] | 245 | [
"Classical mechanics",
"Continuum mechanics"
] |
1,024,815 | https://en.wikipedia.org/wiki/CYP2D6 | Cytochrome P450 2D6 (CYP2D6) is an enzyme that in humans is encoded by the CYP2D6 gene. CYP2D6 is primarily expressed in the liver. It is also highly expressed in areas of the central nervous system, including the substantia nigra.
CYP2D6, a member of the cytochrome P450 mixed-function oxidase system, is one of the most important enzymes involved in the metabolism of xenobiotics in the body. In particular, CYP2D6 is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, via the addition or removal of certain functional groups – specifically, hydroxylation, demethylation, and dealkylation. CYP2D6 also activates some prodrugs. This enzyme also metabolizes several endogenous substances, such as N,N-Dimethyltryptamine, hydroxytryptamines, neurosteroids, and both m-tyramine and p-tyramine which CYP2D6 metabolizes into dopamine in the brain and liver.
Considerable variation exists in the efficiency and amount of CYP2D6 enzyme produced between individuals. Hence, for drugs that are metabolized by CYP2D6 (that is, are CYP2D6 substrates), certain individuals will eliminate these drugs quickly (ultrarapid metabolizers) while others slowly (poor metabolizers). If a drug is metabolized too quickly, it may decrease the drug's efficacy, while if the drug is metabolized too slowly, toxicity may result. So, the dose of the drug may have to be adjusted to take into account the speed at which it is metabolized by CYP2D6. Individuals who exhibit an ultrarapid metabolizer phenotype metabolize prodrugs, such as codeine or tramadol, more rapidly, leading to higher than therapeutic levels. A case study of the death of an infant breastfed by an ultrarapid metabolizer mother taking codeine impacted postnatal pain relief clinical practices, but was later debunked. These drugs may also cause serious toxicity in ultrarapid metabolizer patients when used to treat other post-operative pain, such as after tonsillectomy. Other drugs may function as inhibitors of CYP2D6 activity or inducers of CYP2D6 enzyme expression, leading to decreased or increased CYP2D6 activity respectively. If such a drug is taken at the same time as a second drug that is a CYP2D6 substrate, the first drug may affect the elimination rate of the second through what is known as a drug-drug interaction.
Gene
The gene is located on chromosome 22q13.1, near two cytochrome P450 pseudogenes (CYP2D7P and CYP2D8P). Among them, CYP2D7P originated from CYP2D6 in the stem lineage of great apes and humans, while CYP2D8P originated from CYP2D6 in the stem lineage of catarrhines and New World monkeys. Alternatively spliced transcript variants encoding different isoforms have been found for this gene.
Genotype/phenotype variability
CYP2D6 shows the largest phenotypical variability among the CYPs, largely due to genetic polymorphism. The genotype accounts for normal, reduced, and non-existent CYP2D6 function in subjects. Pharmacogenomic tests are now available to identify patients with variations in the CYP2D6 allele and have been shown to have widespread use in clinical practice.
The CYP2D6 function in any particular subject may be described as one of the following:
poor metabolizer – little or no CYP2D6 function
intermediate metabolizers – metabolize drugs at a rate somewhere between the poor and extensive metabolizers
extensive metabolizer – normal CYP2D6 function
ultrarapid metabolizer – multiple copies of the CYP2D6 gene are expressed, so greater-than-normal CYP2D6 function occurs
A patient's CYP2D6 phenotype is often clinically determined via the administration of debrisoquine (a selective CYP2D6 substrate) and subsequent plasma concentration assay of the debrisoquine metabolite (4-hydroxydebrisoquine).
The type of CYP2D6 function of an individual may influence the person's response to different doses of drugs that CYP2D6 metabolizes. The nature of the effect on the drug response depends not only on the type of CYP2D6 function, but also on the extent to which processing of the drug by CYP2D6 results in a chemical that has an effect that is similar, stronger, or weaker than the original drug, or no effect at all. For example, if CYP2D6 converts a drug that has a strong effect into a substance that has a weaker effect, then poor metabolizers (weak CYP2D6 function) will have an exaggerated response to the drug and stronger side-effects; conversely, if CYP2D6 converts a different drug into a substance that has a greater effect than its parent chemical, then ultrarapid metabolizers (strong CYP2D6 function) will have an exaggerated response to the drug and stronger side-effects. Information about how human genetic variation of CYP2D6 affects response to medications can be found in databases such as PharmGKB and the Clinical Pharmacogenetics Implementation Consortium (CPIC).
Genetic basis of variability
The variability in metabolism is due to multiple different polymorphisms of the CYP2D6 allele, located on chromosome 22. Subjects possessing certain allelic variants will show normal, decreased, or no CYP2D6 function, depending on the allele. Pharmacogenomic tests are now available to identify patients with variations in the CYP2D6 allele and have been shown to have widespread use in clinical practice. The current known alleles of CYP2D6 and their clinical function can be found in databases such as PharmVar.
Ethnic factors in variability
Ethnicity is a factor in the occurrence of CYP2D6 variability. Reduced function of the liver cytochrome CYP2D6 enzyme occurs in approximately 7–10% of white populations, and is lower in most other ethnic groups, such as Asians and African-Americans, at about 2% each. A complete lack of CYP2D6 enzyme activity, wherein the individual carries two copies of polymorphisms that result in no CYP2D6 activity at all, occurs in about 1–2% of the population. The occurrence of CYP2D6 ultrarapid metabolizers appears to be greater among Middle Eastern and North African populations. In Ethiopia, a particularly high percentage (30%) of the population are ultrarapid metabolizers. As a result, the analgesic codeine is banned in Ethiopia due to the high rate of adverse events associated with ultrarapid metabolism of codeine in this population.
Caucasians of European descent predominantly (around 71%) carry the functional group of CYP2D6 alleles, producing extensive metabolism, while functional alleles represent only around 50% of the allele frequency in populations of Asian descent.
This variability is accounted for by the differences in the prevalence of various CYP2D6 alleles among the populations: approximately 10% of whites are intermediate metabolizers, due to decreased CYP2D6 function, because they appear to carry one (heterozygous) non-functional CYP2D6*4 allele, while approximately 50% of Asians possess the decreased-function CYP2D6*10 allele.
Ligands
Following is a table of selected substrates, inducers and inhibitors of CYP2D6. Where classes of agents are listed, there may be exceptions within the class.
Inhibitors of CYP2D6 can be classified by their potency, such as:
Strong inhibitor being one that causes at least a 5-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or more than 80% decrease in clearance thereof.
Moderate inhibitor being one that causes at least a 2-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or 50-80% decrease in clearance thereof.
Weak inhibitor being one that causes at least a 1.25-fold but less than 2-fold increase in the plasma AUC values of sensitive substrates metabolized through CYP2D6, or 20-50% decrease in clearance thereof.
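These cut-offs amount to a simple decision rule on the fold-increase in a sensitive substrate's AUC. A minimal sketch in Python (the function name is hypothetical, and it deliberately ignores the alternative clearance-based criteria listed above):

```python
# Classify a CYP2D6 inhibitor by the fold-increase it causes in the plasma AUC
# of a sensitive substrate, using the thresholds listed above. Illustrative only;
# a full classification would also consider the decrease in clearance.
def inhibitor_potency(auc_fold_increase: float) -> str:
    if auc_fold_increase >= 5.0:
        return "strong"
    if auc_fold_increase >= 2.0:
        return "moderate"
    if auc_fold_increase >= 1.25:
        return "weak"
    return "not classified as an inhibitor"

print(inhibitor_potency(6.2))   # strong
print(inhibitor_potency(1.5))   # weak
```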
Dopamine biosynthesis
References
Further reading
External links
Flockhart Lab Cyp2D6 Substrates Page at IUPUI
PharmGKB: Annotated PGx Gene Information for CYP2D6
Pharmvar Gene:CYP2D6
2
EC 1.14.14
Amphetamine
Pharmacogenomics | CYP2D6 | [
"Chemistry"
] | 1,921 | [
"Pharmacology",
"Pharmacogenomics"
] |
1,026,522 | https://en.wikipedia.org/wiki/Boltzmann%20equation | The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element d³r) centered at the position r, and has momentum nearly equal to a given momentum vector p (thus occupying a very small region of momentum space d³p), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
Overview
The phase space and density function
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component p_x, p_y, p_z. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, p_x, p_y, p_z), and each coordinate is parameterized by time t. The small volume ("differential volume element") is written

d³r d³p = dx dy dz dp_x dp_y dp_z.

Since the probability of N molecules, which all have r and p within d³r d³p, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that

dN = f(r, p, t) d³r d³p

is the number of molecules which all have positions lying within a volume element d³r about r and momenta lying within a momentum space element d³p about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:

N = ∬ f(r, p, t) d³r d³p,

which is a 6-fold integral. While f is associated with a number of particles, the phase space is for one particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r₁, p₁ for particle 1, r₂, p₂ for particle 2, etc. up to r_N, p_N for particle N.

It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each; see below.
Principal statement
The general equation can then be written as

df/dt = (∂f/∂t)_force + (∂f/∂t)_diff + (∂f/∂t)_coll,

where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.

Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv.
The force and diffusion terms
Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment).

Suppose at time t some number of particles all have position r within element d³r and momentum p within d³p. If a force F instantly acts on each particle, then at time t + Δt their position will be r + Δr = r + (p/m)Δt and momentum p + Δp = p + FΔt. Then, in the absence of collisions, f must satisfy

f(r + (p/m)Δt, p + FΔt, t + Δt) d³r d³p = f(r, p, t) d³r d³p.

Note that we have used the fact that the phase space volume element d³r d³p is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume d³r d³p changes, so

f(r + (p/m)Δt, p + FΔt, t + Δt) d³r d³p − f(r, p, t) d³r d³p = ΔN_coll,     (1)

where ΔN_coll is the total change in the number of particles in the phase-space element due to collisions. Dividing (1) by d³r d³p Δt and taking the limits Δt → 0 and ΔN_coll → 0, we have

df/dt = (∂f/∂t)_coll.     (2)

The total differential of f is:

df = (∂f/∂t) dt + ∇f · dr + (∂f/∂p) · dp,     (3)

where ∇ is the gradient operator, · is the dot product,

∂f/∂p = x̂ ∂f/∂p_x + ŷ ∂f/∂p_y + ẑ ∂f/∂p_z

is a shorthand for the momentum analogue of ∇f, and x̂, ŷ, ẑ are Cartesian unit vectors.
Final statement
Dividing (3) by dt and substituting into (2) gives:

∂f/∂t + (p/m)·∇f + F·(∂f/∂p) = (∂f/∂t)_coll.

In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.

This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved for unless the collision term is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
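In the collisionless, force-free case the equation above reduces to pure advection in phase space, ∂f/∂t + (p/m)·∇f = 0, whose exact solution is the free-streaming law f(r, p, t) = f(r − (p/m)t, p, 0). The sketch below (NumPy; grid, initial blob and time are invented for illustration, with m = 1) evaluates this solution in one dimension and checks the characteristic shearing of a phase-space blob:

```python
# Free streaming: collisionless, force-free 1-D Boltzmann/Vlasov equation.
# Exact solution: f(x, v, t) = f(x - v*t, v, 0) with m = 1. Grid choices are
# illustrative only.
import numpy as np

x = np.linspace(-10.0, 10.0, 201)
v = np.linspace(-2.0, 2.0, 81)
X, V = np.meshgrid(x, v, indexing='ij')

f_init = lambda xx, vv: np.exp(-xx**2) * np.exp(-vv**2)  # localized blob

t = 3.0
f = f_init(X - V * t, V)     # exact free-streaming solution at time t

# The blob shears: <x v> grows linearly as t * <v^2>, while the centroid of
# this v-symmetric blob stays at x = 0.
norm = f.sum()
print((X * f).sum() / norm)       # ~0
print((X * V * f).sum() / norm)   # ~1.5 (= t * <v^2>, with <v^2> ~ 0.5)
```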
The collision term (Stosszahlansatz) and molecular chaos
Two-body collision term
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stoßzahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:

(∂f/∂t)_coll = ∬ g I(g, Ω) [f(r, p′_A, t) f(r, p′_B, t) − f(r, p_A, t) f(r, p_B, t)] dΩ d³p_B,

where p_A and p_B are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′_A and p′_B are the momenta after the collision,

g = |p_B − p_A| = |p′_B − p′_A|

is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle θ into the element of the solid angle dΩ, due to the collision.
Simplifications to the collision term
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:

∂f/∂t + (p/m)·∇f + F·(∂f/∂p) = ν(f₀ − f),

where ν is the molecular collision frequency, and f₀ is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation".
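In the spatially homogeneous, force-free case the BGK equation reduces to ∂f/∂t = ν(f₀ − f), so f relaxes exponentially toward the Maxwellian on the timescale 1/ν. A minimal numerical sketch (NumPy; the grid, initial distribution and parameters are invented for illustration):

```python
# BGK relaxation in the space-homogeneous, force-free case:
# df/dt = nu * (f0 - f)  =>  f(t) = f0 + (f(0) - f0) * exp(-nu * t).
# All parameters are illustrative; units with m = k_B = T = 1.
import numpy as np

v = np.linspace(-5.0, 5.0, 401)                   # 1-D velocity grid
f0 = np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)   # local Maxwellian

# Non-equilibrium start: two counter-streaming beams with the same density.
f = np.exp(-(v - 2.0)**2 / 0.5) + np.exp(-(v + 2.0)**2 / 0.5)
f *= f0.sum() / f.sum()                           # match particle density

nu, dt = 1.0, 0.01                                # collision frequency, time step
for _ in range(500):                              # integrate to t = 5 (explicit Euler)
    f += dt * nu * (f0 - f)

# After five collision times the deviation has shrunk by about exp(-5).
print(np.max(np.abs(f - f0)))   # ~3e-3, versus an initial deviation of ~0.4
```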
General equation (for a mixture)
For a mixture of chemical species labelled by indices i = 1, 2, 3, ..., n the equation for species i is

∂f_i/∂t + (p_i/m_i)·∇f_i + F·(∂f_i/∂p_i) = (∂f_i/∂t)_coll,

where f_i = f_i(r, p_i, t), and the collision term is

(∂f_i/∂t)_coll = Σ_{j=1}^{n} ∬ g_ij I_ij(g_ij, Ω) [f′_i f′_j − f_i f_j] dΩ d³p_j,

where f′ = f′(p′, t), the magnitude of the relative momenta is

g_ij = |p_j − p_i| = |p′_j − p′_i|,

and I_ij is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.
Applications and extensions
Conservation equations
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density n is given by

n = ∫ f d³p.

The average value of any function A is

⟨A⟩ = (1/n) ∫ A f d³p.

Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus x ↦ x_i and p ↦ p_i = m v_i, where v_i is the particle velocity vector. Define A(p_i) as some function of momentum p_i only, whose total value is conserved in a collision. Assume also that the force F_i is a function of position only, and that f is zero for p_i → ±∞. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as

∂/∂t (n⟨A⟩) + ∂/∂x_j (n⟨v_j A⟩) − n F_j ⟨∂A/∂p_j⟩ = 0,

where the last (collision) term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity v_i (and momentum p_i, as they are linearly dependent).
Zeroth moment
Letting A = m, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:

∂ρ/∂t + ∂/∂x_j (ρ V_j) = 0,

where ρ = m n is the mass density, and V_i = ⟨v_i⟩ is the average fluid velocity.
First moment
Letting A = p_i = m v_i, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:

∂/∂t (ρ V_i) + ∂/∂x_j (ρ V_i V_j + P_ij) − n F_i = 0,

where P_ij = ρ ⟨(v_i − V_i)(v_j − V_j)⟩ is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
Second moment
Letting A = p_i p_i / 2m = m v² / 2, the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:

∂/∂t (u + ½ ρ V_i V_i) + ∂/∂x_j (u V_j + ½ ρ V_i V_i V_j + J_q,j + P_ij V_i) − n F_j V_j = 0,

where u = ½ ρ ⟨(v_i − V_i)(v_i − V_i)⟩ is the kinetic thermal energy density, and J_q,j = ½ ρ ⟨(v_j − V_j)(v_k − V_k)(v_k − V_k)⟩ is the heat flux vector.
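The averages ⟨A⟩ that appear in these moment equations can be mimicked numerically: drawing particle velocities from a drifting Maxwellian and averaging functions of velocity over the samples reproduces the corresponding fluid quantities. A Monte Carlo sketch (NumPy; all parameters invented, with m = k_B = 1):

```python
# Monte Carlo moments of a drifting Maxwellian: sample velocities and estimate
# the bulk velocity (first moment) and mean kinetic energy (second moment).
# All parameters are invented; units with m = k_B = 1.
import numpy as np

rng = np.random.default_rng(1)
n_samples, m, T = 1_000_000, 1.0, 2.0
V_bulk = np.array([0.5, 0.0, 0.0])     # drift (average fluid) velocity

v = V_bulk + np.sqrt(T / m) * rng.standard_normal((n_samples, 3))

print(v.mean(axis=0))                          # ~[0.5, 0, 0]: recovers V
ke = 0.5 * m * (v**2).sum(axis=1).mean()       # <m v^2 / 2> per particle
print(ke)                                      # ~0.5*m*V^2 + 1.5*T = 3.125
```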
Hamiltonian mechanics
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as

L̂[f] = C[f],

where L̂ is the Liouville operator (there is an inconsistent definition between the Liouville operator as defined here and the one in the article linked) describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L̂ is

L̂_NR = ∂/∂t + (p/m)·∇ + F·(∂/∂p).
Quantum theory and violation of particle number conservation
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
General relativity and astronomy
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is

L̂_GR = p^α ∂/∂x^α − Γ^α_{βγ} p^β p^γ ∂/∂p^α,

where Γ^α_{βγ} is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant (x^i, p_i) phase space as opposed to fully contravariant (x^i, p^i) phase space.
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically the study of processes in the early universe often attempt to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.
Solving the equation
Exact solutions to the Boltzmann equations have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity - the result is in leading order identical with the semiclassical result.
Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.
Limitations and further uses of the Boltzmann equation
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without having a finite size. There exists a generalization of the Boltzmann equation that is called the Enskog equation. The collision term is modified in Enskog equations such that particles have a finite size, for example they can be modelled as spheres having a fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids, like liquids or dense gases, exhibit, besides the features mentioned above, more complex forms of collisions: there will be not only binary, but also ternary and higher-order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
See also
Vlasov equation
The Vlasov–Poisson equation
Landau kinetic equation
Fokker–Planck equation
Williams–Boltzmann equation
Derivation of Navier–Stokes equation from LBE
Derivation of Jeans equation from BE
Jeans's theorem
H-theorem
Notes
References
. Very inexpensive introduction to the modern framework (starting from a formal deduction from Liouville and the Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy (BBGKY) in which the Boltzmann equation is placed). Most statistical mechanics textbooks like Huang still treat the topic using Boltzmann's original arguments. To derive the equation, these books use a heuristic explanation that does not bring out the range of validity and the characteristic assumptions that distinguish Boltzmann's from other transport equations like Fokker–Planck or Landau equations.
External links
The Boltzmann Transport Equation by Franz Vesely
Boltzmann gaseous behaviors solved
Eponymous equations of physics
Partial differential equations
Statistical mechanics
Transport phenomena
Equation
1872 in science
1872 in Germany
Thermodynamic equations | Boltzmann equation | [
"Physics",
"Chemistry",
"Engineering"
] | 3,191 | [
"Transport phenomena",
"Physical phenomena",
"Thermodynamic equations",
"Equations of physics",
"Chemical engineering",
"Eponymous equations of physics",
"Thermodynamics",
"Statistical mechanics"
] |
1,026,606 | https://en.wikipedia.org/wiki/Sweet%20crude%20oil | Sweet crude oil is a type of petroleum. The New York Mercantile Exchange designates petroleum with less than 0.5% sulfur as sweet.
Petroleum containing higher levels of sulfur is called sour crude oil.
Sweet crude oil contains small amounts of hydrogen sulfide and carbon dioxide. High-quality, low-sulfur crude oil is commonly used for processing into gasoline and is in high demand, particularly in industrialized nations. Light sweet crude oil is the most sought-after version of crude oil as it contains a disproportionately large fraction that is directly processed (fractionation) into gasoline (naphtha), kerosene, and high-quality diesel (gas oil).
The term sweet originates from the fact that a low level of sulfur provides the oil with a relatively sweet taste and pleasant smell, compared to sulfurous oil. Nineteenth-century prospectors would taste and smell small quantities of oil to determine its quality.
Producers
Producers of sweet crude oil include:
Asia/Pacific:
The Far East/Oceania:
Australia
Brunei
China
India
Indonesia
Malaysia
New Zealand
Vietnam
The Middle East
Iran
Iraq
Saudi Arabia
United Arab Emirates
North America:
Canada
United States
Europe:
Russia
Azerbaijan
The North Sea area:
Norway
United Kingdom (Brent Crude)
England
Scotland
Africa:
North Africa:
Algeria
Libya
Western Africa
Nigeria
Ghana
Central Africa
Angola
Democratic Republic of the Congo
Republic of the Congo
South Sudan
South America:
The Guianas:
Suriname, Guyana Basin
Andean Region
Colombia
Peru
Southern Cone
Argentina
Brazil
Pricing
The term "price of oil", as used in the U.S. media, generally means the cost per barrel (42 U.S. gallons) of West Texas Intermediate Crude, to be delivered to Cushing, Oklahoma during the upcoming month. This information is available from NYMEX or the U.S. Energy Information Administration.
See also
Petroleum Classification
Light crude oil
Sour crude oil
Mazut
List of crude oil products
Oil price increases since 2003
References
External links
EIA oil prices
NYMEX website
Petroleum | Sweet crude oil | [
"Chemistry"
] | 400 | [
"Petroleum",
"Chemical mixtures"
] |
10,976,170 | https://en.wikipedia.org/wiki/EF-4 | Elongation factor 4 (EF-4) is an elongation factor that is thought to back-translocate on the ribosome during the translation of RNA to proteins. It is found near-universally in bacteria and in eukaryotic endosymbiotic organelles including the mitochondria and the plastid. Responsible for proofreading during protein synthesis, EF-4 is a recent addition to the nomenclature of bacterial elongation factors.
Prior to its recognition as an elongation factor, EF-4 was known as leader peptidase A (LepA), as it is the first cistron on the operon carrying the bacterial leader peptidase. In eukaryotes it is traditionally called GUF1 (GTPase of Unknown Function 1). It has the preliminary EC number 3.6.5.n1.
Evolutionary background
LepA has a highly conserved sequence. LepA orthologs have been found in bacteria and almost all eukaryotes. Conservation in LepA has been shown to extend over the entire protein. More specifically, the amino acid identity of LepA among bacterial orthologs ranges from 55% to 68%.
Two forms of LepA have been observed; one form of LepA branches with mitochondrial LepA sequences, while the second form branches with cyanobacterial orthologs. These findings demonstrate that LepA is significant for bacteria, mitochondria, and plastids. LepA is absent from archaea.
Structure
The gene encoding LepA is known to be the first cistron as part of a bicistron operon. LepA is a polypeptide of 599 amino acids with a molecular weight of 67 kDa. The amino acid sequence of LepA indicates that it is a G protein, which consists of five known domains. The first four domains are strongly related to domains I, II, III, and V of primary elongation factor EF-G. However, the last domain of LepA is unique. This specific domain resides on the C-terminal end of the protein structure. This arrangement of LepA has been observed in the mitochondria of yeast cells to human cells.
Function
LepA is suspected to improve the fidelity of translation by recognizing a ribosome with mistranslocated tRNA and consequently inducing a back-translocation. By back-translocating such a ribosome, LepA renders the EF-G factor capable of a second translocation attempt. Back-translocation by LepA occurs at a rate similar to an EF-G-dependent translocation. As mentioned above, EF-G's structure is highly analogous to LepA's structure; LepA's function is thus similarly analogous to EF-G's function. However, Domain IV of EF-G has been shown through several studies to occupy the decoding sequence of the A site after the tRNAs have been translocated from the A and P sites to the P and E sites. Thus, domain IV of EF-G prevents back-movement of the tRNA. Despite the structural similarities between LepA and EF-G, LepA lacks this Domain IV. Thus LepA reduces the activation barrier between PRE and POST states in a similar way to EF-G but is, at the same time, able to catalyze a back-translocation rather than a canonical translocation.
Activity
LepA exhibits uncoupled GTPase activity. This activity is stimulated by the ribosome to the same extent as the activity of EF-G, which is known to have the strongest ribosome-dependent GTPase activity among all characterized G proteins involved in translation. Conversely, uncoupled GTPase activity occurs when the ribosome stimulation of GTP cleavage is not directly dependent on protein synthesis. In the presence of GTP, LepA works catalytically. On the other hand, in the presence of the nonhydrolysable GTP – GDPNP – the LepA action becomes stoichiometric, saturating at about one molecule per 70S ribosomes. This data demonstrates that GTP cleavage is required for dissociation of LepA from the ribosome, which is demonstrative of a typical G protein. At low concentrations of LepA (less than or equal to 3 molecules per 70S ribosome), LepA specifically recognizes incorrectly translocated ribosomes, back-translocates them, and thus allows EF-G to have a second chance to catalyze the correct translocation reaction. At high concentrations (about 1 molecule per 70S ribosome), LepA loses its specificity and back-translocates every POST ribosome. This places the translational machinery in a nonreproductive mode. This explains the toxicity of LepA when it is found in a cell in high concentrations. Hence, at low concentrations LepA significantly improves the yield and activity of synthesized proteins; however, at high concentrations LepA is toxic to cells.
Additionally, LepA has an effect on peptide bond formation. Through various studies in which functional derivatives of ribosomes were mixed with puromycin (an analog of the 3' end of an aa-tRNA) it was determined that adding LepA to a post transcriptionally modified ribosome prevents dipeptide formation as it inhibits the binding of aa-tRNA to the A site.
Experimental data
There have been various experiments elucidating the structure and function of LepA. One notable study is termed the "toeprinting experiment": this experiment helped to determine LepA's ability to back-translocate. In this case, a primer was extended via reverse transcription along mRNA which was ribosome-bound. The primers from modified mRNA strands from various ribosomes were extended with and without LepA. An assay was then conducted with both PRE and POST states, and cleavage studies revealed enhanced positional cleavage in the POST state as opposed to the PRE state. Since the POST state had been in the presence of LepA (plus GTP), it was determined that the strong signal characteristic of the POST state was the result of LepA, which then brought the signal down to the level of the PRE state. Such a study demonstrated that the ribosome, upon binding the LepA–GTP complex, assumes the PRE state configuration.
See also
Prokaryotic elongation factors
Eukaryotic elongation factors
References
Investigation into the biological function of the highly conserved GTPase LepA PhD thesis.
External links
Protein biosynthesis | EF-4 | [
"Chemistry"
] | 1,384 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
10,977,211 | https://en.wikipedia.org/wiki/Diprenorphine | Diprenorphine (brand name Revivon; former developmental code name M5050), also known as diprenorfin, is a non-selective, high-affinity, weak partial agonist of the μ- (MOR), κ- (KOR), and δ-opioid receptor (DOR) (with equal affinity) which is used in veterinary medicine as an opioid antagonist. It is used to reverse the effects of super-potent opioid analgesics such as etorphine and carfentanil that are used for tranquilizing large animals. The drug is not approved for use in humans.
Diprenorphine is the strongest opioid antagonist that is commercially available (some 100 times more potent as an antagonist than nalorphine), and is used for reversing the effects of very strong opioids for which the binding affinity is so high that naloxone does not effectively or reliably reverse the narcotic effects. These super-potent opioids, with the single exception of buprenorphine (which has an improved safety profile due to its partial agonist character), are not used in humans because the dose for a human is so small that it would be difficult to measure properly, so there is an excessive risk of overdose leading to fatal respiratory depression. However, conventional opioid derivatives are not strong enough to rapidly tranquilize large animals, like elephants and rhinos, so drugs such as etorphine and carfentanil are available for this purpose.
Diprenorphine is considered to be the specific reversing agent/antagonist for etorphine and carfentanil, and is normally used to remobilise animals once veterinary procedures have been completed. Since diprenorphine also has partial agonistic properties of its own, it should not be used on humans if they are accidentally exposed to etorphine or carfentanil. Naloxone or naltrexone is the preferred human opioid receptor antagonist.
In theory, diprenorphine could also be used as an antidote for treating overdose of certain opioid derivatives which are used in humans, particularly buprenorphine for which the binding affinity is so high that naloxone does not reliably reverse the narcotic effects. However, diprenorphine is not generally available in hospitals; instead a vial of diprenorphine is supplied with etorphine or carfentanil specifically for reversing the effects of the drug, so the use of diprenorphine for treating a buprenorphine overdose is not usually carried out in practice.
Because diprenorphine is a weak partial agonist of the opioid receptors rather than a silent antagonist, it can produce some opioid effects in the absence of other opioids at sufficient doses. Moreover, due to partial agonism of the KOR, where it appears to possess significantly greater intrinsic activity relative to the MOR, diprenorphine can produce sedation as well as, in humans, hallucinations.
References
Delta-opioid receptor antagonists
Ethers
Kappa-opioid receptor agonists
Kappa-opioid receptor antagonists
4,5-Epoxymorphinans
Mu-opioid receptor antagonists
Hydroxyarenes
Semisynthetic opioids
Tertiary alcohols | Diprenorphine | [
"Chemistry"
] | 697 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
10,978,393 | https://en.wikipedia.org/wiki/XIAP | X-linked inhibitor of apoptosis protein (XIAP), also known as inhibitor of apoptosis protein 3 (IAP3) and baculoviral IAP repeat-containing protein 4 (BIRC4), is a protein that stops apoptotic cell death. In humans, this protein (XIAP) is produced by a gene named XIAP gene located on the X chromosome.
XIAP is a member of the inhibitor of apoptosis family of proteins (IAP). IAPs were initially identified in baculoviruses, but XIAP is one of the homologous proteins found in mammals. It is so called because it was first discovered at a 273 base pair site on the X chromosome. The protein is also called human IAP-like Protein (hILP), because it is not as well conserved as the human IAPs hIAP-1 and hIAP-2. XIAP is the most potent human IAP protein currently identified.
Discovery
Neuronal apoptosis inhibitor protein (NAIP) was the first homolog to baculoviral IAPs that was identified in humans. With the sequencing data of NAIP, the gene sequence for a RING zinc-finger domain was discovered at site Xq24-25. Using PCR and cloning, three BIR domains and a RING finger were found on the protein, which became known as X-linked Inhibitor of Apoptosis Protein. The transcript size of Xiap is 9.0 kb, with an open reading frame of 1.8 kb. Xiap mRNA has been observed in all human adult and fetal tissues "except peripheral blood leukocytes". The XIAP sequences led to the discovery of other members of the IAP family.
Structure
XIAP consists of three major types of structural elements (domains). Firstly, there is the baculoviral IAP repeat (BIR) domain consisting of approximately 70 amino acids, which characterizes all IAP. Secondly, there is a UBA domain, which allows XIAP to bind to ubiquitin. Thirdly, there is a zinc-binding domain, or a "carboxy-terminal RING Finger". XIAP has been characterized with three amino-terminal BIR domains followed by one UBA domain and finally one RING domain. Between the BIR-1 and BIR-2 domains, there is a linker-BIR-2 region that is thought to contain the only element that comes into contact with the caspase molecule to form the XIAP/Caspase-7 complex. In solution the full length form of XIAP forms a homodimer of approximately 114 kDa.
Function
XIAP stops apoptotic cell death that is induced either by viral infection or by overproduction of caspases. Caspases are the enzymes primarily responsible for cell death. XIAP binds to and inhibits caspases 3, 7 and 9. The BIR2 domain of XIAP inhibits caspases 3 and 7, while BIR3 binds to and inhibits caspase 9. The RING domain has E3 ubiquitin ligase activity and enables IAPs to catalyze ubiquitination of themselves, caspase-3, or caspase-7, marking them for degradation via proteasome activity. However, mutations affecting the RING finger do not significantly affect apoptosis, indicating that the BIR domain is sufficient for the protein's function. When inhibiting caspase-3 and caspase-7 activity, the BIR2 domain of XIAP binds to the active-site substrate groove, blocking access of the normal protein substrate that would result in apoptosis.
Caspases are activated by cytochrome c, which is released into the cytosol by dysfunctioning mitochondria. Studies show that XIAP does not directly affect cytochrome c.
XIAP distinguishes itself from the other human IAPs because it is able to effectively prevent cell death due to "TNF-α, Fas, UV light, and genotoxic agents".
Inhibiting XIAP
XIAP is inhibited by DIABLO (Smac) and HTRA2 (Omi), two death-signaling proteins released into the cytoplasm by the mitochondria. Smac/DIABLO, a mitochondrial protein and negative regulator of XIAP, can enhance apoptosis by binding to XIAP and preventing it from binding to caspases. This allows normal caspase activity to proceed. The binding process of Smac/DIABLO to XIAP and caspase release requires a conserved tetrapeptide motif.
Clinical significance
Deregulation of XIAP can result in "cancer, neurodegenerative disorders, and autoimmunity". High proportions of XIAP may function as a tumor marker. In the development of lung cancer NCI-H460, the overexpression of XIAP not only inhibits caspase, but also stops the activity of cytochrome c (Apoptosis). In developing prostate cancer, XIAP is one of four IAPs overexpressed in the prostatic epithelium, indicating that a molecule that inhibits all IAPs may be necessary for effective treatment. Apoptotic regulation is an extremely important biological function, as evidenced by "the conservation of the IAPs from humans to Drosophila".
Mutations in the XIAP gene can result in a severe and rare type of inflammatory bowel disease. Defects in the XIAP gene can also result in an extremely rare condition called X-linked lymphoproliferative disease type 2.
Interactions
XIAP has been shown to interact with:
ALS2CR2,
Caspase 3,
Caspase 7,
Caspase 9,
Diablo homolog
HtrA serine peptidase 2,
MAGED1,
MAP3K2,
TAB1, and
XAF1.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Lymphoproliferative Disease, X-Linked
Cell signaling
Programmed cell death
Apoptosis
EC 6.3.2 | XIAP | [
"Chemistry",
"Biology"
] | 1,276 | [
"Senescence",
"Programmed cell death",
"Apoptosis",
"Signal transduction"
] |
11,958,485 | https://en.wikipedia.org/wiki/Britton%E2%80%93Robinson%20buffer | The Britton–Robinson buffer (BRB or PEM) is a "universal" pH buffer used for the pH range from 2 to 12. It has been used historically as an alternative to the McIlvaine buffer, which has a smaller pH range of effectiveness (from 2 to 8).
Universal buffers consist of mixtures of acids of diminishing strength (increasing pKa), so that the change in pH is approximately proportional to the amount of alkali added. It consists of a mixture of 0.04 M boric acid, 0.04 M phosphoric acid and 0.04 M acetic acid that has been titrated to the desired pH with 0.2 M sodium hydroxide. Britton and Robinson also proposed a second formulation that gave an essentially linear pH response to added alkali from pH 2.5 to pH 9.2 (and buffers to pH 12). This mixture consists of 0.0286 M citric acid, 0.0286 M monopotassium phosphate, 0.0286 M boric acid, 0.0286 M veronal and 0.0286 M hydrochloric acid titrated with 0.2 M sodium hydroxide.
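The stock-solution arithmetic behind the first formulation can be sketched as follows; the molar masses are standard values, while the helper function and the one-litre volume are illustrative assumptions rather than part of the original recipe:

```python
# A minimal sketch of the mass calculation for the classic
# Britton–Robinson mixture described above (0.04 M in each acid).

MOLAR_MASS = {  # g/mol, standard values
    "boric acid (H3BO3)": 61.83,
    "phosphoric acid (H3PO4)": 98.00,
    "acetic acid (CH3COOH)": 60.05,
}

def grams_needed(molarity: float, volume_l: float, molar_mass: float) -> float:
    """Mass of solute (g) for a solution of given molarity and volume."""
    return molarity * volume_l * molar_mass

for acid, mm in MOLAR_MASS.items():
    g = grams_needed(0.04, 1.0, mm)
    print(f"{acid}: {g:.2f} g per litre")
# The mixture is then titrated to the desired pH with 0.2 M NaOH.
```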
The buffer was invented in 1931 by the English chemist Hubert Thomas Stanley "Kevin" Britton (1892–1960) and the New Zealand chemist Robert Anthony Robinson (1904–1979).
See also
Buffer solution
Good's buffers
References
Acid–base chemistry
Buffer solutions
Chemical tests | Britton–Robinson buffer | [
"Chemistry"
] | 307 | [
"Acid–base chemistry",
"Buffer solutions",
"Chemical tests",
"Equilibrium chemistry",
"nan",
"Analytical chemistry stubs"
] |
11,963,550 | https://en.wikipedia.org/wiki/Ultra%20low%20expansion%20glass | Ultra low expansion glass (ULE) is a registered trademark of Corning Incorporated. ULE has a very low coefficient of thermal expansion and contains as components silica and less than 10% titanium dioxide. Such high resistance to thermal expansion makes it very resistant to high temperature thermal shock. ULE has been made by Corning since the 1960s, but is still very important to current applications.
Applications
There are many applications for ULE, but by far the most common is for mirrors and lenses for telescopes in both space and terrestrial settings. One of the most well known examples of the use of ULE is in the Hubble Space Telescope's mirror. Another good example of its application is in the Gemini telescope's mirror bank. This type of material is needed for this application because the mirrors on telescopes, especially very large, high-precision units, cannot bend or lose their shape even slightly. If this were to happen, the telescope would be out of focus. Some other examples and uses of ULE are:
Ultra-low expansion substrates for mirrors and other optics
Length standards
Lightweight honeycomb mirror mounts
Astronomical telescopes
Precision measurement technology
Laser cavities
A newer use for this material that is showing promise is in the semiconductor industry, again because of the purity and extremely low expansion of ULE glass.
Structure
The structure of ULE is completely amorphous; because of this it is a glass, not a ceramic. The amorphousness arises because there are no crystal phases within the structure, and thus no long-range order.
Processing
The way that ULE is made is very different from the standard way that glass is made. Instead of mixing powdered materials together into a batch, melting that batch and pouring out sheets of glass, ULE, being such a high temperature glass, has to be made in a flame hydrolysis process. In this process high purity precursors are injected into the flames, which causes them to react and form TiO2 and SiO2. The TiO2 and SiO2 then fall down and are deposited onto the growing glass.
Properties
Thermal
Ultra low expansion glass has a coefficient of thermal expansion of about 10⁻⁸/K at 5–35 °C. It has a thermal conductivity of 1.31 W/(m·°C), a thermal diffusivity of 0.0079 cm2/s, a mean specific heat of 767 J/(kg·°C), a strain point of 890 °C [1634 °F], an annealing point of 1000 °C [1832 °F], and an estimated softening point of 1490 °C [2714 °F].
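A back-of-envelope calculation, assuming an illustrative 1 m optic and a 30 °C temperature swing, shows why this coefficient matters for telescope mirrors (the aluminium figure is a typical handbook value added only for comparison):

```python
# Linear expansion: ΔL = α · L · ΔT, with the CTE quoted above (~1e-8 /K).
alpha_ule = 1e-8   # /K, coefficient of thermal expansion from the text
alpha_al  = 2.3e-5 # /K, ordinary aluminium (typical handbook value)

L  = 1.0           # m, mirror size (illustrative)
dT = 30.0          # K, temperature swing (illustrative)

for name, alpha in [("ULE", alpha_ule), ("aluminium", alpha_al)]:
    dL = alpha * L * dT
    print(f"{name}: ΔL = {dL * 1e9:.0f} nm")
# ULE: ΔL = 300 nm; aluminium: ΔL = 690000 nm
```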
Mechanical
Ultra low expansion glass has a Poisson's ratio of 0.17. Values have also been published for its ultimate tensile strength, density, specific stiffness, shear modulus, bulk modulus, and elastic modulus.
Optical
ULE has a Stress Optical Coefficient of 4.15 (nm/cm)/(kg/cm3) [0.292(nm/cm) psi], and an Abbe number of 53.1.
Other
ULE has a high resistance to weathering because of its hardness, and is also unaffected by nearly all chemical agents. ULE also shows no changes when quickly cooled from 350 °C.
References
Glass compositions
Low thermal expansion materials | Ultra low expansion glass | [
"Physics",
"Chemistry"
] | 687 | [
"Glass chemistry",
"Glass compositions",
"Low thermal expansion materials",
"Materials",
"Matter"
] |
16,204,605 | https://en.wikipedia.org/wiki/Electric%20field%20NMR | Electric field NMR (EFNMR) spectroscopy is the NMR spectroscopy where additional information on a sample being probed is obtained from the effect of a strong, externally applied, electric field on the NMR signal.
See also
NMR spectroscopy
Stark effect
References
Nuclear magnetic resonance | Electric field NMR | [
"Physics",
"Chemistry"
] | 59 | [
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
16,210,493 | https://en.wikipedia.org/wiki/Heat%20spreader | A heat spreader transfers energy as heat from a hotter source to a colder heat sink or heat exchanger. There are two thermodynamic types, passive and active. The most common sort of passive heat spreader is a plate or block of material having high thermal conductivity, such as copper, aluminum, or diamond. An active heat spreader speeds up heat transfer with expenditure of energy as work supplied by an external source.
A heat pipe uses fluids inside a sealed case. The fluids circulate either passively, by spontaneous convection, triggered when a threshold temperature difference occurs; or actively, because of an impeller driven by an external source of work. Without sealed circulation, energy can be carried by transfer of fluid matter, for example externally supplied colder air, driven by an external source of work, from a hotter body to another external body, though this is not exactly heat transfer as defined in physics.
Exemplifying increase of entropy according to the second law of thermodynamics, a passive heat spreader disperses or "spreads out" heat, so that the heat exchanger(s) may be more fully utilized. This has the potential to increase the heat capacity of the total assembly, but the additional thermal junctions limit total thermal capacity. The high conduction properties of the spreader will make it more effective to function as an air heat exchanger, as opposed to the original (presumably smaller) source. The low heat conduction of air in convection is matched by the higher surface area of the spreader, and heat is transferred more effectively.
A heat spreader is generally used when the heat source has a high heat-flux density (high heat flow per unit area) and, for whatever reason, heat cannot be conducted away effectively by the heat exchanger. For instance, this may be because it is air-cooled, giving it a lower heat transfer coefficient than if it were liquid-cooled. A sufficiently high heat exchanger transfer coefficient avoids the need for a heat spreader.
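A minimal numerical sketch of the heat-flux argument above; the power and areas are invented for illustration:

```python
# Spreading the same power over a larger area lowers the heat-flux
# density the downstream heat exchanger must handle.
power = 100.0          # W, from a small hot source (e.g. a CPU die)
area_source = 1e-4     # m^2 (1 cm^2, illustrative)
area_spreader = 25e-4  # m^2 (25 cm^2 spreader plate, illustrative)

print(f"flux at source:    {power / area_source / 1e4:.1f} W/cm^2")
print(f"flux after spread: {power / area_spreader / 1e4:.1f} W/cm^2")
# 100.0 W/cm^2 at the source drops to 4.0 W/cm^2 after spreading.
```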
The use of a heat spreader is an important part of an economically optimal design for transferring heat from high to low heat flux media. Examples include:
A copper-clad bottom on a steel or stainless steel stove-top cooking container
Air-cooling integrated circuits such as a microprocessor
Air-cooling a photovoltaic cell in a concentrated photovoltaics system
Diamond has a very high thermal conductivity. Synthetic diamond is used as submounts for high-power integrated circuits and laser diodes.
Composite materials can be used, such as the metal matrix composites (MMCs) copper–tungsten, AlSiC (silicon carbide in aluminium matrix), Dymalloy (diamond in copper-silver alloy matrix), and E-Material (beryllium oxide in beryllium matrix). Such materials are often used as substrates for chips, as their thermal expansion coefficient can be matched to ceramics and semiconductors.
Research
In May 2022, researchers at the University of Illinois at Urbana-Champaign and University of California, Berkeley devised a new solution that could cool modern electronics more efficiently than other existing strategies. Their proposed method is based on the use of heat spreaders consisting of an electrical insulating layer of poly (2-chloro-p-xylylene) (Parylene C) and a coating of copper. This solution would also require less expensive materials.
See also
Computer module
Heat pipe
Heat sink
Thermal conductivity of diamond
Thermal grease
Thermal interface material
References
Computer hardware cooling
Heat transfer
Residential heating appliances | Heat spreader | [
"Physics",
"Chemistry"
] | 729 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
16,211,513 | https://en.wikipedia.org/wiki/Stebbins%E2%80%93Whitford%20effect | The Stebbins–Whitford effect refers to the excess reddening of the spectra of elliptical galaxies as shown by measurements published by Joel Stebbins and Albert Whitford In 1948. The spectra were shifted much more to the red than the Hubble redshift could account for. Furthermore, this excess reddening increased with the distance of the galaxies.
The effect was only found for elliptical and not for spiral galaxies. One possible explanation was that younger galaxies contain more red giants than older galaxies. This kind of evolution could not exist according to the steady-state theory. Later analysis of the same data showed that the data was inadequate to establish the claimed effect. After further measurements and analysis Whitford withdrew the claim in 1956.
References
See also
Scientific phenomena named after people
Extragalactic astronomy
Astronomical spectroscopy
Physical cosmology
Elliptical galaxies | Stebbins–Whitford effect | [
"Physics",
"Chemistry",
"Astronomy"
] | 172 | [
"Astronomical sub-disciplines",
"Spectrum (physical sciences)",
"Theoretical physics",
"Astrophysics",
"Astronomical spectroscopy",
"Extragalactic astronomy",
"Spectroscopy",
"Physical cosmology"
] |
16,215,763 | https://en.wikipedia.org/wiki/Chromium%28IV%29%20chloride | Chromium(IV) chloride () is an unstable chromium compound. It is generated by combining chromium(III) chloride and chlorine gas at elevated temperatures, but reverts to those substances at room temperature.
References
Chromium–halogen compounds
Chlorides
Metal halides
Chromium(IV) compounds | Chromium(IV) chloride | [
"Chemistry"
] | 71 | [
"Chlorides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts",
"Metal halides"
] |
7,319,263 | https://en.wikipedia.org/wiki/Entropy%20%28energy%20dispersal%29 | In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'.
In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature.
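The classical (Clausius) relation underlying this statement can be written as:

```latex
% Entropy change for a reversible transfer of energy q_rev
% at absolute temperature T:
\Delta S = \frac{q_{\mathrm{rev}}}{T}
```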
Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology.
Comparisons with traditional approach
The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.
Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state.
The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships.
Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical.
Description
Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. All explanations of where and how energy is dispersing or spreading have been recast in terms of energy dispersal, so as to emphasise the underlying qualitative meaning.
In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room. Entropy is then depicted as a sophisticated kind of "before and after" yardstick — measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules.
The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities.
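The microstate count referred to here enters through Boltzmann's entropy formula:

```latex
% Boltzmann's entropy formula (k_B is Boltzmann's constant,
% W the number of accessible microstates):
S = k_{\mathrm{B}} \ln W
% An increase in entropy corresponds to a larger W: more possible
% arrangements of the system's total energy at any one instant.
```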
Continuous movement and molecular collisions visualised as being like bouncing balls blown by air as used in a lottery can then lead on to showing the possibilities of many Boltzmann distributions and continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates. In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flows from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates.
This approach provides a good basis for understanding the conventional approach, except in very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. Thus in situations such as the entropy of mixing when the two or more different substances being mixed are at the same temperature and pressure so there will be no net exchange of heat or work, the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than they would be in the pure state, when in the pure state they were colliding only with identical adjacent molecules, leading to an increase in its number of accessible microstates.
Current adoption
Variants of the energy dispersal approach have been adopted in number of undergraduate chemistry texts, mainly in the United States. One respected text states:
The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy. — Atkins & de Paula (2006)
History
The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical." He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state.
Using the word 'spread', an early advocate of the energy dispersal concept was Edward Armand Guggenheim. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction.
In 1984, the Oxford physical chemist Peter Atkins, in a book The Second Law, written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy."
In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from that of being concentrated to that of being spread out. By virtue of the First law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott.
In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In an article published in the 2002 Journal of Chemical Education, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase as the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes – at a specific temperature.
See also
Introduction to entropy
References
Further reading
Carson, E. M., and Watson, J. R., (Department of Educational and Professional Studies, King's College, London), 2002, "Undergraduate students' understandings of entropy and Gibbs Free energy," University Chemistry Education - 2002 Papers, Royal Society of Chemistry.
Lambert, Frank L. (2002). "Disorder - A Cracked Crutch For Supporting Entropy Discussions," Journal of Chemical Education 79: 187-92.
Texts using the energy dispersal approach
Atkins, P. W., Physical Chemistry for the Life Sciences. Oxford University Press; W. H. Freeman.
Benjamin Gal-Or, "Cosmology, Physics and Philosophy", Springer-Verlag, New York, 1981, 1983, 1987.
Bell, J., et al., 2005. Chemistry: A General Chemistry Project of the American Chemical Society, 1st ed. W. H. Freeman, 820pp.
Brady, J.E., and F. Senese, 2004. Chemistry, Matter and Its Changes, 4th ed. John Wiley, 1256pp.
Brown, T. L., H. E. LeMay, and B. E. Bursten, 2006. Chemistry: The Central Science, 10th ed. Prentice Hall, 1248pp.
Ebbing, D.D., and S. D. Gammon, 2005. General Chemistry, 8th ed. Houghton-Mifflin, 1200pp.
Ebbing, Gammon, and Ragsdale. Essentials of General Chemistry, 2nd ed.
Hill, Petrucci, McCreary and Perry. General Chemistry, 4th ed.
Kotz, Treichel, and Weaver. Chemistry and Chemical Reactivity, 6th ed.
Moog, Spencer, and Farrell. Thermodynamics, A Guided Inquiry.
Moore, J. W., C. L. Stanistski, P. C. Jurs, 2005. Chemistry, The Molecular Science, 2nd ed. Thompson Learning, 1248pp.
Olmsted and Williams, Chemistry, 4th ed.
Petrucci, Harwood, and Herring. General Chemistry, 9th ed.
Silberberg, M.S., 2006. Chemistry, The Molecular Nature of Matter and Change, 4th ed. McGraw-Hill, 1183pp.
Suchocki, J., 2004. Conceptual Chemistry, 2nd ed. Benjamin Cummings, 706pp.
External links
welcome to entropy site A large website by Frank L. Lambert with links to work on the energy dispersal approach to entropy.
The Second Law of Thermodynamics (6)
Thermodynamic entropy | Entropy (energy dispersal) | [
"Physics"
] | 2,529 | [
"Statistical mechanics",
"Entropy",
"Physical quantities",
"Thermodynamic entropy"
] |
7,320,111 | https://en.wikipedia.org/wiki/RFQ%20beam%20cooler | A radio-frequency quadrupole (RFQ) beam cooler is a device for particle beam cooling, especially suited for ion beams. It lowers the temperature of a particle beam by reducing its energy dispersion and emittance, effectively increasing its brightness (brilliance). The prevalent mechanism for cooling in this case is buffer-gas cooling, whereby the beam loses energy from collisions with a light, neutral and inert gas (typically helium). The cooling must take place within a confining field in order to counteract the thermal diffusion that results from the ion-atom collisions.
The quadrupole mass analyzer (a radio frequency quadrupole used as a mass filter) was invented by Wolfgang Paul in the late 1950s to early 60s at the University of Bonn, Germany. Paul shared the 1989 Nobel Prize in Physics for his work. Samples for mass analysis are ionized, for example by laser (matrix-assisted laser desorption/ionization) or discharge (electrospray or inductively coupled plasma) and the resulting beam is sent through the RFQ and "filtered" by scanning the operating parameters (chiefly the RF amplitude). This gives a mass spectrum, or fingerprint, of the sample. Residual gas analyzers use this principle as well.
Applications of ion cooling to nuclear physics
Despite its long history, high-sensitivity high-accuracy mass measurements of atomic nuclei continue to be very important areas of research for many branches of physics. Not only do these measurements provide a better understanding of nuclear structures and nuclear forces but they also offer insight into how matter behaves in some of nature's harshest environments. At facilities such as ISOLDE at CERN and TRIUMF in Vancouver, for instance, measurement techniques are now being extended to short-lived radionuclei that only occur naturally in the interior of exploding stars. Their short half-lives and very low production rates at even the most powerful facilities require the very highest in sensitivity of such measurements.
Penning traps, the central element in modern high-accuracy high-sensitivity mass measurement installations, enable measurements of accuracies approaching 1 part in 10¹¹ on single ions. However, to achieve this Penning traps must have the ion to be measured delivered to it very precisely and with certainty that it is indeed the desired ion. This imposes severe requirements on the apparatus that must take the atomic nucleus out of the target in which it has been created, sort it from the myriad of other ions that are emitted from the target and then direct it so that it can be captured in the measurement trap.
Cooling these ion beams, particularly radioactive ion beams, has been shown to drastically improve the accuracy and sensitivity of mass measurements by reducing the phase space of the ion collections in question. Using a light neutral background gas, typically helium, charged particles originating from on-line mass separators undergo a number of soft collisions with the background gas molecules resulting in fractional losses of the ions' kinetic energy and a reduction of the ion ensemble's overall energy. In order for this to be effective, however, the ions need to be contained using transverse radiofrequency quadrupole (RFQ) electric fields during the collisional cooling process (also known as buffer gas cooling). These RFQ coolers operate on the same principles as quadrupole ion traps and have been shown to be particularly well suited for buffer gas cooling given their capacity for total confinement of ions having a large dispersion of velocities, corresponding to kinetic energies up to tens of electron volts. A number of the RFQ coolers have already been installed at research facilities around the world and a list of their characteristics can be found below.
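A toy model of this buffer-gas cooling, in which each soft collision removes a small fixed fraction of the ion's kinetic energy, illustrates the geometric approach to the gas temperature; the initial energy, loss fraction, and thermal cutoff below are all illustrative assumptions, not measured values:

```python
# Toy model (not a production simulation) of buffer-gas cooling:
# each soft ion–atom collision removes a small fraction of the ion's
# kinetic energy, so the excess energy decays geometrically toward
# the thermal energy of the helium buffer gas.

E0 = 10.0             # eV, initial ion kinetic energy (assumed)
E_thermal = 0.039     # eV, roughly (3/2)kT for room-temperature helium
loss_fraction = 0.02  # fractional energy loss per collision (assumed)

E = E0
collisions = 0
while E > 2 * E_thermal:
    # each collision relaxes the excess energy by the loss fraction
    E = E_thermal + (E - E_thermal) * (1 - loss_fraction)
    collisions += 1

print(f"~{collisions} collisions to approach thermal energy")
```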
List of facilities containing RFQ coolers
See also
Quadrupole mass analyzer
References
Bibliography
External links
LEBIT Project NSCL/MSU
ISOLTRAP Experimental Setup
TITAN: TRIUMF's Ion Trap for Atomic and Nuclear science
TRIMP – Trapped Radioactive Isotopes: Micro-laboratories for fundamental Physics
The SHIPTRAP Experiment
The ISCOOL project
Measuring instruments
Mass spectrometry
Accelerator physics | RFQ beam cooler | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 829 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Experimental physics",
"Mass spectrometry",
"Accelerator physics",
"Matter"
] |
7,320,365 | https://en.wikipedia.org/wiki/Sieve%20analysis | A sieve analysis (or gradation test) is a practice or procedure used in geology, civil engineering, and chemical engineering to assess the particle size distribution (also called gradation) of a granular material by allowing the material to pass through a series of sieves of progressively smaller mesh size and weighing the amount of material that is stopped by each sieve as a fraction of the whole mass.
The size distribution is often of critical importance to the way the material performs in use. A sieve analysis can be performed on any type of non-organic or organic granular materials including sand, crushed rock, clay, granite, feldspar, coal, soil, a wide range of manufactured powder, grain and seeds, down to a minimum size depending on the exact method. Being such a simple technique of particle sizing, it is probably the most common.
Procedure
A gradation test is performed on a sample of aggregate in a laboratory. A typical sieve analysis uses a column of sieves with wire mesh screens of graded mesh size.
A representative weighed sample is poured into the top sieve which has the largest screen openings. Each lower sieve in the column has smaller openings than the one above. At the base is a pan, called the receiver.
The column is typically placed in a mechanical shaker, which shakes the column, usually for a set period, to expose all of the material to the screen openings so that particles small enough to fit through the holes can fall through to the next layer. After the shaking is complete, the material on each sieve is weighed. The mass retained on each sieve is then divided by the total mass to give the percentage retained on each sieve.
The size of the average particle on each sieve is then analysed to get a cut-off point or specific size range, which is then captured on a screen.
The results of this test are used to describe the properties of the aggregate and to see if it is appropriate for various civil engineering purposes such as selecting the appropriate aggregate for concrete mixes and asphalt mixes as well as sizing of water production well screens.
The results of this test are provided in graphical form to identify the type of gradation of the aggregate. The complete procedure for this test is outlined in the American Society for Testing and Materials (ASTM) C 136 and the American Association of State Highway and Transportation Officials (AASHTO) T 27
A pan of suitable size is placed underneath the nest of sieves to collect the aggregate that passes through the smallest sieve. The entire nest is then agitated, and the material whose diameter is smaller than the mesh opening passes through the sieves. After the aggregate reaches the pan, the amount of material retained in each sieve is then weighed.
Preparation
In order to perform the test, a sufficient sample of the aggregate must be obtained from the source. To prepare the sample, the aggregate should be mixed thoroughly and be reduced to a suitable size for testing. The total mass of the sample is also required.
Results
The results are presented in a graph of percent passing versus the sieve size. On the graph the sieve size scale is logarithmic. To find the percent of aggregate passing through each sieve, first find the percent retained in each sieve. To do so, the following equation is used,
%Retained = (WSieve / WTotal) × 100%
where WSieve is the mass of aggregate in the sieve and WTotal is the total mass of the aggregate. The next step is to find the cumulative percent of aggregate retained in each sieve. To do so, add up the total amount of aggregate that is retained in each sieve and the amount in the previous sieves. The cumulative percent passing of the aggregate is found by subtracting the percent retained from 100%.
%Cumulative Passing = 100% - %Cumulative Retained.
The values are then plotted on a graph with cumulative percent passing on the y axis and logarithmic sieve size on the x axis.
There are two versions of the %Passing equation: the .45 power formula, which is plotted on a .45 power gradation chart, and the simpler %Passing formula, which is plotted on a semi-log gradation chart.
.45 power percent passing formula
%Passing = Pi = (SieveLargest / Aggregatemax_size)^0.45 × 100%
Where:
SieveLargest - Largest diameter sieve used in (mm).
Aggregatemax_size - Largest piece of aggregate in the sample in (mm).
Percent passing formula
%Passing = (WBelow / WTotal) × 100%
Where:
WBelow - The total mass of the aggregate within the sieves below the current sieve, not including the current sieve's aggregate.
WTotal - The total mass of all of the aggregate in the sample.
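Putting the formulas above together, a minimal sketch of the bookkeeping might look like this; the sieve sizes and retained masses are invented sample data:

```python
# Grams retained on each sieve, coarsest first (hypothetical sample).
retained = {
    "4.75 mm": 50.0,
    "2.36 mm": 120.0,
    "1.18 mm": 150.0,
    "0.600 mm": 100.0,
    "0.300 mm": 60.0,
    "pan": 20.0,
}

total = sum(retained.values())
cumulative = 0.0
for sieve, mass in retained.items():
    pct_retained = mass / total * 100  # %Retained
    cumulative += pct_retained         # %Cumulative Retained
    pct_passing = 100 - cumulative     # %Cumulative Passing
    print(f"{sieve:>9}: retained {pct_retained:5.1f}%, passing {pct_passing:5.1f}%")
```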
Methods
There are different methods for carrying out sieve analyses, depending on the material to be measured.
Throw-action
Here a throwing motion acts on the sample. The vertical throwing motion is overlaid with a slight circular motion which results in distribution of the sample amount over the whole sieving surface. The particles are accelerated in the vertical direction (are thrown upwards). In the air they carry out free rotations and interact with the openings in the mesh of the sieve when they fall back. If the particles are smaller than the openings, they pass through the sieve. If they are larger, they are thrown upwards again. The rotating motion while suspended increases the probability that the particles present a different orientation to the mesh when they fall back again, and thus might eventually pass through the mesh.
Modern sieve shakers work with an electro-magnetic drive which moves a spring-mass system and transfers the resulting oscillation to the sieve stack. Amplitude and sieving time are set digitally and are continuously observed by an integrated control-unit. Therefore, sieving results are reproducible and precise (an important precondition for a significant analysis). Adjustment of parameters like amplitude and sieving time serves to optimize the sieving for different types of material. This method is the most common in the laboratory sector.
Horizontal
In a horizontal sieve shaker the sieve stack moves in horizontal circles in a plane. Horizontal sieve shakers are preferably used for needle-shaped, flat, long or fibrous samples, as their horizontal orientation means that only a few disoriented particles enter the mesh and the sieve is not blocked so quickly. The large sieving area enables the sieving of large amounts of sample, for example as encountered in the particle-size analysis of construction materials and aggregates.
Tapping
A horizontal circular motion overlies a vertical motion which is created by a tapping impulse. These motional processes are characteristic of hand sieving and produce a higher degree of sieving for denser particles (e.g. abrasives) than throw-action sieve shakers.
Wet
Most sieve analyses are carried out dry. But there are some applications which can only be carried out by wet sieving. This is the case when the sample which has to be analysed is e.g. a suspension which must not be dried; or when the sample is a very fine powder which tends to agglomerate (mostly < 45 μm) – in a dry sieving process this tendency would lead to a clogging of the sieve meshes and this would make a further sieving process impossible. A wet sieving process is set up like a dry process: the sieve stack is clamped onto the sieve shaker and the sample is placed on the top sieve. Above the top sieve a water-spray nozzle is placed which supports the sieving process additionally to the sieving motion. The rinsing is carried out until the liquid which is discharged through the receiver is clear. Sample residues on the sieves have to be dried and weighed. When it comes to wet sieving it is very important not to change the sample in its volume (no swelling, dissolving or reaction with the liquid).
Air Circular Jet
Air jet sieving machines are ideally suited for very fine powders which tend to agglomerate and cannot be separated by vibrational sieving.
The reason for the effectiveness of this sieving method is based on two components:
A rotating slotted nozzle inside the sieving chamber and a powerful industrial vacuum cleaner which is connected to the chamber. The vacuum cleaner generates a vacuum inside the sieving chamber and sucks in fresh air through the slotted nozzle. When passing the narrow slit of the nozzle the air stream is accelerated and blown against the sieve mesh, dispersing the particles. Above the mesh, the air jet is distributed over the complete sieve surface and is sucked in with low speed through the sieve mesh. Thus the finer particles are transported through the mesh openings into the vacuum cleaner.
Types of gradation
Dense gradation
A dense gradation refers to a sample that is approximately of equal amounts of various sizes of aggregate. By having a dense gradation, most of the air voids between the material are filled with particles. A dense gradation will result in an even curve on the gradation graph.
Narrow gradation
Also known as uniform gradation, a narrow gradation is a sample that has aggregate of approximately the same size. The curve on the gradation graph is very steep, and occupies a small range of the aggregate.
Gap gradation
A gap gradation refers to a sample with very little aggregate in the medium size range. This results in only coarse and fine aggregate. The curve is horizontal in the medium size range on the gradation graph.
Open gradation
An open gradation refers to an aggregate sample with very few fine aggregate particles. This results in many air voids, because there are no fine particles to fill them. On the gradation graph, it appears as a curve that is horizontal in the small size range.
Rich gradation
A rich gradation refers to a sample of aggregate with a high proportion of particles of small sizes.
Types of sieves
Woven wire mesh sieves
Woven wire mesh sieves are manufactured according to the technical requirements of ISO 3310-1. These sieves usually have nominal apertures ranging from 20 micrometers to 3.55 millimeters, with diameters ranging from 100 to 450 millimeters.
Perforated plate sieves
Perforated plate sieves conform to ISO 3310-2 and can have round or square nominal apertures ranging from 1 millimeter to 125 millimeters. The diameters of the sieves range from 200 to 450 millimeters.
American standard sieves
American standard sieves also known as ASTM sieves conform to ASTM E11 standard. The nominal aperture of these sieves range from 20 micrometers to 200 millimeters, however these sieves have only and diameter sizes.
Limitations of sieve analysis
Sieve analysis has, in general, been used for decades to monitor material quality based on particle size. For coarse material, sizes that range down to #100 mesh (150 μm), a sieve analysis and particle size distribution is accurate and consistent.
However, for material that is finer than 100 mesh, dry sieving can be significantly less accurate. This is because the mechanical energy required to make particles pass through an opening and the surface attraction effects between the particles themselves and between particles and the screen increase as the particle size decreases. Wet sieve analysis can be utilized where the material analyzed is not affected by the liquid (except to be dispersed by it). Suspending the particles in a suitable liquid transports fine material through the sieve much more efficiently than shaking the dry material.
Sieve analysis assumes that all particle will be round (spherical) or nearly so and will pass through the square openings when the particle diameter is less than the size of the square opening in the screen. For elongated and flat particles a sieve analysis will not yield reliable mass-based results, as the particle size reported will assume that the particles are spherical, where in fact an elongated particle might pass through the screen end-on, but would be prevented from doing so if it presented itself side-on.
Properties
Gradation affects many properties of an aggregate, including bulk density, physical stability and permeability. With careful selection of the gradation, it is possible to achieve high bulk density, high physical stability, and low permeability. This is important because in pavement design, a workable, stable mix with resistance to water is important. With an open gradation, the bulk density is relatively low, due to the lack of fine particles, the physical stability is moderate, and the permeability is quite high. With a rich gradation, the bulk density will also be low, the physical stability is low, and the permeability is also low. The gradation can be affected to achieve the desired properties for the particular engineering application.
Engineering applications
Gradation is usually specified for each engineering application it is used for. For example, foundations might only call for coarse aggregates, and therefore an open gradation is needed.
Sieve analysis determines the particle size distribution of a given soil sample and hence helps in easy identification of a soil's mechanical properties. These mechanical properties determine whether a given soil can support the proposed engineering structure. It also helps determine what modifications can be applied to the soil and the best way to achieve maximum soil strength.
See also
Soil gradation
Automated sieving using photoanalysis
Optical granulometry
References
External links
List of ASTM test methods for sieve analysis of various materials
ASTM C136 / C136M - 14 Standard Test Method for Sieve Analysis of Fine and Coarse Aggregates
ASTM B214 - 16 Standard Test Method for Sieve Analysis of Metal Powders
Chemical engineering
Soil Classification System, Unified
Granulometric analyses
Particle technology | Sieve analysis | [
"Chemistry",
"Engineering"
] | 2,830 | [
"Particle technology",
"Chemical engineering",
"nan",
"Environmental engineering"
] |
13,507,050 | https://en.wikipedia.org/wiki/Dutch%20Design%20Week | Dutch Design Week (also known as DDW) is an event about Dutch design, hosted in Eindhoven, Netherlands. The event takes place around the last week of October and is a nine-day event with exhibitions, studio visits, workshops, seminars, and parties across the city.
The event hosts companies including Philips, Philips Design and DAF, as well as the Design Academy Eindhoven and the Eindhoven University of Technology.
The initiative began in 2002 as a non-commercial fair and by 2018 had 355,000 visitors.
The DDW consists of around 120 venues. The main venues during the event include the Klokgebouw (Strijp-S), the Design Academy Eindhoven and the Faculty of Industrial Design at the Eindhoven University of Technology, where successful and well-visited expositions are organized.
While the main goal remains to create a non-commercial event, conflicts of interest and rapid growth have contributed to a more commercial approach since 2007.
Pop venue Effenaar and classical music venue Muziekgebouw Frits Philips both organize the musical program DDW Music around the festival with live performances as well as exhibitions related to experimental musical instruments, sound art and sound installations.
Dutch Design Week 2020 was an online-only event. A digital festival, initially planned to work alongside a programme of studio tours and socially distanced activities, became the centrepiece of the festival as all physical events had been cancelled due to a rise in coronavirus cases in the city.
Theme
Since the 2012 edition Dutch Design Week picks a yearly theme overarching the entire week.
Ambassadors
Since 2009 Dutch Design Week picks multiple ambassadors from the field who are advocates of Dutch Design.
See also
Dutch Design
References
External links
Dutch Design Week
Dutch design
Culture in Eindhoven
Industrial design
Events in Eindhoven | Dutch Design Week | [
"Engineering"
] | 373 | [
"Industrial design",
"Design engineering",
"Design"
] |
13,507,300 | https://en.wikipedia.org/wiki/Lambda%20Piscium | Lambda Piscium, Latinized from λ Piscium, is a solitary, white-hued star in the zodiac constellation of Pisces. With an apparent visual magnitude of 4.49, it is visible to the naked eye, forming the southeast corner of the "Circlet" asterism in Pisces. Based upon a measured annual parallax shift of 30.59 mas as seen from Earth, it is located 107 light years distant from the Sun. Lambda Piscium is a member of the Ursa Major Stream, lying among the outer parts, or corona, of this moving group of stars that roughly follow a common heading through space.
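As a quick check on the figures quoted above, the standard parallax-distance relation gives:

$$d = \frac{1000}{\pi[\text{mas}]}\ \text{pc} = \frac{1000}{30.59}\ \text{pc} \approx 32.7\ \text{pc} \approx 107\ \text{ly}.$$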
This well-studied star has a stellar classification A7 V, indicating it is an A-type main-sequence star that is generating energy through hydrogen fusion at its core. It has 1.8 times the mass of the Sun and double the Sun's radius. The star is radiating 13.3 times the Sun's luminosity from its photosphere at an effective temperature of 7,734 K. Lambda Piscium is around 583 million years old and is spinning with a projected rotational velocity of 70 km/s.
Naming
In Chinese, 雲雨 (Yún Yǔ), meaning Cloud and Rain, refers to an asterism consisting of λ Piscium, κ Piscium, 12 Piscium and 21 Piscium. Consequently, the Chinese name for λ Piscium itself is 雲雨一 (Yún Yǔ yī, the First Star of Cloud and Rain).
References
A-type main-sequence stars
Ursa Major moving group
Pisces (constellation)
Piscium, Lambda
BD+00 5037
Piscium, 018
222603
116928
8984 | Lambda Piscium | [
"Astronomy"
] | 348 | [
"Pisces (constellation)",
"Constellations"
] |
13,513,932 | https://en.wikipedia.org/wiki/Omega%20Piscium | Omega Piscium (Omega Psc, ω Piscium, ω Psc) is a star approximately 106 light years away from Earth, in the constellation Pisces. It has a spectral type of F4IV, meaning it is a subgiant/dwarf star, and it has a temperature of 6,600 kelvins. It may or may not be a close binary star system: variations in its spectrum were once interpreted as giving it an orbital period of 2.16 days, but this claim was later shown to be spurious. If it is a single star, it is 20 times brighter than the Sun and has 1.8 times its mass.
In classic and modern renderings of the constellation, it marks the start of the tail of the western (fatter) fish, east of the Circlet of Pisces, the near-circle that forms all but the tail (i.e. the head and body) of that fish in the constellation of two fishes.
Right ascension
Considering stars with Flamsteed numbers, Greek letters, and proper names, Omega Piscium at J2000 (namely in the year 2000) was the named star with the highest right ascension (akin to terrestrial longitude). Due to the 26,000-year movement of the Earth's axis tracing an imperfect circle (axial precession), its right ascension has since increased to just beyond 0 hours, which it reached in J2013.
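For scale, simple arithmetic on the 26,000-year period quoted above gives the mean drift rate:

$$\frac{360^\circ \times 3600''}{26{,}000\ \mathrm{yr}} \approx 50''\ \mathrm{yr}^{-1},$$

i.e. about 1° every 72 years, or roughly $24^\mathrm{h}/26{,}000\ \mathrm{yr} \approx 3.3\ \mathrm{s}$ of right ascension per year near the equinox.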
At the cusp of sunrise on the March equinox in the present era, the Circlet, the westernmost part of the asterism, appears just above the rising Sun; the easternmost parts can be most easily seen after sunset, just above the Sun on a maximal horizon, such as the sea. A month later, the progress of the Earth around the plane of the ecliptic (its orbit) by a mean 2 hours of right ascension (30° of orbit) means that the Sun rises and sets in an outer part of Aries bordering Cetus.
Naming
ω Piscium is the star's Bayer designation.
In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Dzaneb al Samkat, which was translated into Latin as Cauda Piscis, meaning the tail of fish.
In Chinese, 霹靂 (Pī Lì), meaning Thunderbolt, refers to an asterism consisting of ω Piscium, β Piscium, γ Piscium, θ Piscium and ι Piscium. Consequently, the Chinese name for ω Piscium itself is 霹靂一 (Pī Lì yī, the First Star of Thunderbolt).
References
F-type subgiants
Pisces (constellation)
Piscium, Omega
Durchmusterung objects
Piscium, 028
224617
118268
9072 | Omega Piscium | [
"Astronomy"
] | 578 | [
"Pisces (constellation)",
"Constellations"
] |
13,514,617 | https://en.wikipedia.org/wiki/Xylose%20metabolism | D-Xylose is a five-carbon aldose (pentose, monosaccharide) that can be catabolized or metabolized into useful products by a variety of organisms.
There are at least four different pathways for the catabolism of D-xylose: an oxido-reductase pathway is present in eukaryotic microorganisms, prokaryotes typically use an isomerase pathway, and two oxidative pathways, the Weimberg and Dahms pathways, are also present in prokaryotic microorganisms.
Pathways
The oxido-reductase pathway
This pathway is also called the "Xylose Reductase-Xylitol Dehydrogenase" or XR-XDH pathway. Xylose reductase (XR) and xylitol dehydrogenase (XDH) are the first two enzymes in this pathway. XR reduces D-xylose to xylitol using NADH or NADPH. Xylitol is then oxidized to D-xylulose by XDH, using the cofactor NAD+. In the last step D-xylulose is phosphorylated by an ATP-utilising kinase, XK, resulting in D-xylulose-5-phosphate, an intermediate of the pentose phosphate pathway.
The isomerase pathway
In this pathway the enzyme xylose isomerase converts D-xylose directly into D-xylulose. D-xylulose is then phosphorylated to D-xylulose-5-phosphate as in the oxido-reductase pathway. At equilibrium, the isomerase reaction results in a mixture of 83% D-xylose and 17% D-xylulose because the conversion of xylose to xylulose is energetically unfavorable.
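As an illustrative check (assuming standard conditions at 298 K, which the source does not specify), the quoted 83:17 equilibrium corresponds to a modestly positive free-energy change for the isomerization:

$$\Delta G^{\circ} = -RT \ln K = -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K}) \ln\!\frac{0.17}{0.83} \approx +3.9\ \mathrm{kJ\,mol^{-1}}.$$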
Weimberg pathway
The Weimberg pathway is an oxidative pathway in which D-xylose is oxidized to D-xylono-lactone by a D-xylose dehydrogenase, followed by a lactonase that hydrolyzes the lactone to D-xylonic acid. A xylonate dehydratase splits off a water molecule, resulting in 2-keto-3-deoxy-xylonate. 2-Keto-3-deoxy-D-xylonate dehydratase then forms α-ketoglutarate semialdehyde, which is subsequently oxidised by α-ketoglutarate semialdehyde dehydrogenase to yield 2-ketoglutarate, a key intermediate of the citric acid cycle.
Dahms pathway
The Dahms pathway starts like the Weimberg pathway, but the 2-keto-3-deoxy-xylonate is split by an aldolase into pyruvate and glycolaldehyde.
Biotechnological applications
It is desirable to ferment D-xylose to ethanol. This can be accomplished either by native xylose-fermenting yeasts such as Scheffersomyces stipitis (formerly Pichia stipitis) or by metabolically engineered strains of Saccharomyces cerevisiae. Pichia stipitis is not as ethanol-tolerant as the traditional ethanol-producing yeast Saccharomyces cerevisiae; S. cerevisiae, on the other hand, cannot ferment D-xylose to ethanol. In attempts to generate S. cerevisiae strains able to ferment D-xylose, the XYL1 and XYL2 genes of P. stipitis, coding for D-xylose reductase (XR) and xylitol dehydrogenase (XDH) respectively, were introduced into S. cerevisiae by means of genetic engineering. XR catalyzes the formation of xylitol from D-xylose, and XDH the formation of D-xylulose from xylitol. Saccharomyces cerevisiae can naturally ferment D-xylulose through the pentose phosphate pathway.
In another approach, bacterial xylose isomerases have been introduced into S. cerevisiae. This enzyme catalyzes the direct formation of D-xylulose from D-xylose. Many attempts at expressing bacterial isomerases were not successful due to misfolding or other problems, but a xylose isomerase from the anaerobic fungus Piromyces sp. has proven effective. One advantage claimed for S. cerevisiae engineered with the xylose isomerase is that the resulting cells can grow anaerobically on xylose after evolutionary adaptation.
Studies on flux through the oxidative pentose phosphate pathway during D-xylose metabolism have revealed that limiting the rate of this step may be beneficial to the efficiency of fermentation to ethanol. Modifications to this flux that may improve ethanol production include deleting the GND1 gene, or the ZWF1 gene. Since the pentose phosphate pathway produces additional NADPH during metabolism, limiting this step will help to correct the already evident imbalance between NAD(P)H and NAD+ cofactors and reduce xylitol byproduct formation.
Another experiment comparing the two D-xylose metabolizing pathways revealed that the XI pathway was best able to metabolize D-xylose to produce the greatest ethanol yield, while the XR-XDH pathway reached a much faster rate of ethanol production.
Overexpression of the four genes encoding non-oxidative pentose phosphate pathway enzymes Transaldolase, Transketolase, Ribulose-5-phosphate epimerase and Ribose-5-phosphate ketol-isomerase led to both higher D-xylulose and D-xylose fermentation rate.
The aim of this genetic recombination in the laboratory is to develop a yeast strain that efficiently produces ethanol. However, the effectiveness of D-xylose-metabolizing laboratory strains does not always reflect their metabolic abilities on raw xylose products in nature. Since D-xylose is mostly isolated from agricultural residues such as wood stocks, native or genetically altered yeasts need to be effective at metabolizing these less pure natural sources.
Varying expression of the XR and XDH enzyme levels have been tested in the laboratory in the attempt to optimize the efficiency of the D-xylose metabolism pathway.
References
Carbohydrate metabolism
Monosaccharides
Metabolism | Xylose metabolism | [
"Chemistry",
"Biology"
] | 1,407 | [
"Carbohydrates",
"Carbohydrate metabolism",
"Monosaccharides",
"Carbohydrate chemistry",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
13,515,967 | https://en.wikipedia.org/wiki/Relaxosome | The relaxosome is the complex of proteins that facilitates plasmid transfer during bacterial conjugation. The proteins are encoded by the tra operon on a fertility plasmid in the region near the origin of transfer, oriT. The most important of these proteins is relaxase, which is responsible for beginning the conjugation process by cutting at the nic site via transesterification. This nicking results in a DNA-Protein complex with the relaxosome bound to a single strand of the plasmid DNA and an exposed 3' hydroxyl group. Relaxase also unwinds the plasmid being conjugated with its helicase properties. The relaxosome interacts with integration host factors within the oriT.
Other genes that code for relaxosome components include TraH, which stabilizes the relaxosome's structural formation; TraI, which encodes the relaxase protein; TraJ, which recruits the complex to the oriT site; TraK, which increases the 'nicked' state of the target plasmid; and TraY, which imparts single-stranded DNA character on the oriT site. TraM plays a particularly important role in relaxase interaction by stimulating 'relaxed' DNA formation.
References
Molecular biology | Relaxosome | [
"Chemistry",
"Biology"
] | 265 | [
"Biochemistry",
"Bacteria stubs",
"Bacteria",
"Molecular biology"
] |
14,653,232 | https://en.wikipedia.org/wiki/Fibronectin%20type%20II%20domain | Fibronectin type II domain is a collagen-binding protein domain. Fibronectin is a multi-domain glycoprotein, found in a soluble form in plasma, and in an insoluble form in loose connective tissue and basement membranes, that binds cell surfaces and various compounds including collagen, fibrin, heparin, DNA, and actin. Fibronectins are involved in a number of important functions e.g., wound healing; cell adhesion; blood coagulation; cell differentiation and migration; maintenance of the cellular cytoskeleton; and tumour metastasis. The major part of the sequence of fibronectin consists of the repetition of three types of domains, which are called type I, II, and III.
Type II domain is approximately sixty amino acids long, contains four conserved cysteines involved in disulfide bonds, and is part of the collagen-binding region of fibronectin. Type II domains occur twice in fibronectin. Type II domains have also been found in a range of proteins including blood coagulation factor XII; bovine seminal plasma proteins PDC-109 (BSP-A1/A2) and BSP-A3; cation-independent mannose-6-phosphate receptor; mannose receptor of macrophages; 180 Kd secretory phospholipase A2 receptor; DEC-205 receptor; 72 Kd and 92 Kd type IV collagenase; and hepatocyte growth factor activator.
Fibronectin type II domain and Lipid bilayer interaction
The fibronectin type II domain is part of the extracellular portion of EphA2 receptor proteins. The FN2 domain on EphA2 receptors bears positively charged residues, namely K441 and R443, which attract and almost exclusively bind anionic lipids such as the anionic membrane lipid phosphatidylglycerol. Together, K441 and R443 make up a membrane-binding motif that allows EphA2 receptors to attach to the cell membrane.
Human proteins containing this domain
BSPH1; ELSPBP1; F12; FN1; HGFAC; IGF2R; LY75; MMP2;
MMP9; MRC1; MRC1L1; MRC2; PLA2R1; SEL1L;
Fibronectin type I domain: F12; FN1; HGFAC; PLAT;
References
External links
Fibronectin type-II collagen-binding domain in PROSITE
Protein domains
Peripheral membrane proteins | Fibronectin type II domain | [
"Biology"
] | 561 | [
"Protein domains",
"Protein classification"
] |
14,654,371 | https://en.wikipedia.org/wiki/Protein%20kinase%20domain | The protein kinase domain is a structurally conserved protein domain containing the catalytic function of protein kinases. Protein kinases are a group of enzymes that move a phosphate group onto proteins, in a process called phosphorylation. This functions as an on/off switch for many cellular processes, including metabolism, transcription, cell cycle progression, cytoskeletal rearrangement and cell movement, apoptosis, and differentiation. They also function in embryonic development, physiological responses, and in the nervous and immune system. Abnormal phosphorylation causes many human diseases, including cancer, and drugs that affect phosphorylation can treat those diseases.
Protein kinases possess a catalytic subunit which transfers the gamma phosphate from nucleoside triphosphates (almost always ATP) to the side chain of an amino acid in a protein, resulting in conformational and/or dynamic changes affecting protein function. These enzymes fall into two broad classes, characterised with respect to substrate specificity: serine/threonine-specific and tyrosine-specific.
Function
Protein kinase function has been evolutionarily conserved from Escherichia coli to Homo sapiens. Protein kinases play a role in a multitude of cellular processes, including division, proliferation, apoptosis, and differentiation. Phosphorylation usually results in a functional change of the target protein by changing structure, dynamics, enzyme activity, cellular location, or association with other proteins.
Structure
The catalytic subunits of protein kinases are highly conserved, and the structures of over 280 of the approximately 494 kinase domains from 481 human genes have been determined, leading to large screens to develop kinase-specific inhibitors for the treatments of a number of diseases. Humans have only 437 kinase domains that have catalytic activity; the rest are pseudokinases or catalyze other reactions.
Eukaryotic protein kinases are enzymes that belong to a very extensive family of proteins which share a conserved catalytic core common with both serine/threonine and tyrosine protein kinases. The domain consists of two sub-domains referred to as the N- and C-terminal domains. The N-terminal domain consists of five beta sheet strands and an alpha helix called the C-helix, and the C-terminal domain usually consists of six alpha helices (labeled D, E, F, G, H, and I). The C-terminal domain contains two long loops, called the catalytic loop and the activation loop, which are essential for catalytic activity. The catalytic loop includes the "HRD motif" (for the amino acid sequence His-Arg-Asp), whose aspartic acid residue interacts directly with the hydroxyl group of the target serine, threonine, or tyrosine residue that is phosphorylated.
The activation loop starts with the DFG motif (for the amino acid sequence Asp-Phe-Gly), which helps to bind ATP and magnesium in the active site. Broadly, the state or conformation of the kinase may be classified as DFGin or DFGout, depending on whether the Asp residue of the DFG motif is in or out of the active site. In the active form, the first few residues of the activation loop adopt a specific form of the DFGin conformation. Some inactive structures may adopt one of several other DFGin conformations, while other inactive structures are DFGout.
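As a toy illustration of how these conserved motifs can be located in a sequence, the sketch below scans a made-up, kinase-like fragment (not a real protein; real input would come from a sequence database) for the HRD and DFG motifs described above:

```python
# Illustrative sketch: locate the HRD and DFG motifs in a kinase-domain-like
# sequence. The string below is a hypothetical toy fragment for demonstration.
import re

toy_kinase_seq = (
    "GEGAFGKVVKATDLETGKEVAIKIL"   # N-lobe-like stretch
    "LTEYMEHGSLHRDLKPENLL"        # catalytic-loop-like stretch containing HRD
    "DFGLARELMKTFCGTPEYLAPEV"     # activation-segment-like stretch starting with DFG
)

for motif in ("HRD", "DFG"):
    for m in re.finditer(motif, toy_kinase_seq):
        print(f"{motif} motif at position {m.start() + 1}")
```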
Examples
The following is a list of human proteins containing the protein kinase domain:
AAK1 ; AATK ; ABL1 ; ABL2 ; ACVR1 ; ACVR1B ; ACVR1C ; ACVR2A ; ACVR2B ; ACVRL1 ; AKT1 ; AKT2 ; AKT3 ; ALK ; AMHR2 ; ANKK1 ; ARAF ; AURKA ; AURKB ; AURKC ; AXL ; BLK ; BMP2K ; BMPR1A ; BMPR1B ; BMPR2 ; BMX ; BRAF ; BRSK1 ; BRSK2 ; BTK ; BUB1 ; BUB1B ; CAMK1 ; CAMK1D ; CAMK1G ; CAMK2A ; CAMK2B ; CAMK2D ; CAMK2G ; CAMK4 ; CAMKK1 ; CAMKK2 ; CAMKV ; CASK ; CDC42BPA ; CDC42BPB ; CDC42BPG ; CDC7 ; CDK1 ; CDK10 ; CDK11A ; CDK11B ; CDK12 ; CDK13 ; CDK14 ; CDK15 ; CDK16 ; CDK17 ; CDK18 ; CDK19 ; CDK2 ; CDK20 ; CDK3 ; CDK4 ; CDK5 ; CDK6 ; CDK7 ; CDK8 ; CDK9 ; CDKL1 ; CDKL2 ; CDKL3 ; CDKL4 ; CDKL5 ; CHEK1 ; CHEK2 ; CHUK ; CIT ; CLK1 ; CLK2 ; CLK3 ; CLK4 ; CSF1R ; CSK ; CSNK1A1 ; CSNK1A1L ; CSNK1D ; CSNK1E ; CSNK1G1 ; CSNK1G2 ; CSNK1G3 ; CSNK2A1 ; CSNK2A2 ; CSNK2A3 ; DAPK1 ; DAPK2 ; DAPK3 ; DCLK1 ; DCLK2 ; DCLK3 ; DDR1 ; DDR2 ; DMPK ; DSTYK ; DYRK1A ; DYRK1B ; DYRK2 ; DYRK3 ; DYRK4 ; EGFR ; EIF2AK1 ; EIF2AK2 ; EIF2AK3 ; EIF2AK4 ; EPHA1 ; EPHA10 ; EPHA2 ; EPHA3 ; EPHA4 ; EPHA5 ; EPHA6 ; EPHA7 ; EPHA8 ; EPHB1 ; EPHB2 ; EPHB3 ; EPHB4 ; EPHB6 ; ERBB2 ; ERBB3 ; ERBB4 ; ERN1 ; ERN2 ; FER ; FES ; FGFR1 ; FGFR2 ; FGFR3 ; FGFR4 ; FGR ; FLT1 ; FLT3 ; FLT4 ; FRK ; FYN ; GAK ; GRK1 ; GRK2 ; GRK3 ; GRK4 ; GRK5 ; GRK6 ; GRK7 ; GSG2 ; GSK3A ; GSK3B ; GUCY2C ; GUCY2D ; GUCY2F ; HCK ; HIPK1 ; HIPK2 ; HIPK3 ; HIPK4 ; HUNK ; ICK ; IGF1R ; IKBKB ; IKBKE ; ILK ; INSR ; INSRR ; IRAK1 ; IRAK2 ; IRAK3 ; IRAK4 ; ITK ; JAK1 ; JAK2 ; JAK3 ; KALRN ; KDR ; KIT ; KSR1 ; KSR2 ; LATS1 ; LATS2 ; LCK ; LIMK1 ; LIMK2 ; LMTK2 ; LMTK3 ; LRRK1 ; LRRK2 ; LTK ; LYN ; MAK ; MAP2K1 ; MAP2K2 ; MAP2K3 ; MAP2K4 ; MAP2K5 ; MAP2K6 ; MAP2K7 ; MAP3K1 ; MAP3K10 ; MAP3K11 ; MAP3K12 ; MAP3K13 ; MAP3K14 ; MAP3K15 ; MAP3K19 ; MAP3K2 ; MAP3K20 ; MAP3K21 ; MAP3K3 ; MAP3K4 ; MAP3K5 ; MAP3K6 ; MAP3K7 ; MAP3K8 ; MAP3K9 ; MAP4K1 ; MAP4K2 ; MAP4K3 ; MAP4K4 ; MAP4K5 ; MAPK1 ; MAPK10 ; MAPK11 ; MAPK12 ; MAPK13 ; MAPK14 ; MAPK15 ; MAPK3 ; MAPK4 ; MAPK6 ; MAPK7 ; MAPK8 ; MAPK9 ; MAPKAPK2 ; MAPKAPK3 ; MAPKAPK5 ; MARK1 ; MARK2 ; MARK3 ; MARK4 ; MAST1 ; MAST2 ; MAST3 ; MAST4 ; MASTL ; MATK ; MELK ; MERTK ; MET ; MINK1 ; MKNK1 ; MKNK2 ; MLKL ; MOK ; MOS ; MST1R ; MUSK ; MYLK ; MYLK2 ; MYLK3 ; MYLK4 ; MYO3A ; MYO3B ; NEK1 ; NEK10 ; NEK11 ; NEK2 ; NEK3 ; NEK4 ; NEK5 ; NEK6 ; NEK7 ; NEK8 ; NEK9 ; NIM1K ; NLK ; NPR1 ; NPR2 ; NRBP1 ; NRBP2 ; NRK ; NTRK1 ; NTRK2 ; NTRK3 ; NUAK1 ; NUAK2 ; OBSCN ; OXSR1 ; PAK1 ; PAK2 ; PAK3 ; PAK4 ; PAK5 ; PAK6 ; PAN3 ; PASK ; PBK ; PDGFRA ; PDGFRB ; PDIK1L ; PDPK1 ; PDPK2P ; PEAK1 ; PEAK3 ; PHKG1 ; PHKG2 ; PIK3R4 ; PIM1 ; PIM2 ; PIM3 ; PINK1 ; PKDCC ; PKMYT1 ; PKN1 ; PKN2 ; PKN3 ; PLK1 ; PLK2 ; PLK3 ; PLK4 ; PLK5 ; PNCK ; POMK ; PRKAA1 ; PRKAA2 ; PRKACA ; PRKACB ; PRKACG ; PRKCA ; PRKCB ; PRKCD ; PRKCE ; PRKCG ; PRKCH ; PRKCI ; PRKCQ ; PRKCZ ; PRKD1 ; PRKD2 ; PRKD3 ; PRKG1 ; PRKG2 ; PRKX ; PRKY ; PRPF4B ; PSKH1 ; PSKH2 ; PTK2 ; PTK2B ; PTK6 ; PTK7 ; PXK ; RAF1 ; RET ; RIOK1 ; RIOK2 ; RIOK3 ; RIPK1 ; RIPK2 ; RIPK3 ; RIPK4 ; RNASEL ; ROCK1 ; ROCK2 ; ROR1 ; ROR2 ; ROS1 ; RPS6KA1 ; RPS6KA2 ; RPS6KA3 ; RPS6KA4 ; RPS6KA5 ; RPS6KA6 ; RPS6KB1 ; RPS6KB2 ; RPS6KC1 ; RPS6KL1 ; RSKR ; RYK ; SBK1 ; SBK2 ; SBK3 ; SCYL1 ; SCYL2 ; SCYL3 ; SGK1 ; SGK2 ; SGK223 ; SGK3 ; SIK1 ; SIK1B ; SIK2 ; SIK3 ; SLK ; SNRK ; SPEG ; SRC ; SRMS ; SRPK1 ; SRPK2 ; SRPK3 ; STK10 ; STK11 ; STK16 ; STK17A ; STK17B ; STK24 ; STK25 ; STK26 ; STK3 ; STK31 ; STK32A ; STK32B ; STK32C ; STK33 ; STK35 ; STK36 ; STK38 ; STK38L ; STK39 ; STK4 ; STK40 ; STKLD1 ; STRADA ; STRADB ; STYK1 ; SYK ; TAOK1 ; TAOK2 ; TAOK3 ; TBCK ; TBK1 ; TEC ; TEK ; TESK1 ; TESK2 ; TEX14 ; TGFBR1 ; TGFBR2 ; TIE1 ; 
TLK1 ; TLK2 ; TNIK ; TNK1 ; TNK2 ; TNNI3K ; TP53RK ; TRIB1 ; TRIB2 ; TRIB3 ; TRIO ; TSSK1B ; TSSK2 ; TSSK3 ; TSSK4 ; TSSK6 ; TTBK1 ; TTBK2 ; TTK ; TTN ; TXK ; TYK2 ; TYRO3 ; UHMK1 ; ULK1 ; ULK2 ; ULK3 ; ULK4 ; VRK1 ; VRK2 ; VRK3 ; WEE1 ; WEE2 ; WNK1 ; WNK2 ; WNK3 ; WNK4 ; YES1 ; ZAP70
References
Protein domains
Peripheral membrane proteins | Protein kinase domain | [
"Biology"
] | 2,781 | [
"Protein domains",
"Protein classification"
] |
14,655,528 | https://en.wikipedia.org/wiki/Machine%20Check%20Architecture | In computing, Machine Check Architecture (MCA) is an Intel and AMD mechanism in which the CPU reports hardware errors to the operating system.
Intel's P6 and Pentium 4 family processors, AMD's K7 and K8 family processors, as well as the Itanium architecture implement a machine check architecture that provides a mechanism for detecting and reporting hardware (machine) errors, such as: system bus errors, ECC errors, parity errors, cache errors, and translation lookaside buffer errors. It consists of a set of model-specific registers (MSRs) that are used to set up machine checking and additional banks of MSRs used for recording errors that are detected.
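As a concrete illustration of this MSR layout, the sketch below reads a few of these registers through Linux's msr driver. It assumes the msr kernel module is loaded and root privileges; the addresses used (IA32_MCG_CAP = 0x179, and IA32_MC0_STATUS = 0x401 with a stride of 4 registers per bank) follow the documented x86 MCA register layout.

```python
# Hedged sketch: reading Machine Check MSRs through Linux's msr driver
# (requires `modprobe msr` and root). IA32_MCG_CAP reports the number of
# error-reporting banks; each bank i has IA32_MCi_STATUS at 0x401 + 4*i.
import os, struct

IA32_MCG_CAP = 0x179
IA32_MC0_STATUS = 0x401

def rdmsr(cpu, reg):
    # The msr device returns the 64-bit register at file offset `reg`.
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        return struct.unpack("<Q", os.pread(f.fileno(), 8, reg))[0]

def dump_mca(cpu=0):
    banks = rdmsr(cpu, IA32_MCG_CAP) & 0xFF  # low byte: bank count
    for i in range(banks):
        status = rdmsr(cpu, IA32_MC0_STATUS + 4 * i)
        if status >> 63:  # VAL bit set: this bank holds a logged error
            print(f"bank {i}: status={status:#018x}")

if __name__ == "__main__":
    dump_mca()
```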
See also
Machine-check exception (MCE)
High availability (HA)
Reliability, availability and serviceability (RAS)
Windows Hardware Error Architecture (WHEA)
References
External links
Microsoft's article on Itanium's MCA
Linux x86 daemon for processing of machine checks
Computer architecture
X86 architecture | Machine Check Architecture | [
"Technology",
"Engineering"
] | 203 | [
"Computer engineering",
"Computer architecture",
"Computer hardware stubs",
"Computing stubs",
"Computers"
] |
14,655,680 | https://en.wikipedia.org/wiki/P-type%20ATPase | The P-type ATPases, also known as E1-E2 ATPases, are a large group of evolutionarily related ion and lipid pumps that are found in bacteria, archaea, and eukaryotes. P-type ATPases are α-helical bundle primary transporters named based upon their ability to catalyze auto- (or self-) phosphorylation (hence P) of a key conserved aspartate residue within the pump and their energy source, adenosine triphosphate (ATP). In addition, they all appear to interconvert between at least two different conformations, denoted by E1 and E2. P-type ATPases fall under the P-type ATPase (P-ATPase) Superfamily (TC# 3.A.3) which, as of early 2016, includes 20 different protein families.
Most members of this transporter superfamily catalyze cation uptake and/or efflux; however, one subfamily, the flippases (TC# 3.A.3.8), is involved in flipping phospholipids to maintain the asymmetric nature of the biomembrane.
In humans, P-type ATPases serve as a basis for nerve impulses, relaxation of muscles, secretion and absorption in the kidney, absorption of nutrients in the intestine, and other physiological processes. Prominent examples of P-type ATPases are the sodium-potassium pump (Na+/K+-ATPase), the proton-potassium pump (H+/K+-ATPase), the calcium pump (Ca2+-ATPase) and the plasma membrane proton pump (H+-ATPase) of plants and fungi.
General transport reaction
The generalized reaction for P-type ATPases is
nLigand1 (out) + mLigand2 (in) + ATP → nLigand1 (in) + mLigand2 (out) + ADP + Pi.
where the ligand can be either a metal ion or a phospholipid molecule.
Discovery
The first P-type ATPase discovered was the Na+/K+-ATPase, which Nobel laureate Jens Christian Skou isolated in 1957. The Na+/K+-ATPase was only the first member of a large and still-growing protein family (see Swiss-Prot Prosite motif PS00154).
Structure
P-type ATPases have a single catalytic subunit of 70-140 kDa. The catalytic subunit hydrolyzes ATP, contains the aspartyl phosphorylation site and binding sites for the transported ligand(s), and catalyzes ion transport. Various subfamilies of P-type ATPases also need additional subunits for proper function. Additional subunits that lack catalytic activity are present in the ATPase complexes of P1A, P2A, P2C and P4 ATPases. For example, the catalytic alpha subunit of the Na+/K+-ATPase is accompanied by two additional subunits, beta and gamma, involved in trafficking, folding, and regulation of these pumps. The first P-type ATPase to be crystallized was SERCA1a, a sarco(endo)plasmic reticulum Ca2+-ATPase of fast twitch muscle from adult rabbit. It is generally acknowledged that the structure of SERCA1a is representative for the superfamily of P-type ATPases.
The catalytic subunit of P-type ATPases is composed of a cytoplasmic section and a transmembrane section with binding sites for the transported ligand(s). The cytoplasmic section consists of three cytoplasmic domains, designated the P, N, and A domains, containing over half the mass of the protein.
Membrane section
The transmembrane section (M domain) typically has ten transmembrane helices (M1-M10), with the binding sites for the transported ligand(s) located near the midpoint of the bilayer. While most subfamilies have 10 transmembrane helices, there are some notable exceptions. The P1A ATPases are predicted to have 7, and the large subfamily of heavy metal pumps (P1B) is predicted to have 8 transmembrane helices. P5 ATPases appear to have a total of 12 transmembrane helices.
Common for all P-type ATPases is a core of 6 transmembrane-spanning segments (also called the 'transport (T) domain'; M1-M6 in SERCA), that harbors the binding sites for the translocated ligand(s). The ligand(s) enter through a half-channel to the binding site and leave on the other side of the membrane through another half-channel.
Varying among P-type ATPases is the number of additional transmembrane-spanning segments (also called the 'support (S) domain'), which ranges between subfamilies from 2 to 6. Extra transmembrane segments likely provide structural support for the T domain and can also have specialized functions.
Phosphorylation (P) domain
The P domain contains the canonical aspartic acid residue phosphorylated (in a conserved DKTGT motif; the 'D' is the one-letter abbreviation of the amino acid aspartate) during the reaction cycle. It is composed of two parts widely separated in sequence. These two parts assemble into a seven-strand parallel β-sheet with eight short associated α-helices, forming a Rossmann fold.
The folding pattern and the locations of the critical amino acids for phosphorylation in P-type ATPases has the haloacid dehalogenase fold characteristic of the haloacid dehalogenase (HAD) superfamily, as predicted by sequence homology. The HAD superfamily functions on the common theme of an aspartate ester formation by an SN2 reaction mechanism. This SN2 reaction is clearly observed in the solved structure of SERCA with ADP plus AlF4−.
Nucleotide binding (N) domain
The N domain serves as a built-in protein kinase that functions to phosphorylate the P domain. The N domain is inserted between the two segments of the P domain, and is formed of a seven-strand antiparallel β-sheet between two helix bundles. This domain contains the ATP-binding pocket, pointing out toward the solvent near the P-domain.
Actuator (A) domain
The A domain serves as a built-in protein phosphatase that functions to dephosphorylate the phosphorylated P domain. The A domain is the smallest of the three cytoplasmic domains. It consists of a distorted jellyroll structure and two short helices. It is the actuator domain modulating the occlusion of the transported ligand(s) in the transmembrane binding sites, and it is pivotal in transducing the energy from the hydrolysis of ATP in the cytoplasmic domains to the vectorial transport of cations in the transmembrane domain. The A domain dephosphorylates the P domain as part of the reaction cycle using a highly conserved TGES motif located at one end of the jellyroll.
Regulatory (R) domain
Some members of the P-type ATPase family have additional regulatory (R) domains fused to the pump. Heavy metal P1B pumps can have several N- and C-terminal heavy metal-binding domains that have been found to be involved in regulation. The P2B Ca2+ ATPases have autoinhibitory domains in their amino-terminal (plants) or carboxy-terminal (animals) regions, which contain binding sites for calmodulin, which, in the presence of Ca2+, activates P2B ATPases by neutralizing the terminal constraint. The P3A plasma membrane proton pumps have a C-terminal regulatory domain, which, when unphosphorylated, inhibits pumping.
Mechanism
All P-type ATPases use the energy derived from ATP to drive transport. They form a high-energy aspartyl-phosphoanhydride intermediate in the reaction cycle, and they interconvert between at least two different conformations, denoted by E1 and E2. The E1-E2 notation stems from the initial studies on this family of enzymes made on the Na+/K+-ATPase, where the sodium form and the potassium form are referred to as E1 and E2, respectively, in the "Post-Albers scheme". The E1-E2 schema has been proven to work, but there exist more than two major conformational states. The E1-E2 notation highlights the selectivity of the enzyme. In E1, the pump has high affinity for the exported substrate and low affinity for the imported substrate. In E2, it has low affinity of the exported substrate and high affinity for the imported substrate. Four major enzyme states form the cornerstones in the reaction cycle. Several additional reaction intermediates occur interposed. These are termed E1~P, E2P, E2-P*, and E1/E2.
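As a rough numerical illustration of such a cycle (the four rate constants below are arbitrary placeholders, not measured values for any pump), the steady-state occupancy of each cornerstone state in a unidirectional cycle is inversely proportional to the rate of leaving it:

```python
# Toy steady state of a four-state Post-Albers-like cycle
# E1 -> E1~P -> E2P -> E2 -> E1, with arbitrary (hypothetical) rate constants.
import numpy as np

states = ["E1", "E1~P", "E2P", "E2"]
rates = np.array([50.0, 200.0, 30.0, 100.0])  # s^-1, rate of leaving each state

# For a unidirectional cycle at steady state, flux J = k_i * p_i is the same
# for every step, so occupancies are proportional to 1/k_i.
occupancy = (1.0 / rates) / (1.0 / rates).sum()
flux = 1.0 / (1.0 / rates).sum()  # cycles per second per pump

for s, p in zip(states, occupancy):
    print(f"{s:5s} occupancy {p:.2f}")
print(f"turnover ~ {flux:.1f} s^-1")
```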
ATP hydrolysis occurs in the cytoplasmic headpiece at the interface between domain N and P. Two Mg-ion sites form part of the active site. ATP hydrolysis is tightly coupled to translocation of the transported ligand(s) through the membrane, more than 40 Å away, by the A domain.
Classification
A phylogenetic analysis of 159 sequences made in 1998 by Axelsen and Palmgren suggested that P-type ATPases can be divided into five subfamilies (types; designated as P1-P5), based strictly on a conserved sequence kernel excluding the highly variable N and C terminal regions. Chan et al. (2010) also analyzed P-type ATPases in all major prokaryotic phyla for which complete genome sequence data were available and compared the results with those for eukaryotic P-type ATPases. The phylogenetic analysis grouped the proteins independent of the organism from which they are isolated and showed that the diversification of the P-type ATPase family occurred prior to the separation of eubacteria, archaea, and eukaryota. This underlines the significance of this protein family for cell survival under stress conditions.
P1 ATPases
P1 ATPases (or Type I ATPases) consists of the transition/heavy metal ATPases. Topological type I (heavy metal) P-type ATPases predominate in prokaryotes (approx. tenfold).
P1A ATPases (potassium pumps)
P1A ATPases (or Type IA) are involved in K+ import (TC# 3.A.3.7). They are atypical P-type ATPases because, unlike other P-type ATPases, they function as part of a heterotetrameric complex (called KdpFABC), where the actual K+ transport is mediated by another subcomponent of the complex.
P1B ATPases (heavy metal pumps)
P1B ATPases (or Type IB ATPases) are involved in transport of the soft Lewis acids: Cu+, Ag+, Cu2+, Zn2+, Cd2+, Pb2+ and Co2+ (TC#s 3.A.3.5 and 3.A.3.6). They are key elements for metal resistance and metal homeostasis in a wide range of organisms.
Metal binding to transmembrane metal-binding sites (TM-MBS) in Cu+-ATPases is required for enzyme phosphorylation and subsequent transport. However, Cu+ does not access Cu+-ATPases in a free (hydrated) form but is bound to a chaperone protein. The delivery of Cu+ by Archaeoglobus fulgidus Cu+-chaperone, CopZ (see TC# 3.A.3.5.7), to the corresponding Cu+-ATPase, CopA (TC# 3.A.3.5.30), has been studied. CopZ interacted with and delivered the metal to the N-terminal metal binding domain(s) of CopA (MBDs). Cu+-loaded MBDs, acting as metal donors, were unable to activate CopA or a truncated CopA lacking MBDs. Conversely, Cu+-loaded CopZ activated the CopA ATPase and CopA constructs in which MBDs were rendered unable to bind Cu+. Furthermore, under nonturnover conditions, CopZ transferred Cu+ to the TM-MBS of a CopA lacking MBDs altogether. Thus, MBDs may serve a regulatory function without participating directly in metal transport, and the chaperone delivers Cu+ directly to transmembrane transport sites of Cu+-ATPases. Wu et al. (2008) have determined structures of two constructs of the Cu (CopA) pump from Archaeoglobus fulgidus by cryoelectron microscopy of tubular crystals, which revealed the overall architecture and domain organization of the molecule. They localized its N-terminal MBD within the cytoplasmic domains that use ATP hydrolysis to drive the transport cycle and built a pseudoatomic model by fitting existing crystallographic structures into the cryoelectron microscopy maps for CopA. The results also similarly suggested a Cu-dependent regulatory role for the MBD.
In the Archaeoglobus fulgidus CopA (TC# 3.A.3.5.7), invariant residues in helixes 6, 7 and 8 form two transmembrane metal binding sites (TM-MBSs). These bind Cu+ with high affinity in a trigonal planar geometry. The cytoplasmic Cu+ chaperone CopZ transfers the metal directly to the TM-MBSs; however, loading both of the TM-MBSs requires binding of nucleotides to the enzyme. In agreement with the classical transport mechanism of P-type ATPases, occupancy of both transmembrane sites by cytoplasmic Cu+ is a requirement for enzyme phosphorylation and subsequent transport into the periplasmic or extracellular milieu. Transport studies have shown that most Cu+-ATPases drive cytoplasmic Cu+ efflux, albeit with quite different transport rates in tune with their various physiological roles. Archetypical Cu+-efflux pumps responsible for Cu+ tolerance, like the Escherichia coli CopA, have turnover rates ten times higher than those involved in cuproprotein assembly (or alternative functions). This explains the incapability of the latter group to significantly contribute to the metal efflux required for survival in high copper environments. Structural and mechanistic details of copper-transporting P-type ATPase function have been described.
P2 ATPases
P2 ATPases (or Type II ATPases) are split into four groups. Topological type II ATPases (specific for Na+, K+, H+, Ca2+, Mg2+ and phospholipids) predominate in eukaryotes (approx. twofold).
P2A ATPases (calcium pumps)
P2A ATPases (or Type IIA ATPases) are Ca2+ ATPases that transport Ca2+. P2A ATPases are split into two groups. Members of the first group are called sarco/endoplasmic reticulum Ca2+-ATPases (also referred to as SERCA). These pumps have two Ca2+ ion binding sites and are often regulated by inhibitory accessory proteins having a single transmembrane-spanning segment (e.g. phospholamban and sarcolipin). In the cell, they are located in the sarcoplasmic or endoplasmic reticulum. SERCA1a is a type IIA pump. The second group of P2A ATPases is called secretory pathway Ca2+-ATPases (also referred to as SPCA). These pumps have a single Ca2+ ion binding site and are located in secretory vesicles (animals) or the vacuolar membrane (fungi). (TC# 3.A.3.2)
Crystal structures of sarcoplasmic/endoplasmic reticulum ATP-driven calcium pumps can be found in the RCSB.
SERCA1a is composed of a cytoplasmic section and a transmembrane section with two Ca2+-binding sites. The cytoplasmic section consists of three cytoplasmic domains, designated the P, N, and A domains, containing over half the mass of the protein. The transmembrane section has ten transmembrane helices (M1-M10), with the two Ca2+-binding sites located near the midpoint of the bilayer. The binding sites are formed by side-chains and backbone carbonyls from M4, M5, M6, and M8. M4 is unwound in this region due to a conserved proline (P308). This unwinding of M4 is recognised as a key structural feature of P-type ATPases.
Structures are available for both the E1 and E2 states of the Ca2+ ATPase showing that Ca2+ binding induces major changes in all three cytoplasmic domains relative to each other.
In the case of SERCA1a, energy from ATP is used to transport 2 Ca2+-ions from the cytoplasmic side to the lumen of the sarcoplasmic reticulum, and to countertransport 1-3 protons into the cytoplasm. Starting in the E1/E2 state, the reaction cycle begins as the enzyme releases 1-3 protons from the cation-ligating residues, in exchange for cytoplasmic Ca2+-ions. This leads to assembly of the phosphorylation site between the ATP-bound N domain and the P domain, while the A domain directs the occlusion of the bound Ca2+. In this occluded state, the Ca2+ ions are buried in a proteinaceous environment with no access to either side of the membrane. The Ca2E1~P state is formed through a kinase reaction, where the P domain becomes phosphorylated, producing ADP. The cleavage of the β-phosphodiester bond releases the gamma-phosphate from ADP and frees the N domain from the P domain.
This then allows the A domain to rotate toward the phosphorylation site, making a firm association with both the P and the N domains. This movement of the A domain exerts a downward push on M3-M4 and a drag on M1-M2, forcing the pump to open at the luminal side and forming the E2P state. During this transition, the transmembrane Ca2+-binding residues are forced apart, destroying the high-affinity binding site. This is in agreement with the general model for substrate translocation, showing that energy in primary transport is not used to bind the substrate but to release it again from the buried counter ions. At the same time the N domain becomes exposed to the cytosol, ready for ATP exchange at the nucleotide-binding site.
As the Ca2+ dissociate to the luminal side, the cation binding sites are neutralised by proton binding, which makes a closure of the transmembrane segments favourable. This closure is coupled to a downward rotation of the A domain and a movement of the P domain, which then leads to the E2-P* occluded state. Meanwhile, the N domain exchanges ADP for ATP.
The P domain is dephosphorylated by the A domain, and the cycle completes when the phosphate is released from the enzyme, stimulated by the newly bound ATP, while a cytoplasmic pathway opens to exchange the protons for two new Ca2+ ions.
Xu et al. proposed how Ca2+ binding induces conformational changes in TMS 4 and 5 in the membrane domain (M) that in turn induce rotation of the phosphorylation domain (P). The nucleotide binding (N) and β-sheet (β) domains are highly mobile, with N flexibly linked to P, and β flexibly linked to M. Modeling of the fungal H+ ATPase, based on the structures of the Ca2+ pump, suggested a comparable 70º rotation of N relative to P to deliver ATP to the phosphorylation site.
One report suggests that this sarcoplasmic reticulum (SR) Ca2+ ATPase is homodimeric.
Crystal structures have shown that the conserved TGES loop of the Ca2+-ATPase is isolated in the Ca2E1 state but becomes inserted in the catalytic site in E2 states. Anthonisen et al. (2006) characterized the kinetics of the partial reaction steps of the transport cycle and the binding of the phosphoryl analogs BeF, AlF, MgF, and vanadate in mutants with alterations to conserved TGES loop residues. The data provide functional evidence supporting a role of Glu183 in activating the water molecule involved in the E2P → E2 dephosphorylation and suggest a direct participation of the side chains of the TGES loop in the control and facilitation of the insertion of the loop in the catalytic site. The interactions of the TGES loop furthermore seem to facilitate its disengagement from the catalytic site during the E2 → Ca2E1 transition.
Several crystal structures of the calcium ATPase are available in the RCSB.
P2B ATPases (calcium pumps)
P2B (or Type IIB ATPases) are Ca2+ ATPases that transport Ca2+. These pumps have a single Ca2+ ion binding site and are regulated by binding of calmodulin to autoinhibitory built-in domains situated at either the carboxy-terminal (animals) or amino-terminal (plants) end of the pump protein. In the cell, they are situated in the plasma membrane (animals and plants) and the internal membranes (plants). Plasma membrane Ca2+-ATPase (PMCA) of animals is a P2B ATPase (TC# 3.A.3.2)
P2C ATPases (sodium/potassium and proton/potassium pumps)
P2C ATPases (or Type IIC) include the closely related Na+/K+ and H+/K+ ATPases from animal cells. (TC# 3.A.3.1)
The X-ray crystal structure at 3.5 Å resolution of the pig renal Na+/K+-ATPase has been determined with two rubidium ions bound in an occluded state in the transmembrane part of the α-subunit. Several of the residues forming the cavity for rubidium/potassium occlusion in the Na+/K+-ATPase are homologous to those binding calcium in the Ca2+-ATPase of the sarco(endo)plasmic reticulum. The carboxy terminus of the α-subunit is contained within a pocket between transmembrane helices and seems to be a novel regulatory element controlling sodium affinity, possibly influenced by the membrane potential.
Several crystal structures are available in the RCSB.
P2D ATPases (sodium pumps)
P2D ATPases (or Type IID) include a small number of Na+ (and K+) exporting ATPases found in fungi and mosses. (Fungal K+ transporters; TC# 3.A.3.9)
P3 ATPases
P3 ATPases (or Type III ATPases) are split into two groups.
P3A ATPases (proton pumps)
P3A ATPases (or Type IIIA) contain the plasma membrane H+-ATPases from prokaryotes, protists, plants and fungi.
Plasma membrane H+-ATPase is best characterized in plants and yeast. It maintains the level of intracellular pH and transmembrane potential. Ten transmembrane helices and three cytoplasmic domains define the functional unit of ATP-coupled proton transport across the plasma membrane, and the structure is locked in a functional state not previously observed in P-type ATPases. The transmembrane domain reveals a large cavity, which is likely to be filled with water, located near the middle of the membrane plane where it is lined by conserved hydrophilic and charged residues. Proton transport against a high membrane potential is readily explained by this structural arrangement.
P3B ATPases (magnesium pumps)
P3B ATPases (or Type IIIB) are presumed Mg2+-ATPases found in eubacteria and plants. (Fungal H+ transporters: TC# 3.A.3.3; Mg2+ transporters: TC# 3.A.3.4.)
P4 ATPases (phospholipid flippases)
P4 ATPases (or Type IV ATPases) are flippases involved in the transport of phospholipids, such as phosphatidylserine, phosphatidylcholine and phosphatidylethanolamine.
P5 ATPases
P5 ATPases (or Type V ATPases) have unknown specificity. This large group is found only in eukaryotes and is further divided into two groups.
P5A ATPases
P5A ATPases (or Type VA) are involved in regulation of homeostasis in the endoplasmic reticulum.
P5B ATPases
P5B ATPases (or Type VB) are found in the lysosomal membrane of animals. Mutations in these pumps are linked to a variety of neurological diseases.
Further phylogenetic classification
In addition to the subfamilies of P-type ATPases listed above, several prokaryotic families of unknown function have been identified. The Transporter Classification Database provides a representative list of members of the P-ATPase superfamily, which as of early 2016 consisted of 20 families. Members of the P-ATPase superfamily are found in bacteria, archaea and eukaryotes. Clustering on the phylogenetic tree is usually in accordance with specificity for the transported ion(s).
In eukaryotes, they are present in the plasma membranes or endoplasmic reticular membranes. In prokaryotes, they are localized to the cytoplasmic membranes.
P-type ATPases from 26 eukaryotic species were analyzed later.
Chan et al., (2010) conducted an equivalent but more extensive analysis of the P-type ATPase Superfamily in Prokaryotes and compared them with those from Eukaryotes. While some families are represented in both types of organisms, others are found only in one of the other type. The primary functions of prokaryotic P-type ATPases appear to be protection from environmental stress conditions. Only about half of the P-type ATPase families are functionally characterized.
Horizontal Gene Transfer
Many P-type ATPase families are found exclusively in prokaryotes (e.g. Kdp-type K+ uptake ATPases (type III) and all prokaryotic functionally uncharacterized P-type ATPase (FUPA) families), while others are restricted to eukaryotes (e.g. phospholipid flippases and all 13 eukaryotic FUPA families). Horizontal gene transfer has occurred frequently among bacteria and archaea, which have similar distributions of these enzymes, but rarely between most eukaryotic kingdoms, and even more rarely between eukaryotes and prokaryotes. In some bacterial phyla (e.g. Bacteroidota and Fusobacteriota), ATPase gene gain and loss as well as horizontal transfer occurred seldom in contrast to most other bacterial phyla. Some families (i.e., Kdp-type ATPases) underwent far less horizontal gene transfer than other prokaryotic families, possibly due to their multisubunit characteristics. Functional motifs are better conserved across family lines than across organismal lines, and these motifs can be family specific, facilitating functional predictions. In some cases, gene fusion events created P-type ATPases covalently linked to regulatory catalytic enzymes. In one family (FUPA Family 24), a type I ATPase gene (N-terminal) is fused to a type II ATPase gene (C-terminal) with retention of function only for the latter. Genome minimalization led to preferential loss of P-type ATPase genes. Chan et al. (2010) suggested that in prokaryotes and some unicellular eukaryotes, the primary function of P-type ATPases is protection from extreme environmental stress conditions. The classification of P-type ATPases of unknown function into phylogenetic families provides guides for future molecular biological studies.
Human genes
Human genes encoding P-type ATPases or P-type ATPase-like proteins include:
P1B: Cu++ ATPase: ATP7A, ATP7B
P2A: SERCA Ca2+ ATPase: ATP2A1, ATP2A2, ATP2A3
P2A: secretory pathway Ca2+-ATPase: ATP2C1, ATP2C2
P2B: Ca2+ ATPase: ATP2B1, ATP2B2, ATP2B3, ATP2B4
P2C: Na+/K+ ATPase: ATP1A1, ATP1A2, ATP1A3, ATP1A4, ATP1B1, ATP1B2, ATP1B3, ATP1B4
P2C: H+/K+ ATPase, gastric: ATP4A;
P2C: H+/K+ ATPase, nongastric: ATP12A
P4: Flippase: ATP8A1, ATP8B1, ATP8B2, ATP8B3, ATP8B4, ATP9A, ATP9B, ATP10A, ATP10B, ATP10D, ATP11A, ATP11B, ATP11C
P5: ATP13A1, ATP13A2, ATP13A3, ATP13A4, ATP13A5
See also
H+/ K+-ATPase
Na+/ K+-ATPase
Plasma membrane H+-ATPase
Sarco/endoplasmatic reticulum Ca2+-ATPase
V-ATPase
References
EC 3.6.3
Integral membrane proteins
Transport proteins
Physiology | P-type ATPase | [
"Biology"
] | 6,533 | [
"Physiology"
] |
14,655,845 | https://en.wikipedia.org/wiki/Complex-base%20system | In arithmetic, a complex-base system is a positional numeral system whose radix is an imaginary (proposed by Donald Knuth in 1955) or complex number (proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965).
In general
Let $R$ be an integral domain $\subset \mathbb{C}$, and $|\cdot|$ the (Archimedean) absolute value on it.
A number $x \in R$ in a positional number system is represented as an expansion
$$x = \pm \sum_{\nu} d_\nu \rho^\nu,$$
where
- $\rho$ is the radix (or base) with $|\rho| > 1$,
- $\nu \in \mathbb{Z}$ is the exponent (position or place),
- $d_\nu$ are digits from the finite set of digits $D$, usually with $|d_\nu| < |\rho|$.
The number of digits $n := \#D$ is called the level of decomposition.
A positional number system or coding system is a pair $\langle \rho, D \rangle$ with radix $\rho$ and set of digits $D$, and we write the standard set of digits with $n$ digits as
$$D_n := \{0, 1, 2, \dotsc, n-1\}.$$
Desirable are coding systems with the features:
- Every number in $R$, e.g. the integers $\mathbb{Z}$, the Gaussian integers $\mathbb{Z}[\mathrm{i}]$ or the integers $\mathbb{Z}\big[\tfrac{-1+\mathrm{i}\sqrt{7}}{2}\big]$, is uniquely representable as a finite code, possibly with a sign ±.
- Every number in the field of fractions $K := \operatorname{Quot}(R)$, which possibly is completed for the metric given by $|\cdot|$ yielding $\mathbb{R}$ or $\mathbb{C}$, is representable as an infinite series which converges under $|\cdot|$ for decreasing exponents $\nu$, and the measure of the set of numbers with more than one representation is 0. The latter requires that the set $D$ be minimal, i.e. $\#D = |\rho|$ for real numbers and $\#D = |\rho|^2$ for complex numbers.
In the real numbers
In this notation, our standard decimal coding scheme is denoted by
$$\langle 10, D_{10} \rangle,$$
the standard binary system is
$$\langle 2, D_2 \rangle,$$
the negabinary system is
$$\langle -2, D_2 \rangle,$$
and the balanced ternary system is
$$\langle 3, \{-1, 0, 1\} \rangle.$$
All these coding systems have the mentioned features for $\mathbb{Z}$ and $\mathbb{R}$, and the last two do not require a sign.
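A minimal sketch of why the negabinary system needs no sign: the conversion below (the function name is ours, for illustration) forces every remainder into $\{0, 1\}$, so negative integers receive plain digit strings.

```python
# Minimal negabinary (base -2, digits {0,1}) conversion: the remainder is
# forced into {0,1}, so negative integers need no sign.
def to_negabinary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # pull the remainder back into {0, 1}
            r += 2
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))

# -3 -> "1101": 1*(-8) + 1*4 + 0*(-2) + 1*1 = -3
print(to_negabinary(-3), to_negabinary(6))
```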
In the complex numbers
Well-known positional number systems for the complex numbers include the following ($\mathrm{i}$ being the imaginary unit):
, e.g. and
$\langle 2\mathrm{i}, D_4 \rangle$, the quater-imaginary base, proposed by Donald Knuth in 1955.
$\langle -1 \pm \mathrm{i}, D_2 \rangle$
(see also the section Base −1 ± i below).
, where , and is a positive integer that can take multiple values at a given . For and this is the system
.
, where the set consists of complex numbers , and numbers , e.g.
, where
Binary systems
Binary coding systems of complex numbers, i.e. systems with the digits $D_2 = \{0, 1\}$, are of practical interest.
Listed below are some coding systems (all are special cases of the systems above) and the respective codes for some small (decimal) numbers.
The standard binary (which requires a sign, first line) and the "negabinary" systems (second line) are also listed for comparison. They do not have a genuine expansion for $\mathrm{i}$.
As in all positional number systems with an Archimedean absolute value, there are some numbers with multiple representations. Examples of such numbers are shown in the right column of the table. All of them are repeating fractions with the repetend marked by a horizontal line above it.
If the set of digits is minimal, the set of such numbers has a measure of 0. This is the case with all the mentioned coding systems.
The almost binary quater-imaginary system is listed in the bottom line for comparison purposes. There, the real and imaginary parts interleave.
Base −1 ± i
Of particular interest are the quater-imaginary base (base $2\mathrm{i}$) and the base $-1 \pm \mathrm{i}$ systems discussed below, both of which can be used to finitely represent the Gaussian integers without sign.
Base $-1+\mathrm{i}$, using the digits $0$ and $1$, was proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965.
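A minimal conversion sketch (the function name and the greedy remainder rule are the standard textbook approach, not code from the papers cited above): a Gaussian integer $a+b\mathrm{i}$ is divisible by $-1+\mathrm{i}$ exactly when $a+b$ is even, so the next digit is $(a+b) \bmod 2$ and the exact quotient is $\tfrac{(b-a) - (a+b)\mathrm{i}}{2}$.

```python
# Convert a Gaussian integer a+bi to base -1+i with digits {0, 1}.
# a+bi is divisible by -1+i iff a+b is even, so the digit is (a+b) mod 2;
# exact division by -1+i maps a+bi to ((b-a) - (a+b)i) / 2.
def to_base_minus_one_plus_i(a: int, b: int) -> str:
    if (a, b) == (0, 0):
        return "0"
    digits = []
    while (a, b) != (0, 0):
        d = (a + b) % 2
        a -= d                              # make a+bi divisible by -1+i
        a, b = (b - a) // 2, -(a + b) // 2  # exact division by -1+i
        digits.append(str(d))
    return "".join(reversed(digits))

print(to_base_minus_one_plus_i(2, 0))   # 2 -> "1100", since (-1+i)^3 + (-1+i)^2 = 2
print(to_base_minus_one_plus_i(0, 1))   # i -> "11",   since (-1+i) + 1 = i
```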
Connection to the twindragon
The rounding region of an integer, i.e., the set $S$ of complex (non-integer) numbers that share the integer part of their representation in this system, has in the complex plane a fractal shape: the twindragon (see figure). This set $S$ is, by definition, all points that can be written as $\sum_{k \ge 1} x_k (-1+\mathrm{i})^{-k}$ with $x_k \in D_2$. $S$ can be decomposed into 16 pieces congruent to $\tfrac{1}{4}S$. Notice that if $S$ is rotated counterclockwise by 135°, we obtain two adjacent sets congruent to $\tfrac{1}{\sqrt{2}}S$, because $(-1+\mathrm{i})S = S \cup (S+1)$. The rectangle in the center intersects the coordinate axes counterclockwise at the following points: $\tfrac{2}{15}$, $\tfrac{1}{15}\mathrm{i}$, $-\tfrac{8}{15}$, and $-\tfrac{4}{15}\mathrm{i}$. Thus, $S$ contains all complex numbers with absolute value ≤ $\tfrac{1}{15}$.
As a consequence, there is an injection of the complex rectangle
$$\left[-\tfrac{8}{15}, \tfrac{2}{15}\right] \times \left[-\tfrac{4}{15}, \tfrac{1}{15}\right]$$
into the interval $[0, 1)$ of real numbers by mapping
$$\sum_{k \geq 1} x_k (-1+\mathrm{i})^{-k} \mapsto \sum_{k \geq 1} x_k 2^{-k}$$
with $x_k \in D_2$.
Furthermore, there are two mappings, from the infinite digit strings $D_2^{\mathbb{N}}$ onto $S$ and onto the unit interval respectively, both surjective, which give rise to a surjective (thus space-filling) mapping from the unit interval onto $S$ which, however, is not continuous and thus not a space-filling curve. But a very close relative, the Davis-Knuth dragon, is continuous and a space-filling curve.
See also
Dragon curve
References
External links
"Number Systems Using a Complex Base" by Jarek Duda, the Wolfram Demonstrations Project
"The Boundary of Periodic Iterated Function Systems" by Jarek Duda, the Wolfram Demonstrations Project
"Number Systems in 3D" by Jarek Duda, the Wolfram Demonstrations Project
"Large introduction to complex base numeral systems" with Mathematica sources by Jarek Duda
Non-standard positional numeral systems
Fractals
Ring theory | Complex-base system | [
"Mathematics"
] | 1,079 | [
"Functions and mappings",
"Mathematical analysis",
"Ring theory",
"Mathematical objects",
"Fractals",
"Fields of abstract algebra",
"Mathematical relations",
"Complex numbers",
"Numbers"
] |
14,657,447 | https://en.wikipedia.org/wiki/Spectrochemistry | Spectrochemistry is the application of spectroscopy in several fields of chemistry. It includes the analysis of spectra in chemical terms and the use of spectra to derive the structure of chemical compounds, as well as to qualitatively and quantitatively analyze their presence in a sample. It is a method of chemical analysis that relies on the measurement of the wavelengths and intensity of electromagnetic radiation.
History
It was not until 1666 that Isaac Newton showed that white light from the sun could be dispersed into a continuous series of colors; Newton introduced the term spectrum to describe this phenomenon. He used a small aperture to define a beam of light, a lens to collimate it, a glass prism to disperse it, and a screen to display the resulting spectrum. Newton's analysis of light was the beginning of the science of spectroscopy. Later, it became clear that the Sun's radiation has components outside the visible portion of the spectrum: in 1800 William Herschel showed that the Sun's radiation extended into the infrared, and in 1801 Johann Wilhelm Ritter made a similar observation in the ultraviolet. Joseph von Fraunhofer extended Newton's discovery by observing that the Sun's spectrum, when sufficiently dispersed, was crossed by fine dark lines now known as Fraunhofer lines. Fraunhofer also developed the diffraction grating, which disperses light in much the same way as a glass prism but with some advantages: because the grating applies the interference of light to produce diffraction, it provides a direct measurement of the wavelengths of the diffracted beams. Thus, by extending Thomas Young's study, which demonstrated that a light beam passing through slits emerges in a pattern of light and dark fringes, Fraunhofer was able to measure the wavelengths of spectral lines directly. However, despite his enormous achievements, Fraunhofer was unable to explain the origin of the spectral lines he observed. It was not until 33 years after his death that Gustav Kirchhoff established that each element and compound has its own unique spectrum, and that by studying the spectrum of an unknown source one could determine its chemical composition; with these advancements, spectroscopy became a truly scientific method for analyzing the structures of chemical compounds. By recognizing that each atom and molecule has its own spectrum, Kirchhoff and Robert Bunsen established spectroscopy as a scientific tool for probing atomic and molecular structures, and founded the field of spectrochemical analysis for determining the composition of materials.
IR Spectra Tables & Charts
IR Spectrum Table by Frequency
IR Spectra Table by Compound Class
To use an IR spectrum table, first find the frequency or the compound class in the first column, depending on which type of chart is being used, then read off the corresponding values for absorption, appearance, and other attributes. The absorption value is usually given in cm−1. Note that not all frequencies have a related compound.
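As an illustration, the lookup described above can be mimicked with a small table in code. This is a hedged sketch: the helper function is ours, and the few entries are typical textbook wavenumber ranges included only to show the mechanics of the lookup.

```python
# A minimal sketch of an IR correlation-table lookup.
# The entries below are a small, illustrative subset of typical
# textbook values; a real correlation table is far more extensive.
IR_TABLE = {
    "O-H stretch (alcohol)": (3200, 3550, "strong, broad"),
    "C-H stretch (alkane)":  (2850, 3000, "medium"),
    "C=O stretch (ketone)":  (1705, 1725, "strong"),
    "C=C stretch (alkene)":  (1620, 1680, "medium"),
}

def assign_band(wavenumber_cm1):
    """Return the table entries whose range covers the given band (cm-1)."""
    return [name for name, (lo, hi, _) in IR_TABLE.items()
            if lo <= wavenumber_cm1 <= hi]

print(assign_band(1715))  # ['C=O stretch (ketone)']
```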
Applications
Evaluation of a dual-spectrum IR spectrogram system on invasive ductal carcinoma (IDC) breast cancer
Invasive Ductal Carcinoma (IDC) is one of the most common types of breast cancer, accounting for 8 out of 10 of all invasive breast cancers. According to the American Cancer Society, more than 180,000 women in the United States find out that they have breast cancer each year, and most are diagnosed with this specific type. While early detection is essential for reducing the death rate, a breast tumor may already contain more than 10,000,000 cells by the time it can be observed in x-ray mammograms. The IR spectrogram method proposed by Szu et al. appears more promising, potentially detecting breast cancer cells several months ahead of a mammogram. Clinical tests were carried out with the approval of the Institutional Review Board of National Taiwan University Hospital: from August 2007 to June 2008, 35 patients aged 30-66 (average age 49) were enrolled in the project. The results established that a success rate of about 63% could be achieved with the cross-sectional data, and they suggested that breast cancers may be detected more accurately by cross-referencing S1 maps of multiple three-points.
Molecular spectroscopic methods for elucidating lignin structure
Lignin in plant cell walls is a complex amorphous polymer biosynthesized from three aromatic alcohols, namely p-coumaryl, coniferyl, and sinapyl alcohol. Lignin is a highly branched polymer and accounts for 15-30% by weight of lignocellulosic biomass (LCBM); its structure varies significantly with the type of LCBM, and its composition depends on the degradation process. The biosynthesis consists mainly of radical coupling reactions and generates a distinct lignin polymer in each plant species. Because of this structural complexity, various molecular spectroscopic methods have been applied to resolve the aromatic units and the different interunit linkages in lignins from distinct plant species.
References
Spectroscopy | Spectrochemistry | [
"Physics",
"Chemistry"
] | 1,014 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
14,660,718 | https://en.wikipedia.org/wiki/Geometallurgy | Geometallurgy relates to the practice of combining geology or geostatistics with metallurgy, or, more specifically, extractive metallurgy, to create a spatially or geologically based predictive model for mineral processing plants. It is used in the hard rock mining industry for risk management and mitigation during mineral processing plant design. It is also used, to a lesser extent, for production planning in more variable ore deposits.
There are four important components or steps to developing a geometallurgical program:
the geologically informed selection of a number of ore samples
laboratory-scale test work to determine the ore's response to mineral processing unit operations
the distribution of these parameters throughout the orebody using an accepted geostatistical technique
the application of a mining sequence plan and mineral processing models to generate a prediction of the process plant behavior
Sample selection
The sample mass and size distribution requirements are dictated by the kind of mathematical model that will be used to simulate the process plant, and the test work required to provide the appropriate model parameters. Flotation testing usually requires several kg of sample, and grinding/hardness testing can require between 2 and 300 kg.
The sample selection procedure is performed to optimize granularity, sample support, and cost. Samples are usually core samples composited over the height of the mining bench. For hardness parameters, the variogram often increases rapidly near the origin and can reach the sill at distances significantly smaller than the typical drill hole collar spacing. For this reason the incremental model precision due to additional test work is often simply a consequence of the central limit theorem, and secondary correlations are sought to increase the precision without incurring additional sampling and testing costs. These secondary correlations can involve multi-variable regression analysis with other, non-metallurgical, ore parameters and/or domaining by rock type, lithology, alteration, mineralogy, or structural domains.
Test work
The following tests are commonly used for geometallurgical modeling:
Bond ball mill work index test
Modified or comparative Bond ball mill index
Bond rod mill work index and Bond low energy impact crushing work index
SAGDesign test
SMC test
JK drop-weight test
Point load index test
Sag Power Index test (SPI(R))
MFT test
FKT, SKT, and SKT-WS tests
Geostatistics
Block kriging is the most common geostatistical method used for interpolating metallurgical index parameters, and it is often applied on a domain basis. Classical geostatistics requires that the estimation variable be additive, and there is currently some debate on the additive nature of the metallurgical index parameters measured by the above tests. The Bond ball mill work index test is thought to be additive because of its units of energy; nevertheless, experimental blending results show non-additive behavior. The SPI(R) value is known not to be an additive parameter; however, errors introduced by block kriging are not thought to be significant. These issues, among others, are being investigated as part of the Amira P843 research program on geometallurgical mapping and mine modelling.
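To make the interpolation step concrete, the sketch below implements ordinary kriging in one dimension for a hardness-like index. It is illustrative only: the exponential variogram, its sill/range parameters, and the sample values are all assumptions, not data from any real deposit.

```python
import numpy as np

# Ordinary kriging sketch for a metallurgical index (e.g. a hardness index)
# along one drill-hole coordinate. Variogram model and data are illustrative.
sill, vrange = 1.0, 150.0          # assumed exponential-variogram parameters

def gamma(h):
    """Exponential semivariogram, reaching ~95% of the sill at `vrange`."""
    return sill * (1.0 - np.exp(-3.0 * np.abs(h) / vrange))

x = np.array([0.0, 40.0, 110.0, 200.0])      # sample locations (m)
z = np.array([12.1, 13.4, 15.0, 14.2])       # measured index values

def krige(x0):
    n = len(x)
    # Ordinary kriging system: semivariances plus unbiasedness constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(x[:, None] - x[None, :])
    A[n, n] = 0.0
    b = np.append(gamma(x - x0), 1.0)
    w = np.linalg.solve(A, b)
    return w[:n] @ z                          # kriged estimate at x0

print(round(krige(75.0), 2))
```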
Mine plan and process models
The following process models are commonly applied to geometallurgy:
The Bond equation (a worked sketch follows this list)
The SPI calibration equation, CEET
FLEET*
SMC model
Aminpro-Grind, Aminpro-Flot models
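Of the models listed above, the Bond equation is the simplest to state. The sketch below uses the standard third-theory form W = 10·Wi·(1/√P80 − 1/√F80), with W in kWh/t and the 80%-passing sizes in micrometres; the ore parameters in the example are invented for illustration.

```python
def bond_energy(work_index_kwh_t, f80_um, p80_um):
    """Bond's third-theory comminution energy, in kWh per tonne.

    work_index_kwh_t : Bond work index Wi (kWh/t)
    f80_um, p80_um   : 80%-passing sizes of feed and product (micrometres)
    """
    return 10.0 * work_index_kwh_t * (p80_um ** -0.5 - f80_um ** -0.5)

# Illustrative numbers only: grinding from F80 = 10 mm to P80 = 100 um
# for an ore with an assumed work index of 14 kWh/t.
print(round(bond_energy(14.0, 10_000.0, 100.0), 1), "kWh/t")  # -> 12.6 kWh/t
```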
See also
Extractive metallurgy
Geostatistics
Mining
Mineral Processing
Notes
General references
Isaaks, Edward H., and Srivastava, R. Mohan. An Introduction to Applied Geostatistics. Oxford University Press, Oxford, NY, USA, 1989.
David, M., Handbook of Applied Advanced Geostatistical Ore Reserve Estimation. Elsevier, Amsterdam, 1988.
Mineral Processing Plant Design, Practice, and Control - Proceedings. Ed. Mular, A., Halbe, D., and Barratt, D. Society for Mining, Metallurgy, and Exploration, Inc. 2002.
Mineral Comminution Circuits - Their Operation and Optimisation. Ed. Napier-Munn, T.J., Morrell, S., Morrison, R.D., and Kojovic, T. JKMRC, The University of Queensland, 1996
Economic geology
Metallurgy
Mining
Materials science | Geometallurgy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 889 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
14,661,057 | https://en.wikipedia.org/wiki/Nanomechanics | Nanomechanics is a branch of nanoscience studying fundamental mechanical (elastic, thermal and kinetic) properties of physical systems at the nanometer scale. Nanomechanics has emerged on the crossroads of biophysics, classical mechanics, solid-state physics, statistical mechanics, materials science, and quantum chemistry. As an area of nanoscience, nanomechanics provides a scientific foundation of nanotechnology.
Nanomechanics is that branch of nanoscience which deals with the study and application of fundamental mechanical properties of physical systems at the nanoscale, such as elastic, thermal and kinetic material properties.
Often, nanomechanics is viewed as a branch of nanotechnology, i.e., an applied area with a focus on the mechanical properties of engineered nanostructures and nanosystems (systems with nanoscale components of importance). Examples of the latter include nanomachines, nanoparticles, nanopowders, nanowires, nanorods, nanoribbons, nanotubes, including carbon nanotubes (CNT) and boron nitride nanotubes (BNNTs); nanoshells, nanomembranes, nanocoatings, nanocomposite/nanostructured materials, (fluids with dispersed nanoparticles); nanomotors, etc.
Some of the well-established fields of nanomechanics are: nanomaterials, nanotribology (friction, wear and contact mechanics at the nanoscale), nanoelectromechanical systems (NEMS), and nanofluidics.
As a fundamental science, nanomechanics is based on some empirical principles (basic observations), namely general mechanics principles and specific principles arising from the smallness of physical sizes of the object of study.
General mechanics principles include:
Energy and momentum conservation principles
Variational Hamilton's principle
Symmetry principles
Due to smallness of the studied object, nanomechanics also accounts for:
Discreteness of the object, whose size is comparable with the interatomic distances
Plurality, but finiteness, of degrees of freedom in the object
Importance of thermal fluctuations
Importance of entropic effects (see configuration entropy)
Importance of quantum effects (see quantum machine)
These principles provide a basic insight into the novel mechanical properties of nanometer-scale objects. Novelty is understood in the sense that these properties are either not present in similar macroscale objects or differ much from the properties of those (e.g., nanorods vs. usual macroscopic beam structures). In particular, the smallness of the object itself gives rise to various surface effects determined by the higher surface-to-volume ratio of nanostructures, and thus affects their mechanoenergetic and thermal properties (melting point, heat capacitance, etc.). Discreteness serves as a fundamental reason, for instance, for the dispersion of mechanical waves in solids and for some special behavior of basic elastomechanics solutions at small scales. Plurality of degrees of freedom and the rise of thermal fluctuations are the reasons for thermal tunneling of nanoparticles through potential barriers, as well as for the cross-diffusion of liquids and solids. Smallness and thermal fluctuations provide the basic reasons for the Brownian motion of nanoparticles. The increased importance of thermal fluctuations and configuration entropy at the nanoscale gives rise to superelasticity, entropic elasticity (entropic forces), and other exotic types of elasticity of nanostructures. Aspects of configuration entropy are also of great interest in the context of self-organization and cooperative behavior of open nanosystems.
Quantum effects determine forces of interaction between individual atoms in physical objects, which are introduced in nanomechanics by means of some averaged mathematical models called interatomic potentials.
Subsequent utilization of the interatomic potentials within classical multibody dynamics provides deterministic mechanical models of nanostructures and systems at the atomic scale/resolution. Numerical methods for solving these models are called molecular dynamics (MD), and sometimes molecular mechanics (especially in relation to statically equilibrated (still) models). Non-deterministic numerical approaches include Monte Carlo, kinetic Monte Carlo (KMC), and other methods. Contemporary numerical tools also include hybrid multiscale approaches allowing concurrent or sequential utilization of atomistic-scale methods (usually MD) with continuum (macro) scale methods (usually the finite element method) within a single mathematical model. Development of these complex methods is a separate subject of applied mechanics research.
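As a toy illustration of the molecular-dynamics approach described above, the sketch below integrates two atoms interacting through a Lennard-Jones potential with the velocity-Verlet scheme. Everything here, including the reduced units and the two-atom setup, is an assumption chosen for brevity; it is not a production MD code.

```python
import numpy as np

# Velocity-Verlet MD for two atoms with a Lennard-Jones interatomic
# potential, in reduced units (epsilon = sigma = mass = 1).
def lj_force(r_vec):
    """Force on the atom at +r_vec from U(r) = 4 (r^-12 - r^-6)."""
    r = np.linalg.norm(r_vec)
    return 24.0 * (2.0 * r**-13 - r**-7) * r_vec / r

pos = np.array([[0.0, 0.0], [1.5, 0.0]])   # initial positions
vel = np.zeros_like(pos)
dt = 0.005

f = lj_force(pos[1] - pos[0])
forces = np.array([-f, f])                 # Newton's third law
for step in range(1000):
    pos += vel * dt + 0.5 * forces * dt**2
    f_new = lj_force(pos[1] - pos[0])
    new_forces = np.array([-f_new, f_new])
    vel += 0.5 * (forces + new_forces) * dt
    forces = new_forces

# The pair oscillates about the potential minimum near r = 2**(1/6).
print("separation after 1000 steps:", np.linalg.norm(pos[1] - pos[0]))
```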
Quantum effects also determine novel electrical, optical and chemical properties of nanostructures, and therefore they find even greater attention in adjacent areas of nanoscience and nanotechnology, such as nanoelectronics, advanced energy systems, and nanobiotechnology.
See also
Molecular machine
Geometric phase (section Stochastic Pump Effect)
Nanoelectromechanical relay
References
Sattler KD. Handbook of Nanophysics: Vol. 1 Principles and Methods. CRC Press, 2011.
Bhushan B (editor). Springer Handbook of Nanotechnology, 2nd edition. Springer, 2007.
Liu WK, Karpov EG, Park HS. Nano Mechanics and Materials: Theory, Multiscale Methods and Applications. Wiley, 2006.
Cleland AN. Foundations of Nanomechanics. Springer, 2003.
Valeh I. Bakhshali. Nanomechanics and its applications: mechanical properties of materials. International E-Conference on Engineering, Technology and Management - ICETM 2020.
Nanotechnology
ja:ナノマシン | Nanomechanics | [
"Materials_science",
"Engineering"
] | 1,162 | [
"Nanotechnology",
"Materials science"
] |
9,501,745 | https://en.wikipedia.org/wiki/Thermal%20effusivity | In thermodynamics, a material's thermal effusivity, also known as thermal responsivity, is a measure of its ability to exchange energy with its surroundings. It is an intensive quantity defined as the square root of the product of the material's thermal conductivity (k) and its volumetric heat capacity (ρc_p), or as the ratio of thermal conductivity to the square root of thermal diffusivity (α): e = √(kρc_p) = k/√α.
Some authors use the symbol e to denote the thermal responsivity, although its usage alongside an exponential can become confusing. The SI units for thermal effusivity are W·s^(1/2)·m^(−2)·K^(−1) or, equivalently, J·m^(−2)·K^(−1)·s^(−1/2).
Thermal effusivity can also be a measure of a solid or rigid material's thermal inertia.
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature when placed into thermal contact.
If T₁ and T₂ are the temperatures of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes T_m = (e₁T₁ + e₂T₂)/(e₁ + e₂).
Specialty sensors have also been developed based on this relationship to measure effusivity.
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's intensive heat transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation, and measures the speed at which thermal equilibrium can be reached by a body. By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function.
Applications
Temperature at a contact surface
If two semi-infinite bodies initially at temperatures T₁ and T₂ are brought into perfect thermal contact, the temperature at the contact surface will be a weighted mean based on their relative effusivities. This relationship can be demonstrated with a very simple "control volume" back-of-the-envelope calculation:
Consider the following 1D heat conduction problem. Region 1 is material 1, initially at uniform temperature T₁, and region 2 is material 2, initially at uniform temperature T₂. Given some period of time t after the two are brought into contact, heat will have diffused across the boundary between the two materials. The thermal diffusivity of a material is α = k/(ρc). From the heat equation (or diffusion equation), a characteristic diffusion length into material 1 is
x₁ = √(α₁t), where α₁ = k₁/(ρ₁c₁).
Similarly, a characteristic diffusion length into material 2 is
x₂ = √(α₂t), where α₂ = k₂/(ρ₂c₂).
Assume that the temperature within the characteristic diffusion length on either side of the boundary between the two materials is uniformly at the contact temperature T_m (this is the essence of a control-volume approach). Conservation of energy dictates that
ρ₁c₁x₁(T₁ − T_m) = ρ₂c₂x₂(T_m − T₂).
Substitution of the expressions above for x₁ and x₂ and elimination of √t yields an expression for the contact temperature, T_m = (e₁T₁ + e₂T₂)/(e₁ + e₂).
This expression is valid for all times for semi-infinite bodies in perfect thermal contact. It is also a good first guess for the initial contact temperature for finite bodies.
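A short numeric check of this weighted-mean rule is given below. The material property values are rough, order-of-magnitude figures quoted only to make the skin-contact example (discussed further down) vivid; treat them as assumptions.

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c), SI units W*s^0.5/(m^2*K)."""
    return math.sqrt(k * rho * c)

def contact_temperature(e1, T1, e2, T2):
    """Interface temperature of two semi-infinite bodies in perfect contact."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Rough illustrative properties (k in W/m/K, rho in kg/m^3, c in J/kg/K):
skin = effusivity(0.3, 1000.0, 3500.0)    # ~1000 in SI effusivity units
steel = effusivity(50.0, 7800.0, 500.0)   # ~14000
oak = effusivity(0.17, 700.0, 2400.0)     # ~500

# Touching 20 C steel vs. 20 C wood with 33 C skin:
print(round(contact_temperature(skin, 33.0, steel, 20.0), 1))  # ~21 C: feels cool
print(round(contact_temperature(skin, 33.0, oak, 20.0), 1))    # ~28.5 C: feels warm
```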
Even though the underlying heat equation is parabolic and not hyperbolic (i.e. it does not support waves), if we in some rough sense allow ourselves to think of a temperature jump as two materials are brought into contact as a "signal", then the fraction of the temperature signal transmitted from 1 to 2 is (T_m − T₂)/(T₁ − T₂) = e₁/(e₁ + e₂). Clearly, this analogy must be used with caution; among other caveats, it only applies in a transient sense, to media which are large enough (or time scales short enough) to be considered effectively infinite in extent.
Heat sensed by human skin
An application of thermal effusivity is the quasi-qualitative measurement of coolness or warmth "feel" of materials, also known as thermoception. It is a particularly important metric for textiles, fabrics, and building materials. Rather than temperature, skin thermoreceptors are highly responsive to the inward or outward flow of heat. Thus, despite having similar temperatures near room temperature, a high effusivity metal object is detected as cool while a low effusivity fabric is sensed as being warmer.
Diathermal walls
For a diathermal wall having a stepped "constant heat" boundary condition imposed at t = 0 onto one side, thermal effusivity performs nearly the same role in limiting the initial dynamic thermal response (rigorously, during times less than the heat diffusion time to transit the wall) as the insulation U-factor plays in defining the static temperature obtained by the other side after a long time. A dynamic U-factor and a diffusion time τ for a wall of thickness L, thermal diffusivity α and thermal conductivity k are specified by:
U_dyn = e/√t during t < τ, where τ = L²/α and e = k/√α.
At t = τ this dynamic value matches the static U-factor k/L.
Planetary science
For planetary surfaces, thermal inertia is a key phenomenon controlling the diurnal and seasonal surface temperature variations. The thermal inertia of a terrestrial planet such as Mars can be approximated from the thermal effusivity of its near-surface geologic materials. In remote sensing applications, thermal inertia represents a complex combination of particle size, rock abundance, bedrock outcropping and the degree of induration (i.e. thickness and hardness).
A rough approximation to thermal inertia is sometimes obtained from the amplitude of the diurnal temperature curve (i.e. maximum minus minimum surface temperature). The temperature of a material with low thermal effusivity changes significantly during the day, while the temperature of a material with high thermal effusivity does not change as drastically. Deriving and understanding the thermal inertia of the surface can help to recognize small-scale features of that surface. In conjunction with other data, thermal inertia can help to characterize surface materials and the geologic processes responsible for forming these materials.
On Earth, thermal inertia of the global ocean is a major factor influencing climate inertia. Ocean thermal inertia is much greater than land inertia because of convective heat transfer, especially through the upper mixed layer. The thermal effusivities of stagnant and frozen water underestimate the vast thermal inertia of the dynamic and multi-layered ocean.
Thermographic inspection
Thermographic inspection encompasses a variety of nondestructive testing methods that utilize the wave-like characteristics of heat propagation through a transfer medium. These methods include Pulse-echo thermography and thermal wave imaging. Thermal effusivity and diffusivity of the materials being inspected can serve to simplify the mathematical modelling of, and thus interpretation of results from these techniques.
Measurement interpretation
When a material is measured from the surface with short test times by any transient method or instrument, the heat transfer mechanisms generally include thermal conduction, convection, radiation and phase changes. The diffusive process of conduction may dominate the thermal behavior of solid bodies near and below room temperature.
A contact resistance (due to surface roughness, oxidation, impurities, etc.) between the sensor and sample may also exist. Evaluations with high heat dissipation (driven by large temperature differentials) can likewise be influenced by an interfacial thermal resistance. All of these factors, along with the body's finite dimensions, must be considered during execution of measurements and interpretation of results.
Thermal effusivity of selected materials and substances
This is a list of the thermal effusivity of some common substances, evaluated at room temperature unless otherwise indicated.
See also
Thermal contact conductance
Thermal diffusivity
Heat equation
Heat capacity
References
External links
Thermodynamic properties
Physical quantities
Heat conduction
Materials testing | Thermal effusivity | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,590 | [
"Physical phenomena",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Materials science",
"Materials testing",
"Thermodynamics",
"Heat conduction",
"Physical properties"
] |
9,501,969 | https://en.wikipedia.org/wiki/Boyle%20temperature | The Boyle temperature, named after Robert Boyle, is formally defined as the temperature for which the second virial coefficient, B₂(T), becomes zero.
It is at this temperature that the attractive forces and the repulsive forces acting on the gas particles balance out.
The pressure of a real gas can be written as the virial equation of state:
pV_m = RT(1 + B₂(T)/V_m + B₃(T)/V_m² + ...).
Since higher-order virial coefficients are generally much smaller than the second coefficient, the gas tends to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.
In any case, when pressures are low, the second virial coefficient will be the only relevant one, because the remaining terms concern higher orders of the pressure. Also, at the Boyle temperature the dip in a pV diagram tends toward a straight line over a range of pressures. We then have
Z = pV_m/(RT) = 1 + B₂(T)/V_m + ... ≈ 1,
where Z is the compressibility factor.
Expanding the van der Waals equation in powers of 1/V_m, one finds that B₂(T) = b − a/(RT), so that the Boyle temperature is T_B = a/(Rb).
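As a worked check of the last statement, the short derivation below expands the van der Waals equation explicitly; it is a standard textbook calculation, reproduced here in LaTeX.

```latex
% Second virial coefficient of a van der Waals gas and its Boyle temperature.
\begin{align*}
p &= \frac{RT}{V_m - b} - \frac{a}{V_m^2}
   = \frac{RT}{V_m}\left(1 - \frac{b}{V_m}\right)^{-1} - \frac{a}{V_m^2} \\
  &\approx \frac{RT}{V_m}\left[1 + \left(b - \frac{a}{RT}\right)\frac{1}{V_m}
   + \cdots\right]
   \quad\Rightarrow\quad B_2(T) = b - \frac{a}{RT}, \\
B_2(T_B) &= 0 \quad\Longrightarrow\quad T_B = \frac{a}{Rb}.
\end{align*}
```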
See also
Virial equation of state
References
Temperature
Thermodynamics
Robert Boyle | Boyle temperature | [
"Physics",
"Chemistry",
"Mathematics"
] | 203 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Dynamical systems"
] |
9,502,303 | https://en.wikipedia.org/wiki/Flux-corrected%20transport | Flux-corrected transport (FCT) is a conservative shock-capturing scheme for solving Euler equations and other hyperbolic equations which occur in gas dynamics, aerodynamics, and magnetohydrodynamics. It is especially useful for solving problems involving shock or contact discontinuities. An FCT algorithm consists of two stages, a transport stage and a flux-corrected anti-diffusion stage. The numerical errors introduced in the first stage (i.e., the transport stage) are corrected in the anti-diffusion stage.
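A minimal numerical sketch of the two-stage structure is given below for 1D linear advection: an upwind transport stage followed by an anti-diffusion stage limited in the style of the Boris-Book flux limiter. The grid size, CFL number, and initial data are assumptions chosen for illustration; this is a toy demonstration of the idea, not the SHASTA algorithm itself.

```python
import numpy as np

# Toy 1D flux-corrected transport for u_t + a u_x = 0 on a periodic grid.
n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a                                        # CFL number c = 0.5
c = a * dt / dx
x = np.arange(n) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)           # square wave

def step(u):
    # Stage 1 (transport): low-order upwind fluxes -> diffused solution u_td.
    f_low = a * u                                        # upwind flux for a > 0
    u_td = u - c * (u - np.roll(u, 1))
    # Raw anti-diffusive flux: high-order (Lax-Wendroff) minus low-order.
    f_high = a * (u + np.roll(u, -1)) / 2 - a * c * (np.roll(u, -1) - u) / 2
    A = (f_high - f_low) * dt / dx                       # at interface i+1/2
    # Stage 2 (flux correction): Boris-Book limiter keeps u_td monotone.
    d1 = np.roll(u_td, -2) - np.roll(u_td, -1)           # u_{i+2} - u_{i+1}
    dm = u_td - np.roll(u_td, 1)                         # u_i - u_{i-1}
    s = np.sign(A)
    Ac = s * np.maximum(0.0, np.minimum.reduce([np.abs(A), s * d1, s * dm]))
    return u_td - (Ac - np.roll(Ac, 1))

for _ in range(100):
    u = step(u)
# The corrected scheme should keep the profile close to the [0, 1] bounds,
# unlike an unlimited high-order scheme, which would oscillate at the jumps.
print("min/max after 100 steps:", u.min(), u.max())
```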
References
Jay P. Boris and David L. Book, "Flux-corrected transport, I: SHASTA, a fluid transport algorithm that works", J. Comput. Phys. 11, pp. 38 (1973).
External links
Fully multidimensional flux-corrected transport algorithms for fluids
See also
Computational fluid dynamics
Computational magnetohydrodynamics
Shock capturing methods
Volume of fluid method
Computational fluid dynamics | Flux-corrected transport | [
"Physics",
"Chemistry"
] | 192 | [
"Computational physics stubs",
"Computational fluid dynamics",
"Computational physics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,503,346 | https://en.wikipedia.org/wiki/Multiple%20EM%20for%20Motif%20Elicitation | Multiple Expectation maximizations for Motif Elicitation (MEME) is a tool for discovering motifs in a group of related DNA or protein sequences.
A motif is a sequence pattern that occurs repeatedly in a group of related protein or DNA sequences and is often associated with some biological function. MEME represents motifs as position-dependent letter-probability matrices which describe the probability of each possible letter at each position in the pattern. Individual MEME motifs do not contain gaps. Patterns with variable-length gaps are split by MEME into two or more separate motifs.
MEME takes as input a group of DNA or protein sequences (the training set) and outputs as many motifs as requested. It uses statistical modeling techniques to automatically choose the best width, number of occurrences, and description for each motif.
MEME is the first of a collection of tools for analyzing motifs called the MEME suite.
Definition
The MEME algorithm could be understood from two different perspectives. From a biological point of view, MEME identifies and characterizes shared motifs in a set of unaligned sequences. From the computer science aspect, MEME finds a set of non-overlapping, approximately matching substrings given a starting set of strings.
Use
MEME can be used to find similar biological functions and structures in different sequences. It is necessary to take into account that sequence variation can be significant and that motifs are sometimes very small. It is also useful to keep in mind that protein binding sites are very specific, which makes motif discovery helpful for reducing wet-lab experiments (saving cost and time). Indeed, to discover motifs that are relevant from a biological point of view, it is necessary to carefully choose: the best width of the motifs, the number of occurrences in each sequence, and the composition of each motif.
Algorithm components
The algorithm uses several types of well known functions:
Expectation maximization (EM).
EM based heuristic for choosing the EM starting point.
Maximum likelihood ratio based (LRT-based) heuristic for determining the best number of model-free parameters.
Multi-start for searching over possible motif widths.
Greedy search for finding multiple motifs.
However, one often does not know where the motif starting positions are. Several possibilities exist: exactly one motif occurrence per sequence, one or zero occurrences per sequence, or any number of occurrences per sequence.
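To make the EM component concrete, here is a minimal sketch of repeated EM passes for the simplest variant above (exactly one motif occurrence per sequence). It is a bare-bones illustration of the idea, not the actual MEME implementation; the pseudocount value and the tiny example data are assumptions.

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {b: i for i, b in enumerate(ALPHABET)}

def em_step(seqs, pwm, w):
    """One EM iteration for the one-occurrence-per-sequence motif model."""
    counts = np.full((w, 4), 0.1)            # pseudocounts (assumed value)
    for s in seqs:
        starts = range(len(s) - w + 1)
        # E-step: likelihood of the motif starting at each offset.
        like = np.array([np.prod([pwm[j, IDX[s[p + j]]] for j in range(w)])
                         for p in starts])
        post = like / like.sum()             # posterior over start positions
        # M-step contribution: posterior-weighted letter counts.
        for p, q in zip(starts, post):
            for j in range(w):
                counts[j, IDX[s[p + j]]] += q
    # Renormalize rows into a position-dependent letter-probability matrix.
    return counts / counts.sum(axis=1, keepdims=True)

seqs = ["ACGTACGGCA", "TTTACGGATC", "GACGGTTTAC"]   # each contains 'ACGG'
w = 4
pwm = np.full((w, 4), 0.25)                  # start from a flat motif model
for _ in range(20):
    pwm = em_step(seqs, pwm, w)
print(np.round(pwm, 2))                      # may sharpen toward 'ACGG'
```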
See also
Sequence motif
Sequence alignment
References
External links
The MEME Suite — Motif-based sequence analysis tools
GPU Accelerated version of MEME
EXTREME — An online EM implementation of the MEME model for fast motif discovery in large ChIP-Seq and DNase-Seq Footprinting data
Bioinformatics | Multiple EM for Motif Elicitation | [
"Engineering",
"Biology"
] | 533 | [
"Bioinformatics",
"Biological engineering"
] |
9,504,117 | https://en.wikipedia.org/wiki/Charge%20density%20wave | A charge density wave (CDW) is an ordered quantum fluid of electrons in a linear chain compound or layered crystal. The electrons within a CDW form a standing wave pattern and sometimes collectively carry an electric current. The electrons in such a CDW, like those in a superconductor, can flow through a linear chain compound en masse, in a highly correlated fashion. Unlike a superconductor, however, the electric CDW current often flows in a jerky fashion, much like water dripping from a faucet, due to its electrostatic properties. In a CDW, the combined effects of pinning (due to impurities) and electrostatic interactions (due to the net electric charges of any CDW kinks) likely play critical roles in the CDW current's jerky behavior, as discussed in sections 4 & 5 below.
Most CDW's in metallic crystals form due to the wave-like nature of electrons – a manifestation of quantum mechanical wave–particle duality – causing the electronic charge density to become spatially modulated, i.e., to form periodic "bumps" in charge. This standing wave affects each electronic wave function, and is created by combining electron states, or wavefunctions, of opposite momenta. The effect is somewhat analogous to the standing wave in a guitar string, which can be viewed as the combination of two interfering, traveling waves moving in opposite directions (see interference (wave propagation)).
The CDW in electronic charge is accompanied by a periodic distortion – essentially a superlattice – of the atomic lattice. The metallic crystals look like thin shiny ribbons (e.g., quasi-1-D NbSe3 crystals) or shiny flat sheets (e.g., quasi-2-D, 1T-TaS2 crystals). The CDW's existence was first predicted in the 1930s by Rudolf Peierls. He argued that a 1-D metal would be unstable to the formation of energy gaps at the Fermi wavevectors ±kF, which reduce the energies of the filled electronic states at ±kF as compared to their original Fermi energy EF. The temperature below which such gaps form is known as the Peierls transition temperature, TP.
The electron spins are spatially modulated to form a standing spin wave in a spin density wave (SDW). A SDW can be viewed as two CDWs for the spin-up and spin-down subbands, whose charge modulations are 180° out-of-phase.
Fröhlich model of superconductivity
In 1954, Herbert Fröhlich proposed a microscopic theory, in which energy gaps at ±kF would form below a transition temperature as a result of the interaction between the electrons and phonons of wavevector Q=2kF. Conduction at high temperatures is metallic in a quasi-1-D conductor, whose Fermi surface consists of fairly flat sheets perpendicular to the chain direction at ±kF. The electrons near the Fermi surface couple strongly with the phonons of 'nesting' wave number Q = 2kF. The 2kF mode thus becomes softened as a result of the electron-phonon interaction. The 2kF phonon mode frequency decreases with decreasing temperature, and finally goes to zero at the Peierls transition temperature. Since phonons are bosons, this mode becomes macroscopically occupied at lower temperatures, and is manifested by a static periodic lattice distortion. At the same time, an electronic CDW forms, and the Peierls gap opens up at ±kF. Below the Peierls transition temperature, a complete Peierls gap leads to thermally activated behavior in the conductivity due to normal uncondensed electrons.
However, a CDW whose wavelength is incommensurate with the underlying atomic lattice, i.e., where the CDW wavelength is not an integer multiple of the lattice constant, would have no preferred position, or phase φ, in its charge modulation ρ0 + ρ1cos[2kFx – φ]. Fröhlich thus proposed that the CDW could move and, moreover, that the Peierls gaps would be displaced in momentum space along with the entire Fermi sea, leading to an electric current proportional to dφ/dt. However, as discussed in subsequent sections, even an incommensurate CDW cannot move freely, but is pinned by impurities. Moreover, interaction with normal carriers leads to dissipative transport, unlike a superconductor.
CDWs in quasi-2-D layered materials
Several quasi-2-D systems, including layered transition metal dichalcogenides, undergo Peierls transitions to form quasi-2-D CDWs. These result from multiple nesting wavevectors coupling different flat regions of the Fermi surface. The charge modulation can either form a honeycomb lattice with hexagonal symmetry or a checkerboard pattern. A concomitant periodic lattice displacement accompanies the CDW and has been directly observed in 1T-TaS2 using cryogenic electron microscopy. In 2012, evidence for competing, incipient CDW phases were reported for layered cuprate high-temperature superconductors such as YBCO.
CDW transport in linear chain compounds
Early studies of quasi-1-D conductors were motivated by a proposal, in 1964, that certain types of polymer chain compounds could exhibit superconductivity with a high critical temperature Tc. The theory was based on the idea that pairing of electrons in the BCS theory of superconductivity could be mediated by interactions of conducting electrons in one chain with nonconducting electrons in some side chains. (By contrast, electron pairing is mediated by phonons, or vibrating ions, in the BCS theory of conventional superconductors.) Since light electrons, instead of heavy ions, would lead to the formation of Cooper pairs, their characteristic frequency and, hence, energy scale and Tc would be enhanced. Organic materials, such as TTF-TCNQ, were measured and studied theoretically in the 1970s. These materials were found to undergo a metal-insulator, rather than superconducting, transition. It was eventually established that such experiments represented the first observations of the Peierls transition.
The first evidence for CDW transport in inorganic linear chain compounds, such as transition metal trichalcogenides, was reported in 1976 by Monceau et al., who observed enhanced electrical conduction at increased electric fields in NbSe3. The nonlinear contribution to the electrical conductivity σ vs. field E was fit to a Landau-Zener tunneling characteristic ~ exp[-E0/E] (see Landau–Zener formula), but it was soon realized that the characteristic Zener field E0 was far too small to represent Zener tunneling of normal electrons across the Peierls gap. Subsequent experiments showed a sharp threshold electric field, as well as peaks in the noise spectrum (narrow band noise) whose fundamental frequency scales with the CDW current. These and other experiments confirm that the CDW collectively carries an electric current in a jerky fashion above the threshold field.
Classical models of CDW depinning
Linear chain compounds exhibiting CDW transport have CDW wavelengths λcdw = π/kF incommensurate with (i.e., not an integer multiple of) the lattice constant. In such materials, pinning is due to impurities that break the translational symmetry of the CDW with respect to φ. The simplest model treats the pinning as a sine-Gordon potential of the form u(φ) = u0[1 – cosφ], while the electric field tilts the periodic pinning potential until the phase can slide over the barrier above the classical depinning field. Known as the overdamped oscillator model, since it also models the damped CDW response to oscillatory (AC) electric fields, this picture accounts for the scaling of the narrow-band noise with CDW current above threshold.
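The overdamped oscillator picture lends itself to a quick numerical illustration. Below is a minimal sketch that integrates the dimensionless phase equation dφ/dt = e − sin φ, with e = E/E_T the reduced field; the parameterization, time step, and field values are our assumptions for illustration, not quantities from the CDW literature.

```python
import numpy as np

def mean_phase_velocity(e, t_max=2000.0, dt=0.01):
    """Average d(phi)/dt for the dimensionless washboard equation
    d(phi)/dt = e - sin(phi), where e = E/E_T is the reduced field."""
    phi = 0.0
    steps = int(t_max / dt)
    for _ in range(steps):
        phi += dt * (e - np.sin(phi))   # forward-Euler step (overdamped)
    return phi / t_max

for e in [0.5, 0.9, 1.1, 1.5, 2.0]:
    v = mean_phase_velocity(e)
    print(f"E/E_T = {e:3.1f} -> CDW current ~ <dphi/dt> = {v:5.3f}")
# Below threshold (e < 1) the phase pins and the current vanishes; above it,
# <dphi/dt> grows as sqrt(e^2 - 1), and the periodic phase slips correspond
# to the narrow-band noise whose frequency scales with the CDW current.
```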
However, since impurities are randomly distributed throughout the crystal, a more realistic picture must allow for variations in optimum CDW phase φ with position – essentially a modified sine-Gordon picture with a disordered washboard potential. This is done in the Fukuyama-Lee-Rice (FLR) model, in which the CDW minimizes its total energy by optimizing both the elastic strain energy due to spatial gradients in φ and the pinning energy. Two limits that emerge from FLR include weak pinning, typically from isoelectronic impurities, where the optimum phase is spread over many impurities and the depinning field scales as ni2 (ni being the impurity concentration) and strong pinning, where each impurity is strong enough to pin the CDW phase and the depinning field scales linearly with ni. Variations of this theme include numerical simulations that incorporate random distributions of impurities (random pinning model).
Quantum models of CDW transport
Early quantum models included a soliton pair creation model by Maki and a proposal by John Bardeen that condensed CDW electrons tunnel coherently through a tiny pinning gap, fixed at ±kF unlike the Peierls gap. Maki's theory lacked a sharp threshold field and Bardeen only gave a phenomenological interpretation of the threshold field. However, a 1985 paper by Krive and Rozhavsky pointed out that nucleated solitons and antisolitons of charge ±q generate an internal electric field E* proportional to q/ε. The electrostatic energy (1/2)ε[E ± E*]2 prevents soliton tunneling for applied fields E less than a threshold ET = E*/2 without violating energy conservation. Although this Coulomb blockade threshold can be much smaller than the classical depinning field, it shows the same scaling with impurity concentration since the CDW's polarizability and dielectric response ε vary inversely with pinning strength.
Building on this picture, as well as a 2000 article on time-correlated soliton tunneling, a more recent quantum model proposes Josephson-like coupling (see Josephson effect) between complex order parameters associated with nucleated droplets of charged soliton dislocations on many parallel chains. Following Richard Feynman in The Feynman Lectures on Physics, Vol. III, Ch. 21, their time-evolution is described using the Schrödinger equation as an emergent classical equation. The narrow-band noise and related phenomena result from the periodic buildup of electrostatic charging energy and thus do not depend on the detailed shape of the washboard pinning potential. Both a soliton pair-creation threshold and a higher classical depinning field emerge from the model, which views the CDW as a sticky quantum fluid or deformable quantum solid with dislocations, a concept discussed by Philip Warren Anderson.
Aharonov–Bohm quantum interference effects
The first evidence for phenomena related to the Aharonov–Bohm effect in CDWs was reported in a 1997 paper, which described experiments showing oscillations of period h/2e in CDW (not normal electron) conductance versus magnetic flux through columnar defects in NbSe3. Later experiments, including some reported in 2012, show oscillations in CDW current versus magnetic flux, of dominant period h/2e, through TaS3 rings up to 85 μm in circumference above 77 K. This behavior is similar to that of the superconducting quantum interference device (see SQUID), lending credence to the idea that CDW electron transport is fundamentally quantum in nature (see quantum mechanics).
See also
Spin density wave
High-temperature superconductivity
References
Cited references
General references
Grüner, George. Density Waves in Solids. Addison-Wesley, 1994.
Review of experiments as of 2013 by Pierre Monceau. Electronic crystals: an experimental overview.
Superconductivity
Phases of matter
Condensed matter physics | Charge density wave | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,503 | [
"Electrical resistance and conductance",
"Physical quantities",
"Superconductivity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
606,874 | https://en.wikipedia.org/wiki/Einstein%E2%80%93Cartan%20theory | In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922.
Overview
Einstein–Cartan theory differs from general relativity in two ways:
(1) it is formulated within the framework of Riemann–Cartan geometry, which possesses a locally gauged Lorentz symmetry, while general relativity is formulated within the framework of Riemannian geometry, which does not;
(2) an additional set of equations are posed that relate torsion to spin.
This difference can be factored into
general relativity (Einstein–Hilbert) → general relativity (Palatini) → Einstein–Cartan
by first reformulating general relativity onto a Riemann–Cartan geometry, replacing the Einstein–Hilbert action over Riemannian geometry by the Palatini action over Riemann–Cartan geometry; and second, removing the zero torsion constraint from the Palatini action, which results in the additional set of equations for spin and torsion, as well as the addition of extra spin-related terms in the Einstein field equations themselves.
The theory of general relativity was originally formulated in the setting of Riemannian geometry by the Einstein–Hilbert action, out of which arise the Einstein field equations. At the time of its original formulation, there was no concept of Riemann–Cartan geometry. Nor was there a sufficient awareness of the concept of gauge symmetry to understand that Riemannian geometries do not possess the requisite structure to embody a locally gauged Lorentz symmetry, such as would be required to be able to express continuity equations and conservation laws for rotational and boost symmetries, or to describe spinors in curved spacetime geometries. The result of adding this infrastructure is a Riemann–Cartan geometry. In particular, to be able to describe spinors requires the inclusion of a spin structure, which suffices to produce such a geometry.
The chief difference between a Riemann–Cartan geometry and Riemannian geometry is that in the former, the affine connection is independent of the metric, while in the latter it is derived from the metric as the Levi-Civita connection, the difference between the two being referred to as the contorsion. In particular, the antisymmetric part of the connection (referred to as the torsion) is zero for Levi-Civita connections, as one of the defining conditions for such connections.
Because the contorsion can be expressed linearly in terms of the torsion, it is also possible to directly translate the Einstein–Hilbert action into a Riemann–Cartan geometry, the result being the Palatini action (see also Palatini variation). It is derived by rewriting the Einstein–Hilbert action in terms of the affine connection and then separately posing a constraint that forces both the torsion and contorsion to be zero, which thus forces the affine connection to be equal to the Levi-Civita connection. Because it is a direct translation of the action and field equations of general relativity, expressed in terms of the Levi-Civita connection, this may be regarded as the theory of general relativity, itself, transposed into the framework of Riemann–Cartan geometry.
Einstein–Cartan theory relaxes this condition and, correspondingly, relaxes general relativity's assumption that the affine connection have a vanishing antisymmetric part (torsion tensor). The action used is the same as the Palatini action, except that the constraint on the torsion is removed. This results in two differences from general relativity:
(1) the field equations are now expressed in terms of affine connection, rather than the Levi-Civita connection, and so have additional terms in Einstein's field equations involving the contorsion that are not present in the field equations derived from the Palatini formulation;
(2) an additional set of equations are now present which couple the torsion to the intrinsic angular momentum (spin) of matter, much in the same way in which the affine connection is coupled to the energy and momentum of matter.
In Einstein–Cartan theory, the torsion is now a variable in the principle of stationary action that is coupled to a curved spacetime formulation of spin (the spin tensor). These extra equations express the torsion linearly in terms of the spin tensor associated with the matter source, which entails that the torsion generally be non-zero inside matter.
A consequence of the linearity is that outside of matter there is zero torsion, so that the exterior geometry remains the same as what would be described in general relativity. The differences between Einstein–Cartan theory and general relativity (formulated either in terms of the Einstein–Hilbert action on Riemannian geometry or the Palatini action on Riemann–Cartan geometry) rest solely on what happens to the geometry inside matter sources. That is: "torsion does not propagate". Generalizations of the Einstein–Cartan action have been considered which allow for propagating torsion.
Because Riemann–Cartan geometries have Lorentz symmetry as a local gauge symmetry, it is possible to formulate the associated conservation laws. In particular, regarding the metric and torsion tensors as independent variables gives the correct generalization of the conservation law for the total (orbital plus intrinsic) angular momentum to the presence of the gravitational field.
History
The theory was first proposed by Élie Cartan in 1922 and expounded in the following few years. Albert Einstein became affiliated with the theory in 1928 during his unsuccessful attempt to match torsion to the electromagnetic field tensor as part of a unified field theory. This line of thought led him to the related but different theory of teleparallelism.
Dennis Sciama and Tom Kibble independently revisited the theory in the 1960s, and an important review was published in 1976.
Einstein–Cartan theory has been historically overshadowed by its torsion-free counterpart and other alternatives like Brans–Dicke theory because torsion seemed to add little predictive benefit at the expense of the tractability of its equations. Since the Einstein–Cartan theory is purely classical, it also does not fully address the issue of quantum gravity. In the Einstein–Cartan theory, the Dirac equation becomes nonlinear. Even though renowned physicists such as Steven Weinberg "never understood what is so important physically about the possibility of torsion in differential geometry", other physicists claim that theories with torsion are valuable.
The theory has indirectly influenced loop quantum gravity (and seems also to have influenced twistor theory).
Field equations
The Einstein field equations of general relativity can be derived by postulating the Einstein–Hilbert action to be the true action of spacetime and then varying that action with respect to the metric tensor.
The field equations of Einstein–Cartan theory come from exactly the same approach,
except that a general asymmetric affine connection is assumed rather than the symmetric Levi-Civita connection
(i.e., spacetime is assumed to have torsion in addition to curvature),
and then the metric and torsion are varied independently.
Let L_M represent the Lagrangian density of matter and L_G represent the Lagrangian density of the gravitational field. The Lagrangian density for the gravitational field in the Einstein–Cartan theory is proportional to the Ricci scalar:
L_G = (1/(2κ)) √(−g) R
where g is the determinant of the metric tensor, and κ = 8πG/c⁴ is a physical constant involving the gravitational constant and the speed of light. By Hamilton's principle, the variation of the total action S = ∫(L_G + L_M) d⁴x for the gravitational field and matter vanishes:
δS = 0.
The variation with respect to the metric tensor yields the Einstein equations:
R_ab − (1/2) R g_ab = κ P_ab
where R_ab is the Ricci tensor and P_ab is the canonical stress–energy–momentum tensor.
The Ricci tensor is no longer symmetric because the connection contains a nonzero torsion tensor; therefore, the right-hand side of the equation cannot be symmetric either, implying that must include an asymmetric contribution that can be shown to be related to the spin tensor. This canonical energy–momentum tensor is related to the more familiar symmetric energy–momentum tensor by the Belinfante–Rosenfeld procedure.
The variation with respect to the torsion tensor yields the Cartan spin connection equations:
T_ab^c + g_a^c T_db^d − g_b^c T_da^d = κ s_ab^c
where s_ab^c is the spin tensor. Because the torsion equation is an algebraic constraint rather than a partial differential equation, the torsion field does not propagate as a wave, and vanishes outside of matter. Therefore, in principle the torsion can be algebraically eliminated from the theory in favor of the spin tensor, which generates an effective "spin–spin" nonlinear self-interaction inside matter. Torsion is equal to its source term and can be replaced by a boundary or a topological structure with a throat such as a "wormhole".
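Since the two variations above are easy to conflate, here is the whole variational setup in one display, in LaTeX. It restates the equations reconstructed above; the trace structure of the torsion equation follows the standard presentation and is quoted here as a hedged reconstruction.

```latex
% Sketch of the Einstein-Cartan variational setup (index conventions as above).
S = \int \left( \frac{1}{2\kappa}\sqrt{-g}\,R + \mathcal{L}_{\mathrm{M}} \right) d^4x,
\qquad
\frac{\delta S}{\delta g^{ab}} = 0 \;\Rightarrow\;
  R_{ab} - \tfrac{1}{2} R\, g_{ab} = \kappa\, P_{ab},
\qquad
\frac{\delta S}{\delta T_{ab}{}^{c}} = 0 \;\Rightarrow\;
  T_{ab}{}^{c} + g_a{}^{c}\, T_{db}{}^{d} - g_b{}^{c}\, T_{da}{}^{d}
  = \kappa\, s_{ab}{}^{c}.
```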
Avoidance of singularities
Recently, interest in Einstein–Cartan theory has been driven toward cosmological implications, most importantly, the avoidance of a gravitational singularity at the beginning of the universe, such as in the black hole cosmology, static universe, or cyclic model.
Singularity theorems which are premised on and formulated within the setting of Riemannian geometry (e.g. Penrose–Hawking singularity theorems) need not hold in Riemann–Cartan geometry. Consequently, Einstein–Cartan theory is able to avoid the general-relativistic problem of the singularity at the Big Bang. The minimal coupling between torsion and Dirac spinors generates an effective nonlinear spin–spin self-interaction, which becomes significant inside fermionic matter at extremely high densities. Such an interaction is conjectured to replace the singular Big Bang with a cusp-like Big Bounce at a minimum but finite scale factor, before which the observable universe was contracting. This scenario also explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic, providing a physical alternative to cosmic inflation. Torsion allows fermions to be spatially extended instead of "pointlike", which helps to avoid the formation of singularities such as black holes, removes the ultraviolet divergence in quantum field theory, and leads to the toroidal ring model of electrons. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, instead, the collapse reaches a bounce and forms a regular Einstein–Rosen bridge (wormhole) to a new, growing universe on the other side of the event horizon; pair production by the gravitational field after the bounce, when torsion is still strong, generates a finite period of inflation.
See also
Alternatives to general relativity
Metric-affine gravitation theory
Gauge theory gravity
Loop quantum gravity
References
Further reading
Lord, E. A. (1976). "Tensors, Relativity and Cosmology" (McGraw-Hill).
de Sabbata, V. and Gasperini, M. (1985). "Introduction to Gravitation" (World Scientific).
de Sabbata, V. and Sivaram, C. (1994). "Spin and Torsion in Gravitation" (World Scientific).
Theories of gravity
Albert Einstein | Einstein–Cartan theory | [
"Physics"
] | 2,332 | [
"Theoretical physics",
"Theories of gravity"
] |
606,970 | https://en.wikipedia.org/wiki/Minimal%20Supersymmetric%20Standard%20Model | The Minimal Supersymmetric Standard Model (MSSM) is an extension to the Standard Model that realizes supersymmetry. The MSSM is the minimal supersymmetric model in that it considers only "the [minimum] number of new particle states and new interactions consistent with reality". Supersymmetry pairs bosons with fermions, so every Standard Model particle has a (yet undiscovered) superpartner. If discovered, such superparticles could be candidates for dark matter, and could provide evidence for grand unification or the viability of string theory. The failure to find evidence for the MSSM using the Large Hadron Collider has strengthened an inclination to abandon it.
Background
The MSSM was originally proposed in 1981 to stabilize the weak scale, solving the hierarchy problem. The Higgs boson mass of the Standard Model is unstable to quantum corrections, and the theory predicts that the weak scale should be much weaker than what is observed. In the MSSM, the Higgs boson has a fermionic superpartner, the Higgsino, that has the same mass as it would if supersymmetry were an exact symmetry. Because fermion masses are radiatively stable, the Higgs mass inherits this stability. However, in the MSSM there is a need for more than one Higgs field, as described below.
The only unambiguous way to claim discovery of supersymmetry is to produce superparticles in the laboratory. Because superparticles are expected to be 100 to 1000 times heavier than the proton, it requires a huge amount of energy to make these particles that can only be achieved at particle accelerators. The Tevatron was actively looking for evidence of the production of supersymmetric particles before it was shut down on 30 September 2011. Most physicists believe that supersymmetry must be discovered at the LHC if it is responsible for stabilizing the weak scale. There are five classes of particle that superpartners of the Standard Model fall into: squarks, gluinos, charginos, neutralinos, and sleptons. These superparticles have their interactions and subsequent decays described by the MSSM and each has characteristic signatures.
The MSSM imposes R-parity to explain the stability of the proton. It adds supersymmetry breaking by introducing explicit soft supersymmetry breaking operators into the Lagrangian that is communicated to it by some unknown (and unspecified) dynamics. This means that there are 120 new parameters in the MSSM. Most of these parameters lead to unacceptable phenomenology such as large flavor changing neutral currents or large electric dipole moments for the neutron and electron. To avoid these problems, the MSSM takes all of the soft supersymmetry breaking to be diagonal in flavor space and for all of the new CP violating phases to vanish.
Theoretical motivations
There are three principal motivations for the MSSM over other theoretical extensions of the Standard Model, namely:
Naturalness
Gauge coupling unification
Dark Matter
These motivations come out without much effort and they are the primary reasons why the MSSM is the leading candidate for a new theory to be discovered at collider experiments such as the Tevatron or the LHC.
Naturalness
The original motivation for proposing the MSSM was to stabilize the Higgs mass to radiative corrections that are quadratically divergent in the Standard Model (the hierarchy problem). In supersymmetric models, scalars are related to fermions and have the same mass. Since fermion masses are logarithmically divergent, scalar masses inherit the same radiative stability. The Higgs vacuum expectation value (VEV) is related to the negative scalar mass in the Lagrangian. In order for the radiative corrections to the Higgs mass to not be dramatically larger than the actual value, the mass of the superpartners of the Standard Model should not be significantly heavier than the Higgs VEV – roughly 100 GeV. In 2012, the Higgs particle was discovered at the LHC, and its mass was found to be 125–126 GeV.
Gauge-coupling unification
If the superpartners of the Standard Model are near the TeV scale, then the measured gauge couplings of the three gauge groups unify at high energies. The one-loop beta functions for the MSSM gauge couplings are given by
dα_i^(−1)/d ln μ = −b_i/(2π), with (b₁, b₂, b₃) = (33/5, 1, −3),
where α₁ is measured in SU(5) normalization (a factor of 5/3 different from the Standard Model's normalization), as predicted by Georgi–Glashow SU(5).
The condition for gauge coupling unification at one loop is that the following expression is satisfied:
(α₃^(−1) − α₂^(−1)) / (α₂^(−1) − α₁^(−1)) = (b₃ − b₂) / (b₂ − b₁) = 5/7.
Remarkably, this is precisely satisfied to within experimental errors in the values of the measured couplings. There are two-loop corrections, and both TeV-scale and GUT-scale threshold corrections, that alter this condition on gauge coupling unification, and the results of more extensive calculations reveal that gauge coupling unification occurs to an accuracy of 1%, though this is about 3 standard deviations from the theoretical expectations.
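This one-loop statement is easy to check numerically. The sketch below runs the three inverse couplings from m_Z upward with the MSSM beta coefficients; the input values at m_Z are rounded, approximate figures (with α₁ in SU(5) normalization), and the script is a sketch of the one-loop statement, not a precision analysis.

```python
import numpy as np

# One-loop running of the inverse gauge couplings in the MSSM:
# alpha_i^-1(mu) = alpha_i^-1(mZ) - (b_i / 2 pi) * ln(mu / mZ).
b = np.array([33.0 / 5.0, 1.0, -3.0])        # MSSM one-loop coefficients
alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # approx. values at mZ
mz = 91.2                                    # GeV

def alpha_inv(mu_gev):
    return alpha_inv_mz - b / (2.0 * np.pi) * np.log(mu_gev / mz)

for mu in [1e3, 1e8, 1e16, 2e16]:
    a1, a2, a3 = alpha_inv(mu)
    print(f"mu = {mu:8.1e} GeV: {a1:6.2f} {a2:6.2f} {a3:6.2f}")
# Near mu ~ 2e16 GeV the three values approach each other (all ~24),
# which is the usual one-loop MSSM unification estimate.
```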
This prediction is generally considered as indirect evidence for both the MSSM and SUSY GUTs. Gauge coupling unification does not necessarily imply grand unification and there exist other mechanisms to reproduce gauge coupling unification. However, if superpartners are found in the near future, the apparent success of gauge coupling unification would suggest that a supersymmetric grand unified theory is a promising candidate for high scale physics.
Dark matter
If R-parity is preserved, then the lightest superparticle (LSP) of the MSSM is stable and is a Weakly interacting massive particle (WIMP) – i.e. it does not have electromagnetic or strong interactions. This makes the LSP a good dark matter candidate, and falls into the category of cold dark matter (CDM).
Predictions of the MSSM regarding hadron colliders
The Tevatron and LHC have active experimental programs searching for supersymmetric particles. Since both of these machines are hadron colliders – proton antiproton for the Tevatron and proton proton for the LHC – they are best suited to searching for strongly interacting particles. Therefore, most experimental signatures involve production of squarks or gluinos. Since the MSSM has R-parity, the lightest supersymmetric particle is stable, and after the squarks and gluinos decay, each decay chain will contain one LSP that will leave the detector unseen. This leads to the generic prediction that the MSSM will produce a 'missing energy' signal from these particles leaving the detector.
Neutralinos
There are four neutralinos that are fermions and are electrically neutral, the lightest of which is typically stable. They are typically labeled χ̃⁰₁, χ̃⁰₂, χ̃⁰₃ and χ̃⁰₄, in order of increasing mass (although sometimes Ñ₁, ..., Ñ₄ is used instead). These four states are mixtures of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical with its antiparticle. Because these particles only interact with the weak vector bosons, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade decays end up decaying into this particle which leaves the detector unseen and its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a Z0 to a lighter neutralino or through a W± to a chargino. Thus a typical decay is
χ̃02 → χ̃01 + Z0 → Missing energy + ℓ+ + ℓ−
χ̃02 → χ̃1± + W∓ → χ̃01 + W± + W∓ → Missing energy + ℓ+ + ℓ−
Note that the “Missing energy” byproduct represents the mass-energy of the neutralino (χ̃01) and, in the second line, the mass-energy of a neutrino–antineutrino pair (ν + ν̄) produced with the lepton and antilepton in the final decay, all of which are undetectable in individual reactions with current technology.
The mass splittings between the different neutralinos will dictate which patterns of decays are allowed.
Charginos
There are two charginos that are fermions and are electrically charged. They are typically labeled χ̃1± and χ̃2± (although sometimes C̃1± and C̃2± are used instead). The heavier chargino can decay through a Z0 to the lighter chargino. Both can decay through a W± to a neutralino.
Squarks
The squarks are the scalar superpartners of the quarks and there is one version for each Standard Model quark. Due to phenomenological constraints from flavor changing neutral currents, typically the lighter two generations of squarks have to be nearly the same in mass and therefore are not given distinct names. The superpartners of the top and bottom quark can be split from the lighter squarks and are called stop and sbottom.
In the other direction, there may be a remarkable left–right mixing of the stops and of the sbottoms because of the high masses of the partner quarks top and bottom: in the (t̃L, t̃R) basis the stop mass-squared matrix has off-diagonal entries mt(At − μ cot β), so the mass eigenstates are mixtures of the left- and right-handed stops.
A similar story holds for the sbottoms, with their own parameters Ab and μ tan β.
Squarks can be produced through strong interactions and therefore are easily produced at hadron colliders. They decay to quarks and neutralinos or charginos which further decay. In R-parity conserving scenarios, squarks are pair produced and therefore a typical signal is
2 jets + missing energy
2 jets + 2 leptons + missing energy
Gluinos
Gluinos are Majorana fermionic partners of the gluon which means that they are their own antiparticles. They interact strongly and therefore can be produced significantly at the LHC. They can only decay to a quark and a squark and thus a typical gluino signal is
4 jets + Missing energy
Because gluinos are Majorana, they can decay to either a quark + anti-squark or an anti-quark + squark with equal probability. Therefore, pairs of gluinos can decay to
4 jets + ℓ± ℓ± (same-sign di-leptons) + Missing energy
This is a distinctive signature because such same-sign di-lepton events have very little background in the Standard Model.
Sleptons
Sleptons are the scalar partners of the leptons of the Standard Model. They are not strongly interacting and therefore are not produced very often at hadron colliders unless they are very light.
Because of the high mass of the tau lepton there will be left-right mixing of the stau similar to that of stop and sbottom (see above).
Sleptons will typically be found in decays of charginos and neutralinos if they are light enough to be a decay product.
MSSM fields
Fermions have bosonic superpartners (called sfermions), and bosons have fermionic superpartners (called bosinos). For most of the Standard Model particles, doubling is very straightforward. However, for the Higgs boson, it is more complicated.
A single Higgsino (the fermionic superpartner of the Higgs boson) would lead to a gauge anomaly and would cause the theory to be inconsistent. However, if two Higgsinos are added, there is no gauge anomaly. The simplest theory is one with two Higgsinos and therefore two scalar Higgs doublets.
Another reason for having two scalar Higgs doublets rather than one is in order to have Yukawa couplings between the Higgs and both down-type quarks and up-type quarks; these are the terms responsible for the quarks' masses. In the Standard Model the down-type quarks couple to the Higgs field (which has Y = −1/2) and the up-type quarks to its complex conjugate (which has Y = +1/2). However, in a supersymmetric theory this is not allowed, because the superpotential must be a holomorphic function of the chiral superfields and therefore cannot contain the complex conjugate field; two types of Higgs fields are consequently needed.
MSSM superfields
In supersymmetric theories, every field and its superpartner can be written together as a superfield. The superfield formulation of supersymmetry is very convenient to write down manifestly supersymmetric theories (i.e. one does not have to tediously check that the theory is supersymmetric term by term in the Lagrangian). The MSSM contains vector superfields associated with the Standard Model gauge groups which contain the vector bosons and associated gauginos. It also contains chiral superfields for the Standard Model fermions and Higgs bosons (and their respective superpartners).
MSSM Higgs mass
The MSSM Higgs mass is a prediction of the Minimal Supersymmetric Standard Model. The mass of the lightest Higgs boson is set by the Higgs quartic coupling. Quartic couplings are not soft supersymmetry-breaking parameters, since they would lead to a quadratic divergence of the Higgs mass. Furthermore, there are no supersymmetric parameters available to make the Higgs mass a free parameter in the MSSM (though it is one in non-minimal extensions). This means that the Higgs mass is a prediction of the MSSM. The LEP II experiments placed a lower limit on the Higgs mass of 114.4 GeV. This lower limit is significantly above where the MSSM would typically predict it to be, but does not rule out the MSSM; the discovery of the Higgs with a mass of 125 GeV is within the maximal upper bound of approximately 130 GeV to which loop corrections within the MSSM can raise the Higgs mass. Proponents of the MSSM point out that a Higgs mass within this upper bound is a successful prediction, albeit one pointing to more fine-tuning than expected.
Formulas
The only supersymmetry-preserving operators that create a quartic coupling for the Higgs in the MSSM arise from the D-terms of the SU(2) and U(1) gauge sector, and the magnitude of the quartic coupling is set by the size of the gauge couplings.
This leads to the prediction that the mass of the Standard Model-like Higgs (the scalar that couples approximately to the VEV) is limited to be less than the Z mass:
mh^2 ≤ mZ^2 cos^2(2β).
Since supersymmetry is broken, there are radiative corrections to the quartic coupling that can increase the Higgs mass. These dominantly arise from the 'top sector':
Δmh^2 ≈ (3mt^4 / 4π^2v^2) · ln(mt̃^2 / mt^2),
where mt is the top mass and mt̃ is the mass of the top squark. This result can be interpreted as the RG running of the Higgs quartic coupling from the scale of supersymmetry down to the top mass—however, since the top squark mass should be relatively close to the top mass, this is usually a fairly modest contribution, raising the Higgs mass only to roughly the LEP II bound of 114 GeV before the top squark becomes too heavy.
Finally, there is a contribution from the top squark A-terms:
Δmh^2 ≈ (3mt^4 / 4π^2v^2) · (Xt^2/mt̃^2)(1 − Xt^2/(12mt̃^2)), with Xt = At − μ cot β,
where Xt/mt̃ is a dimensionless number. This contributes an additional term to the Higgs mass at loop level, but is not logarithmically enhanced. By pushing Xt^2 towards 6mt̃^2 (known as 'maximal mixing'), it is possible to push the Higgs mass to 125 GeV without decoupling the top squark or adding new dynamics to the MSSM.
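A minimal numerical sketch of the expressions above, with illustrative inputs (a running top mass, a 1 TeV stop, tan β = 10); the one-loop leading-log formula overshoots full higher-order calculations by several GeV, so this only exhibits the parametric dependence:

```python
# Sketch: tree-level bound plus one-loop leading-log stop corrections to
# the MSSM Higgs mass. All inputs are illustrative assumptions.
import math

mZ, v = 91.19, 174.0        # GeV; v normalized so that m_t = y_t * v
mt = 165.0                  # assumed running top mass (GeV)
tanb, mstop = 10.0, 1000.0  # assumed SUSY parameters (GeV)
Xt2_over_ms2 = 0.0          # Xt^2 / mstop^2: 0 = no mixing, 6 = maximal mixing

cos2b = (1.0 - tanb**2) / (1.0 + tanb**2)
tree = (mZ * cos2b) ** 2                   # tree-level contribution (GeV^2)

coef = 3.0 * mt**4 / (4.0 * math.pi**2 * v**2)
loop = coef * (math.log(mstop**2 / mt**2)
               + Xt2_over_ms2 * (1.0 - Xt2_over_ms2 / 12.0))

print(f"m_h ~ {math.sqrt(tree + loop):.1f} GeV")  # ~121 GeV for these inputs
```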
As the Higgs was found at around 125 GeV (along with no other superparticles) at the LHC, this strongly hints at new dynamics beyond the MSSM, such as the 'Next-to-Minimal Supersymmetric Standard Model' (NMSSM), and suggests some connection to the little hierarchy problem.
MSSM Lagrangian
The Lagrangian for the MSSM contains several pieces.
The first is the Kähler potential for the matter and Higgs fields which produces the kinetic terms for the fields.
The second piece is the gauge field superpotential that produces the kinetic terms for the gauge bosons and gauginos.
The next term is the superpotential for the matter and Higgs fields. These terms produce the Yukawa couplings for the Standard Model fermions and also the mass term for the Higgsinos. After imposing R-parity, the renormalizable, gauge-invariant operators in the superpotential are (in one common convention)
W = μ Hu Hd + yu ū Q Hu − yd d̄ Q Hd − ye ē L Hd,
where the Yukawa couplings yu, yd, ye are 3×3 matrices in flavor space.
The constant term is unphysical in global supersymmetry (as opposed to supergravity).
Soft SUSY breaking
The last piece of the MSSM Lagrangian is the soft supersymmetry-breaking Lagrangian. The vast majority of the parameters of the MSSM are in the susy-breaking Lagrangian. The soft susy-breaking terms are divided into roughly three pieces.
The first are the gaugino masses,
−(1/2) Ma λa λa + h.c.,
where the λa are the gauginos and Ma is different for the wino, bino and gluino.
The next are the soft masses for the scalar fields,
−(m^2)ij φi† φj,
where the φ are any of the scalars in the MSSM and the (m^2)ij are Hermitian matrices for the squarks and sleptons of a given set of gauge quantum numbers. The eigenvalues of these matrices are actually the masses squared, rather than the masses.
There are also the A and B terms which, mirroring the structure of the superpotential, are given by
B μ Hu Hd + ũ* au Q̃ Hu − d̃* ad Q̃ Hd − ẽ* ae L̃ Hd + h.c.
The A terms are complex 3×3 matrices in flavor space, much as the scalar masses are.
Although not often mentioned with regard to soft terms, to be consistent with observation one must also include gravitino and goldstino soft masses, set by the gravitino mass m3/2.
The reason these soft terms are not often mentioned is that they arise through local, rather than global, supersymmetry; nevertheless they are required, because if the goldstino were massless it would contradict observation. The goldstino mode is eaten by the gravitino to become massive, through a gauge shift, which also absorbs the would-be "mass" term of the goldstino.
Problems
There are several problems with the MSSM—most of them related to understanding its parameters.
The mu problem: The Higgsino mass parameter μ appears as the following term in the superpotential: μHuHd. It should have the same order of magnitude as the electroweak scale, many orders of magnitude smaller than that of the Planck scale, which is the natural cutoff scale. The soft supersymmetry breaking terms should also be of the same order of magnitude as the electroweak scale. This brings about a problem of naturalness: why are these scales so much smaller than the cutoff scale yet happen to fall so close to each other?
Flavor universality of soft masses and A-terms: since no flavor mixing beyond that predicted by the Standard Model has been discovered so far, the coefficients of the additional terms in the MSSM Lagrangian must be, at least approximately, flavor invariant (i.e. the same for all flavors).
Smallness of CP-violating phases: since no CP violation beyond that predicted by the Standard Model has been discovered so far, the additional terms in the MSSM Lagrangian must be, at least approximately, CP invariant, so that their CP-violating phases are small.
Theories of supersymmetry breaking
A large amount of theoretical effort has been spent trying to understand the mechanism for soft supersymmetry breaking that produces the desired properties in the superpartner masses and interactions. The three most extensively studied mechanisms are:
Gravity-mediated supersymmetry breaking
Gravity-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through gravitational interactions. It was the first method proposed to communicate supersymmetry breaking. In gravity-mediated supersymmetry-breaking models, there is a part of the theory that only interacts with the MSSM through gravitational interaction. This hidden sector of the theory breaks supersymmetry. Through the supersymmetric version of the Higgs mechanism, the gravitino, the supersymmetric version of the graviton, acquires a mass. After the gravitino has a mass, gravitational radiative corrections to soft masses are incompletely cancelled beneath the gravitino's mass.
It is currently believed that it is not generic to have a sector completely decoupled from the MSSM, and that there should be higher-dimension operators that couple the different sectors together, suppressed by the Planck scale. These operators give as large a contribution to the soft supersymmetry-breaking masses as the gravitational loops; therefore, today people usually take gravity mediation to mean gravitational-sized direct interactions between the hidden sector and the MSSM.
mSUGRA stands for minimal supergravity. The construction of a realistic model of interactions within the supergravity framework, where supersymmetry breaking is communicated through the supergravity interactions, was carried out by Ali Chamseddine, Richard Arnowitt, and Pran Nath in 1982. mSUGRA is one of the most widely investigated models of particle physics due to its predictive power, requiring only 4 input parameters and a sign to determine the low-energy phenomenology from the scale of grand unification. The most widely used set of parameters is:
m0 – the common scalar mass at the GUT scale
m1/2 – the common gaugino mass at the GUT scale
A0 – the common trilinear coupling at the GUT scale
tan β – the ratio of the vacuum expectation values of the two Higgs doublets
sign(μ) – the sign of the Higgsino mass parameter
Gravity-mediated supersymmetry breaking was assumed to be flavor universal because of the universality of gravity; however, in 1986 Hall, Kostelecky, and Raby showed that the Planck-scale physics necessary to generate the Standard Model Yukawa couplings spoils the universality of the supersymmetry breaking.
Gauge-mediated supersymmetry breaking (GMSB)
Gauge-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through the Standard Model's gauge interactions. Typically, a hidden sector breaks supersymmetry and communicates it to massive messenger fields that are charged under the Standard Model. These messenger fields induce a gaugino mass at one loop, which is then transmitted to the scalar superpartners at two loops. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.5 GeV. With the Higgs having been discovered at 125 GeV, this model requires stops heavier than 2 TeV.
Anomaly-mediated supersymmetry breaking (AMSB)
Anomaly-mediated supersymmetry breaking is a special type of gravity-mediated supersymmetry breaking in which supersymmetry breaking is communicated to the supersymmetric Standard Model through the conformal anomaly. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.0 GeV. With the Higgs having been discovered at 125 GeV, this scenario requires stops heavier than 2 TeV.
Phenomenological MSSM (pMSSM)
The unconstrained MSSM has more than 100 parameters in addition to the Standard Model parameters. This makes any phenomenological analysis (e.g. finding regions in parameter space consistent with observed data) impractical. Under the following three assumptions:
no new source of CP-violation
no Flavour Changing Neutral Currents
first and second generation universality
one can reduce the number of additional parameters to the following 19 quantities of the phenomenological MSSM (pMSSM), as enumerated in the sketch below:
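One common enumeration of those 19 quantities follows; the grouping (degenerate first- and second-generation sfermion masses) is standard, but the exact labels are a convention that varies between analyses:

```python
# Sketch: the 19 pMSSM parameters in one common convention.
PMSSM_PARAMETERS = {
    "tan_beta": "ratio of the two Higgs vacuum expectation values",
    "M_A": "pseudoscalar Higgs mass",
    "mu": "Higgsino mass parameter",
    "M_1": "bino mass", "M_2": "wino mass", "M_3": "gluino mass",
    # ten sfermion soft masses (1st/2nd generation taken degenerate)
    "m_qL": "1st/2nd gen. left squark",  "m_uR": "1st/2nd gen. right up-squark",
    "m_dR": "1st/2nd gen. right down-squark",
    "m_lL": "1st/2nd gen. left slepton", "m_eR": "1st/2nd gen. right slepton",
    "m_q3L": "3rd gen. left squark",     "m_tR": "right stop",
    "m_bR": "right sbottom",
    "m_l3L": "3rd gen. left slepton",    "m_tauR": "right stau",
    # three third-generation trilinear couplings
    "A_t": "top trilinear", "A_b": "bottom trilinear", "A_tau": "tau trilinear",
}
assert len(PMSSM_PARAMETERS) == 19
```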
The large parameter space of the pMSSM makes searches extremely challenging and makes the model difficult to exclude.
Experimental tests
Terrestrial detectors
XENON1T (a dark matter WIMP detector, commissioned in 2016) is expected to explore and test supersymmetry candidates such as the CMSSM.
See also
Desert (particle physics)
References
External links
MSSM on arxiv.org
Particle Data Group review of MSSM and search for MSSM predicted particles
Supersymmetric quantum field theory
Physics beyond the Standard Model | Minimal Supersymmetric Standard Model | [
"Physics"
] | 4,992 | [
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Particle physics",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
607,286 | https://en.wikipedia.org/wiki/Hilbert%27s%20program | In mathematics, Hilbert's program, formulated by German mathematician David Hilbert in the early 1920s, was a proposed solution to the foundational crisis of mathematics, when early attempts to clarify the foundations of mathematics were found to suffer from paradoxes and inconsistencies. As a solution, Hilbert proposed to ground all existing theories to a finite, complete set of axioms, and provide a proof that these axioms were consistent. Hilbert proposed that the consistency of more complicated systems, such as real analysis, could be proven in terms of simpler systems. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic.
Gödel's incompleteness theorems, published in 1931, showed that Hilbert's program was unattainable for key areas of mathematics. In his first theorem, Gödel showed that any consistent system with a computable set of axioms which is capable of expressing arithmetic can never be complete: it is possible to construct a statement that can be shown to be true, but that cannot be derived from the formal rules of the system. In his second theorem, he showed that such a system could not prove its own consistency, so it certainly cannot be used to prove the consistency of anything stronger. This refuted Hilbert's assumption that a finitistic system could be used to prove its own consistency, let alone the consistency of stronger theories.
Statement of Hilbert's program
The main goal of Hilbert's program was to provide secure foundations for all mathematics. In particular, this should include:
A formulation of all mathematics; in other words all mathematical statements should be written in a precise formal language, and manipulated according to well defined rules.
Completeness: a proof that all true mathematical statements can be proved in the formalism.
Consistency: a proof that no contradiction can be obtained in the formalism of mathematics. This consistency proof should preferably use only "finitistic" reasoning about finite mathematical objects.
Conservation: a proof that any result about "real objects" obtained using reasoning about "ideal objects" (such as uncountable sets) can be proved without using ideal objects.
Decidability: there should be an algorithm for deciding the truth or falsity of any mathematical statement.
Gödel's incompleteness theorems
Kurt Gödel showed that most of the goals of Hilbert's program were impossible to achieve, at least if interpreted in the most obvious way. Gödel's second incompleteness theorem shows that any consistent theory powerful enough to encode addition and multiplication of integers cannot prove its own consistency. This presents a challenge to Hilbert's program:
It is not possible to formalize all true mathematical statements within a formal system, as any attempt at such a formalism will omit some true mathematical statements. There is no complete, consistent extension of even Peano arithmetic based on a computably enumerable set of axioms.
A theory such as Peano arithmetic cannot even prove its own consistency, so a restricted "finitistic" subset of it certainly cannot prove the consistency of more powerful theories such as set theory.
There is no algorithm to decide the truth (or provability) of statements in any consistent extension of Peano arithmetic. Strictly speaking, this negative solution to the Entscheidungsproblem appeared a few years after Gödel's theorem, because at the time the notion of an algorithm had not been precisely defined.
Hilbert's program after Gödel
Many current lines of research in mathematical logic, such as proof theory and reverse mathematics, can be viewed as natural continuations of Hilbert's original program. Much of it can be salvaged by changing its goals slightly (Zach 2005), and with the following modifications some of it was successfully completed:
Although it is not possible to formalize all mathematics, it is possible to formalize essentially all the mathematics that anyone uses. In particular Zermelo–Fraenkel set theory, combined with first-order logic, gives a satisfactory and generally accepted formalism for almost all current mathematics.
Although it is not possible to prove completeness for systems that can express at least the Peano arithmetic (or, more generally, that have a computable set of axioms), it is possible to prove forms of completeness for many other interesting systems. An example of a non-trivial theory for which completeness has been proved is the theory of algebraically closed fields of given characteristic.
The question of whether there are finitary consistency proofs of strong theories is difficult to answer, mainly because there is no generally accepted definition of a "finitary proof". Most mathematicians in proof theory seem to regard finitary mathematics as being contained in Peano arithmetic, and in this case it is not possible to give finitary proofs of reasonably strong theories. On the other hand, Gödel himself suggested the possibility of giving finitary consistency proofs using finitary methods that cannot be formalized in Peano arithmetic, so he seems to have had a more liberal view of what finitary methods might be allowed. A few years later, Gentzen gave a consistency proof for Peano arithmetic. The only part of this proof that was not clearly finitary was a certain transfinite induction up to the ordinal ε0. If this transfinite induction is accepted as a finitary method, then one can assert that there is a finitary proof of the consistency of Peano arithmetic. More powerful subsets of second-order arithmetic have been given consistency proofs by Gaisi Takeuti and others, and one can again debate about exactly how finitary or constructive these proofs are. (The theories that have been proved consistent by these methods are quite strong, and include most "ordinary" mathematics.)
Although there is no algorithm for deciding the truth of statements in Peano arithmetic, there are many interesting and non-trivial theories for which such algorithms have been found. For example, Tarski found an algorithm that can decide the truth of any statement in analytic geometry (more precisely, he proved that the theory of real closed fields is decidable). Given the Cantor–Dedekind axiom, this algorithm can be regarded as an algorithm to decide the truth of any statement in Euclidean geometry. This is substantial as few people would consider Euclidean geometry a trivial theory.
See also
Grundlagen der Mathematik
Foundational crisis of mathematics
References
G. Gentzen, 1936/1969. Die Widerspruchfreiheit der reinen Zahlentheorie. Mathematische Annalen 112:493–565. Translated as 'The consistency of arithmetic', in The collected papers of Gerhard Gentzen, M. E. Szabo (ed.), 1969.
D. Hilbert. 'Die Grundlegung der elementaren Zahlenlehre'. Mathematische Annalen 104:485–94. Translated by W. Ewald as 'The Grounding of Elementary Number Theory', pp. 266–273 in Mancosu (ed., 1998) From Brouwer to Hilbert: The debate on the foundations of mathematics in the 1920s, Oxford University Press. New York.
S.G. Simpson, 1988. Partial realizations of Hilbert's program (pdf). Journal of Symbolic Logic 53:349–363.
R. Zach, 2006. Hilbert's Program Then and Now. Philosophy of Logic 5:411–447, arXiv:math/0508572 [math.LO].
External links
Mathematical logic
Proof theory
Program | Hilbert's program | [
"Mathematics"
] | 1,553 | [
"Hilbert's problems",
"Mathematical logic",
"Mathematical problems",
"Proof theory"
] |
607,495 | https://en.wikipedia.org/wiki/Freezing-point%20depression | Freezing-point depression is a drop in the maximum temperature at which a substance freezes, caused when a smaller amount of another, non-volatile substance is added. Examples include adding salt into water (used in ice cream makers and for de-icing roads), alcohol in water, ethylene or propylene glycol in water (used in antifreeze in cars), adding copper to molten silver (used to make solder that flows at a lower temperature than the silver pieces being joined), or the mixing of two solids such as impurities into a finely powdered drug.
In all cases, the substance added/present in smaller amounts is considered the solute, while the original substance present in larger quantity is thought of as the solvent. The resulting liquid solution or solid–solid mixture has a lower freezing point than the pure solvent or solid because the chemical potential of the solvent in the mixture is lower than that of the pure solvent, the difference between the two being proportional to the natural logarithm of the mole fraction. In a similar manner, the chemical potential of the vapor above the solution is lower than that above a pure solvent, which results in boiling-point elevation. Freezing-point depression is what causes sea water (a mixture of salt and other compounds in water) to remain liquid at temperatures below 0 °C (32 °F), the freezing point of pure water.
Explanation
Using vapour pressure
The freezing point is the temperature at which the liquid solvent and solid solvent are at equilibrium, so that their vapour pressures are equal. When a non-volatile solute is added to a volatile liquid solvent, the solution vapour pressure will be lower than that of the pure solvent. As a result, the solid will reach equilibrium with the solution at a lower temperature than with the pure solvent. This explanation in terms of vapour pressure is equivalent to the argument based on chemical potential, since the chemical potential of a vapour is logarithmically related to its pressure. All of the colligative properties result from a lowering of the chemical potential of the solvent in the presence of a solute. This lowering is an entropy effect. The greater randomness of the solution (as compared to the pure solvent) acts in opposition to freezing, so that a lower temperature must be reached, over a broader temperature range, before equilibrium between the liquid solution and the solid solvent is achieved. Melting point determinations are commonly exploited in organic chemistry to aid in identifying substances and to ascertain their purity.
Due to concentration and entropy
In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze (a lower concentration of solvent exists in the solution than in the pure solvent). Re-establishment of equilibrium is achieved at a lower temperature, at which the rate of freezing becomes equal to the rate of melting. The solute does not occlude or prevent the solvent from solidifying; it simply dilutes it, so there is a reduced probability of a solvent molecule freezing at any given moment.
At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well.
Uses
The phenomenon of freezing-point depression has many practical uses. The radiator fluid in an automobile is a mixture of water and ethylene glycol. The freezing-point depression prevents radiators from freezing in winter. Road salting takes advantage of this effect to lower the freezing point of the ice it is placed on. Lowering the freezing point allows the street ice to melt at lower temperatures, preventing the accumulation of dangerous, slippery ice. Commonly used sodium chloride can depress the freezing point of water to about −21 °C (−6 °F). If the road surface temperature is lower, NaCl becomes ineffective and other salts are used, such as calcium chloride, magnesium chloride or a mixture of many. These salts are somewhat aggressive to metals, especially iron, so in airports safer media such as sodium formate, potassium formate, sodium acetate, and potassium acetate are used instead.
Freezing-point depression is used by some organisms that live in extreme cold. Such creatures have evolved means through which they can produce a high concentration of various compounds such as sorbitol and glycerol. This elevated concentration of solute decreases the freezing point of the water inside them, preventing the organism from freezing solid even as the water around them freezes, or as the air around them becomes very cold. Examples of organisms that produce antifreeze compounds include some species of arctic-living fish such as the rainbow smelt, which produces glycerol and other molecules to survive in frozen-over estuaries during the winter months. In other animals, such as the spring peeper frog (Pseudacris crucifer), the molality is increased temporarily as a reaction to cold temperatures. In the case of the peeper frog, freezing temperatures trigger a large-scale breakdown of glycogen in the frog's liver and subsequent release of massive amounts of glucose into the blood.
With the formula below, freezing-point depression can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called cryoscopy (Greek cryo = cold, scopos = observe; "observe the cold") and relies on exact measurement of the freezing point. The degree of dissociation is measured by determining the van 't Hoff factor i by first determining mB and then comparing it to msolute. In this case, the molar mass of the solute must be known. The molar mass of a solute is determined by comparing mB with the amount of solute dissolved. In this case, i must be known, and the procedure is primarily useful for organic compounds using a nonpolar solvent. Cryoscopy is no longer as common a measurement method as it once was, but it was included in textbooks at the turn of the 20th century. As an example, it was still taught as a useful analytic procedure in Cohen's Practical Organic Chemistry of 1910, in which the molar mass of naphthalene is determined using a Beckmann freezing apparatus.
Laboratory uses
Freezing-point depression can also be used as a purity analysis tool when analyzed by differential scanning calorimetry. The results obtained are in mol%, but the method has its place, where other methods of analysis fail.
In the laboratory, lauric acid may be used to investigate the molar mass of an unknown substance via the freezing-point depression. The choice of lauric acid is convenient because the melting point of the pure compound is relatively high (43.8 °C). Its cryoscopic constant is 3.9 °C·kg/mol. By melting lauric acid with the unknown substance, allowing it to cool, and recording the temperature at which the mixture freezes, the molar mass of the unknown compound may be determined.
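A short worked example of this determination, using the lauric acid constants quoted above; the sample masses and the measured freezing point are hypothetical illustration values:

```python
# Sketch: cryoscopic molar-mass determination with lauric acid as solvent.
Kf = 3.9            # °C·kg/mol, cryoscopic constant of lauric acid (from text)
Tf_pure = 43.8      # °C, melting point of pure lauric acid (from text)
Tf_mix = 40.0       # °C, hypothetical observed freezing point of the mixture
m_solute = 1.00     # g of the unknown, non-dissociating solute (i = 1)
m_solvent = 0.0100  # kg of lauric acid

dT = Tf_pure - Tf_mix            # freezing-point depression, °C
molality = dT / Kf               # mol of solute per kg of solvent
moles = molality * m_solvent
print(f"molar mass ~ {m_solute / moles:.0f} g/mol")  # ~103 g/mol here
```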
This is also the same principle acting in the melting-point depression observed when the melting point of an impure solid mixture is measured with a melting-point apparatus since melting and freezing points both refer to the liquid-solid phase transition (albeit in different directions).
In principle, the boiling-point elevation and the freezing-point depression could be used interchangeably for this purpose. However, the cryoscopic constant is larger than the ebullioscopic constant, and the freezing point is often easier to measure with precision, which means measurements using the freezing-point depression are more precise.
FPD measurements are also used in the dairy industry to ensure that milk has not had extra water added. Milk with a FPD of over 0.509 °C is considered to be unadulterated.
Formula
For dilute solution
If the solution is treated as an ideal solution, the extent of freezing-point depression depends only on the solute concentration and can be estimated by a simple linear relationship with the cryoscopic constant ("Blagden's law"):
ΔTF = KF · b · i
where:
ΔTF is the decrease in freezing point, defined as the freezing point of the pure solvent minus the freezing point of the solution; the formula gives a positive value, since all factors are positive. From the ΔTF calculated with the formula, the freezing point of the solution is the freezing point of the pure solvent minus ΔTF.
KF is the cryoscopic constant, which depends on the properties of the solvent, not the solute. (Note: when conducting experiments, a higher KF value makes it easier to observe larger drops in the freezing point.)
b is the molality (moles of solute per kilogram of solvent)
i is the van 't Hoff factor (number of ion particles per formula unit of solute, e.g. i = 2 for NaCl, 3 for BaCl2).
Some values of the cryoscopic constant KF for selected solvents: water 1.86 K·kg/mol, benzene 5.12 K·kg/mol, and camphor approximately 40 K·kg/mol.
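A minimal sketch of Blagden's law for a worked case — one mole of table salt dissolved in a kilogram of water; the composition is an example, and the ideal-dilute formula is used even though brine this concentrated deviates from it (see the next subsection):

```python
# Sketch: freezing-point depression dT = i * Kf * b for NaCl in water.
Kf_water = 1.86    # °C·kg/mol, cryoscopic constant of water
i_NaCl = 2         # van 't Hoff factor for fully dissociated NaCl
moles_NaCl = 1.0   # example amount of solute
kg_water = 1.0     # example amount of solvent

b = moles_NaCl / kg_water          # molality, mol/kg
dT = i_NaCl * Kf_water * b         # depression below 0 °C
print(f"freezing point ~ {-dT:.2f} °C")  # about -3.72 °C
```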
For concentrated solution
The simple relation above does not consider the nature of the solute, so it is only effective in a dilute solution. For a more accurate calculation at higher concentration, for ionic solutes, Ge and Wang (2010) proposed a new equation:
In the above equation, TF is the normal freezing point of the pure solvent (273 K for water, for example); aliq is the activity of the solvent in the solution (water activity for an aqueous solution); ΔHfus is the enthalpy change of fusion of the pure solvent at TF, which is 333.6 J/g for water at 273 K; ΔCp,fus is the difference between the heat capacities of the liquid and solid phases at TF, which is 2.11 J/(g·K) for water.
The solvent activity can be calculated from the Pitzer model or modified TCPC model, which typically requires 3 adjustable parameters. For the TCPC model, these parameters are available for many single salts.
Ethanol example
The freezing point of ethanol–water mixtures falls with increasing ethanol content, dropping far below 0 °C at high ethanol fractions.
See also
Melting-point depression
Boiling-point elevation
Colligative properties
Deicing
Eutectic point
Frigorific mixture
List of boiling and freezing information of solvents
Snow removal
References
Amount of substance
Chemical properties
Phase transitions | Freezing-point depression | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,092 | [
"Scalar physical quantities",
"Physical phenomena",
"Phase transitions",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Phases of matter",
"Critical phenomena",
"Amount of substance",
"nan",
"Statistical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"... |
607,530 | https://en.wikipedia.org/wiki/Reaction%20mechanism | In chemistry, a reaction mechanism is the step by step sequence of elementary reactions by which overall chemical reaction occurs.
A chemical mechanism is a theoretical conjecture that tries to describe in detail what takes place at each stage of an overall chemical reaction. The detailed steps of a reaction are not observable in most cases. The conjectured mechanism is chosen because it is thermodynamically feasible and has experimental support in isolated intermediates (see next section) or other quantitative and qualitative characteristics of the reaction. It also describes each reactive intermediate, activated complex, and transition state, which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also explain the reason for the reactants and catalyst used, the stereochemistry observed in reactants and products, all products formed and the amount of each.
The electron or arrow pushing method is often used in illustrating a reaction mechanism; for example, see the illustration of the mechanism for benzoin condensation in the following examples section.
Mechanisms are also of interest in inorganic chemistry. An often-quoted mechanistic experiment involved the reaction of the labile hexaaqua chromium(II) reductant with the exchange-inert pentaamminecobalt(III) chloride.
Reaction intermediates
Reaction intermediates are chemical species, often unstable and short-lived. They can, however, sometimes be isolated. They are neither reactants nor products of the overall chemical reaction, but temporary products and/or reactants in the mechanism's reaction steps. Reaction intermediates are often confused with the transition state. The transition states are, in contrast, fleeting, high-energy species that cannot be isolated. The kinetics (relative rates of the reaction steps and the rate equation for the overall reaction) are discussed in terms of the energy required for the conversion of the reactants to the proposed transition states (molecular states that correspond to maxima on the reaction coordinates, and to saddle points on the potential energy surface for the reaction).
Chemical kinetics
Information about the mechanism of a reaction is often provided by analyzing chemical kinetics to determine the reaction order in each reactant.
Illustrative is the oxidation of carbon monoxide by nitrogen dioxide:
CO + NO2 → CO2 + NO
The rate law for this reaction is:
rate = k[NO2]^2
This form shows that the rate-determining step does not involve CO. Instead, the slow step involves two molecules of NO2. A possible mechanism for the overall reaction that explains the rate law is:
2 NO2 → NO3 + NO (slow)
NO3 + CO → NO2 + CO2 (fast)
Each step is called an elementary step, and each has its own rate law and molecularity. The sum of the elementary steps gives the net reaction.
When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate that obeys the rate law rate = k[NO2]^2.
Other reactions may have mechanisms of several consecutive steps. In organic chemistry, the reaction mechanism for the benzoin condensation, put forward in 1903 by A. J. Lapworth, was one of the first proposed reaction mechanisms.
A chain reaction is an example of a complex mechanism, in which the propagation steps form a closed cycle.
In a chain reaction, the intermediate produced in one step generates an intermediate in another step.
Intermediates are called chain carriers. Sometimes the chain carriers are radicals; they can be ions as well. In nuclear fission they are neutrons.
Chain reactions have several steps, which may include:
Chain initiation: this can be by thermolysis (heating the molecules) or photolysis (absorption of light) leading to the breakage of a bond.
Propagation: a chain carrier makes another carrier.
Branching: one carrier makes more than one carrier.
Retardation: a chain carrier may react with a product reducing the rate of formation of the product. It makes another chain carrier, but the product concentration is reduced.
Chain termination: radicals combine and the chain carriers are lost.
Inhibition: chain carriers are removed by processes other than termination, such as by forming radicals.
Even though all these steps can appear in one chain reaction, the minimum necessary ones are initiation, propagation, and termination.
An example of a simple chain reaction is the thermal decomposition of acetaldehyde (CH3CHO) to methane (CH4) and carbon monoxide (CO). The experimental reaction order is 3/2, which can be explained by a Rice-Herzfeld mechanism.
This reaction mechanism for acetaldehyde has four steps, with a rate equation for each step:
Initiation : CH3CHO → •CH3 + •CHO (Rate=k1 [CH3CHO])
Propagation: CH3CHO + •CH3 → CH4 + CH3CO• (Rate=k2 [CH3CHO][•CH3])
Propagation: CH3CO• → •CH3 + CO (Rate=k3 [CH3CO•])
Termination: •CH3 + •CH3 → CH3CH3 (Rate=k4 [•CH3]2)
For the overall reaction, the rates of change of the concentration of the intermediates •CH3 and CH3CO• are zero, according to the steady-state approximation, which is used to account for the rate laws of chain reactions.
d[•CH3]/dt = k1[CH3CHO] – k2[•CH3][CH3CHO] + k3[CH3CO•] – 2k4[•CH3]^2 = 0
and d[CH3CO•]/dt = k2[•CH3][CH3CHO] – k3[CH3CO•] = 0
The sum of these two equations is k1[CH3CHO] – 2k4[•CH3]^2 = 0. This may be solved to find the steady-state concentration of •CH3 radicals as [•CH3] = (k1/2k4)^(1/2) [CH3CHO]^(1/2).
It follows that the rate of formation of CH4 is d[CH4]/dt = k2[•CH3][CH3CHO] = k2(k1/2k4)^(1/2) [CH3CHO]^(3/2)
Thus the mechanism explains the observed 3/2-order rate expression for the principal products CH4 and CO. The exact rate law may be even more complicated; there are also minor products such as acetone (CH3COCH3) and propanal (CH3CH2CHO).
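The steady-state algebra above can also be checked symbolically. The sketch below reproduces the 3/2-order rate law with sympy; the variable names are ad hoc, with A standing for [CH3CHO]:

```python
# Sketch: steady-state derivation of the Rice-Herzfeld rate law with sympy.
import sympy as sp

k1, k2, k3, k4, A = sp.symbols("k1 k2 k3 k4 A", positive=True)
ch3, ch3co = sp.symbols("ch3 ch3co", positive=True)  # [*CH3], [CH3CO*]

eqs = [
    sp.Eq(k1*A - k2*ch3*A + k3*ch3co - 2*k4*ch3**2, 0),  # d[*CH3]/dt = 0
    sp.Eq(k2*ch3*A - k3*ch3co, 0),                       # d[CH3CO*]/dt = 0
]
sol = sp.solve(eqs, [ch3, ch3co], dict=True)[0]

rate_CH4 = sp.simplify(k2 * sol[ch3] * A)  # d[CH4]/dt = k2 [*CH3][CH3CHO]
print(rate_CH4)  # proportional to sqrt(k1/(2*k4)) * A**(3/2), as derived above
```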
Other experimental methods to determine mechanism
Many experiments that suggest the possible sequence of steps in a reaction mechanism have been designed, including:
measurement of the effect of temperature (Arrhenius equation) to determine the activation energy
spectroscopic observation of reaction intermediates
determination of the stereochemistry of products, for example in nucleophilic substitution reactions
measurement of the effect of isotopic substitution on the reaction rate
for reactions in solution, measurement of the effect of pressure on the reaction rate to determine the volume change on formation of the activated complex
for reactions of ions in solution, measurement of the effect of ionic strength on the reaction rate
direct observation of the activated complex by pump-probe spectroscopy
infrared chemiluminescence to detect vibrational excitation in the products
electrospray ionization mass spectrometry.
crossover experiments.
Theoretical modeling
A correct reaction mechanism is an important part of accurate predictive modeling. For many combustion and plasma systems, detailed mechanisms are not available or require development.
Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group additivity methods must be used to obtain the required parameters.
Computational chemistry methods can also be used to calculate potential energy surfaces for reactions and determine probable mechanisms.
Molecularity
Molecularity in chemistry is the number of colliding molecular entities that are involved in a single reaction step.
A reaction step involving one molecular entity is called unimolecular.
A reaction step involving two molecular entities is called bimolecular.
A reaction step involving three molecular entities is called trimolecular or termolecular.
In general, reaction steps involving more than three molecular entities do not occur, because it is statistically improbable, in terms of the Maxwell–Boltzmann distribution, to find such a transition state.
See also
Organic reactions by mechanism
Nucleophilic acyl substitution
Neighbouring group participation
Finkelstein reaction
Lindemann mechanism
Electrochemical reaction mechanism
Nucleophilic abstraction
References
External links
Reaction mechanisms for combustion of hydrocarbons
Mechanism
Chemical kinetics
Chemical reaction engineering
Combustion | Reaction mechanism | [
"Chemistry",
"Engineering"
] | 1,869 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"Chemical engineering",
"Combustion",
"Physical organic chemistry",
"nan",
"Chemical kinetics"
] |
607,577 | https://en.wikipedia.org/wiki/Activated%20complex | In chemistry, an activated complex represents a collection of intermediate structures in a chemical reaction when bonds are breaking and forming. The activated complex is an arrangement of atoms in an arbitrary region near the saddle point of a potential energy surface. The region represents not one defined state, but a range of unstable configurations that a collection of atoms pass through between the reactants and products of a reaction. Activated complexes have partial reactant and product character, which can significantly impact their behaviour in chemical reactions.
The terms activated complex and transition state are often used interchangeably, but they represent different concepts. Transition states only represent the highest potential energy configuration of the atoms during the reaction, while activated complex refers to a range of configurations near the transition state. In a reaction coordinate, the transition state is the configuration at the maximum of the diagram while the activated complex can refer to any point near the maximum.
Transition state theory (also known as activated complex theory) studies the kinetics of reactions that pass through a defined intermediate state with a standard Gibbs energy of activation ΔG‡. The transition state, represented by the double-dagger symbol ‡, is the exact configuration of atoms that has an equal probability of forming either the reactants or the products of the given reaction.
The activation energy is the minimum amount of energy to initiate a chemical reaction and form the activated complex. The energy serves as a threshold that reactant molecules must surpass to overcome the energy barrier and transition into the activated complex. Endothermic reactions absorb energy from the surroundings, while exothermic reactions release energy. Some reactions occur spontaneously, while others necessitate an external energy input. The reaction can be visualized using a reaction coordinate diagram to show the activation energy and potential energy throughout the reaction.
Activated complexes were first discussed in transition state theory (also called activated complex theory), which was first developed by Eyring, Evans, and Polanyi in 1935.
Reaction rate
Transition state theory
Transition state theory explains the dynamics of reactions. The theory is based on the idea that there is an equilibrium between the activated complex and the reactant molecules. The theory incorporates concepts from collision theory, which states that for a reaction to occur, reacting molecules must collide with a minimum energy and correct orientation. The reactants are first transformed into the activated complex before breaking into the products. From the properties of the activated complex and reactants, the reaction rate constant is
k = (kB·T/h)·K‡
where K‡ is the equilibrium constant between the activated complex and the reactants, kB is the Boltzmann constant, T is the thermodynamic temperature, and h is the Planck constant. Transition state theory is based on classical mechanics, as it assumes that as the reaction proceeds, the molecules will never return to the transition state.
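In working form, the equilibrium constant is usually expressed through the activation free energy, giving k = (kB·T/h)·exp(−ΔG‡/RT). A minimal sketch, with an example activation free energy of 80 kJ/mol:

```python
# Sketch: Eyring-form rate constant k = (kB*T/h) * exp(-dG/(R*T)).
import math

kB = 1.380649e-23   # J/K, Boltzmann constant
h = 6.62607015e-34  # J*s, Planck constant
R = 8.314462618     # J/(mol*K), gas constant

def eyring_rate(dG_activation, T):
    """First-order rate constant (1/s); dG_activation in J/mol, T in K."""
    return (kB * T / h) * math.exp(-dG_activation / (R * T))

print(f"{eyring_rate(80e3, 298.15):.2e} 1/s")  # ~6.0e-02 1/s for 80 kJ/mol
```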
Symmetry
An activated complex with high symmetry can decrease the accuracy of rate expressions. Error can arise from introducing symmetry numbers into the rotational partition functions for the reactants and activated complexes. To reduce errors, the symmetry numbers can be omitted and the rate expression multiplied instead by a statistical factor, where the statistical factor is the number of equivalent activated complexes that can be formed, and the partition functions are evaluated with their symmetry numbers omitted.
The activated complex is a collection of molecules that forms and then decomposes along a particular internal normal coordinate. Ordinary molecules have three translational degrees of freedom, and in most respects the properties of activated complexes are similar. However, activated complexes have an extra, loose degree of freedom associated with motion along the reaction coordinate: approaching the energy barrier, crossing it, and then dissociating.
See also
Coordination complex
Reaction intermediate
References
Chemical kinetics
Reaction mechanisms | Activated complex | [
"Chemistry"
] | 705 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"Chemical kinetics",
"Physical organic chemistry"
] |