| id (int64, 39–79M) | url (string, 32–168 chars) | text (string, 7–145k chars) | source (string, 2–105 chars) | categories (list, 1–6 items) | token_count (int64, 3–32.2k) | subcategories (list, 0–27 items) |
|---|---|---|---|---|---|---|
93,070 | https://en.wikipedia.org/wiki/A%20New%20Kind%20of%20Science | A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.
Computation and its implications
The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world.
Simple programs
The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases: repeated application of the same simple rule to its own output, over many iterations, builds up intricate behavior. This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, among others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in one and two dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; and reversal-addition. For a program to qualify as simple, there are several requirements:
Its operation can be completely explained by a simple graphical illustration.
It can be completely explained in a few sentences of human language.
It can be implemented in a computer language using just a few lines of code.
The number of its possible variations is small enough so that all of them can be computed.
Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant proportion of them can produce great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
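To make the idea of enumerating simple programs concrete, here is a minimal illustrative sketch in Python (not from the book): it surveys all 256 elementary cellular automaton rules, evolving each from a single black cell, and counts distinct rows as a crude proxy for behavioral richness. The rule encoding is Wolfram's standard numbering; the grid width, step count, and the richness measure are arbitrary choices for illustration.

```python
# Enumerate all 256 elementary cellular automata and evolve each one,
# counting distinct rows as a crude proxy for behavioral richness.
# Illustrative sketch only; parameters are arbitrary choices.

WIDTH, STEPS = 64, 32

def step(cells, rule):
    """Apply one update of an elementary CA, given its Wolfram rule number."""
    n = len(cells)
    # Each cell's new value is the bit of `rule` indexed by the 3-cell
    # neighborhood (left, center, right) read as a binary number.
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule):
    """Run `rule` from a single black cell and return all rows."""
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    history = [row]
    for _ in range(STEPS):
        row = step(row, rule)
        history.append(row)
    return history

# Brute-force survey: which rules keep producing new rows?
richness = {r: len({tuple(row) for row in evolve(r)}) for r in range(256)}
for r in (30, 90, 110, 250):
    print(f"rule {r:3d}: {richness[r]} distinct rows out of {STEPS + 1}")
```

Even this toy survey separates rules that quickly settle into repetition from rules, such as rule 30, that keep producing new rows, echoing the book's observation that complexity appears already among the very simplest programs.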
Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.
Mapping and mining the computational universe
In order to study simple rules and their often-complex behavior, Wolfram argues that it is necessary to systematically explore all these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.
The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible. Some forms of production are also in some ways unique to this field of study—for example, the discovery of computational mechanisms that emerge in different systems but in bizarrely different forms.
Another type of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. But in general, Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.
Systematic abstract science
While Wolfram advocates simple programs as a scientific discipline, he also argues that its methodology will revolutionize other fields of science. The basis of his argument is that the study of simple programs is the minimal possible form of science, grounded equally in both abstraction and empirical experimentation. Every aspect of the methodology NKS advocates is optimized to make experimentation as direct, easy, and meaningful as possible while maximizing the chances that the experiment will do something unexpected. Just as this methodology allows computational mechanisms to be studied in their simplest forms, Wolfram argues that the process of doing so engages with the mathematical basis of the physical world, and therefore has much to offer the sciences.
Wolfram argues that the computational realities of the universe make science hard for fundamental reasons. But he also argues that by understanding the importance of these realities, we can learn to use them in our favor. For instance, instead of reverse engineering our theories from observation, we can enumerate systems and then try to match them to the behaviors we observe. A major theme of NKS is investigating the structure of the possibility space. Wolfram argues that science is far too ad hoc, in part because the models used are too complicated and unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze.
Philosophical underpinnings
Computational irreducibility
Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be "reduced"), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models. Likewise, his idea of intrinsic randomness generation—that natural systems can generate their own randomness, rather than using chaos theory or stochastic perturbations—implies that computational models do not need to include explicit randomness.
Principle of computational equivalence
Based on his experimental results, Wolfram developed the principle of computational equivalence (PCE): the principle says that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power. Most systems can attain this level, and such systems can, in principle, compute the same things as a computer. Computation is therefore simply a question of translating inputs and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems.
The principle can be restated as follows: almost all processes that are not obviously simple are of equivalent sophistication. From this principle, Wolfram draws an array of concrete deductions that he argues reinforce his theory. Possibly the most important of these is an explanation of why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like the concept of "heat", but simply a label for all systems whose computations are sophisticated. Wolfram argues that understanding this makes possible the "normal science" of the NKS paradigm.
Applications and results
NKS contains a number of specific results and ideas, and they can be organized into several themes. One common theme of examples and applications is demonstrating how little complexity it takes to achieve interesting behavior, and how the proper methodology can discover this behavior.
First, there are several cases where NKS introduces what was, during the book's composition, the simplest known system in some class that has a particular characteristic. Some examples include the first primitive recursive function that results in complexity, the smallest universal Turing machine, and the shortest axiom for propositional calculus. In a similar vein, Wolfram also demonstrates many simple programs that exhibit phenomena like phase transitions, conserved quantities, continuum behavior, and thermodynamics that are familiar from traditional science. Simple computational models of natural systems like shell growth, fluid turbulence, and phyllotaxis are a final category of applications that fall in this theme.
Another common theme is taking facts about the computational universe as a whole and using them to reason about fields in a holistic way. For instance, Wolfram discusses how facts about the computational universe inform evolutionary theory, SETI, free will, computational complexity theory, and philosophical fields like ontology, epistemology, and even postmodernism.
Wolfram suggests that the theory of computational irreducibility may explain how free will is possible in a nominally deterministic universe. He posits that the computational process in the brain of the being with free will is so complex that it cannot be captured in a simpler computation, due to the principle of computational irreducibility. Thus, while the process is indeed deterministic, there is no better way to determine the being's will than, in essence, to run the experiment and let the being exercise it.
The book also contains a number of results—both experimental and analytic—about what a particular automaton computes, or what its characteristics are, using some methods of analysis.
The book contains a new technical result in describing the Turing completeness of the Rule 110 cellular automaton. Very small Turing machines can simulate Rule 110, which Wolfram demonstrates using a 2-state 5-symbol universal Turing machine. Wolfram conjectures that a particular 2-state 3-symbol Turing machine is universal. In 2007, as part of commemorating the book's fifth anniversary, Wolfram's company offered a $25,000 prize for proof that this Turing machine is universal. Alex Smith, a computer science student from Birmingham, UK, won the prize later that year by proving Wolfram's conjecture.
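To give a sense of how little machinery such results involve, the following is a generic Turing-machine simulator in Python. The 2-state, 3-symbol transition table shown is a made-up toy for illustration, not Wolfram's actual 2,3 machine, whose rule table and Smith's universality proof are considerably more subtle.

```python
from collections import defaultdict

def run_turing_machine(rules, steps, start_state="A"):
    """Simulate a Turing machine on an initially blank (all-0) tape.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    with move = +1 (right) or -1 (left).
    """
    tape = defaultdict(int)          # unbounded tape, blank symbol 0
    head, state = 0, start_state
    for _ in range(steps):
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return tape, head, state

# Toy 2-state, 3-symbol table (a placeholder, NOT Wolfram's 2,3 machine).
toy_rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
    ("B", 0): (2, -1, "A"), ("B", 1): (2, +1, "B"), ("B", 2): (0, +1, "A"),
}

tape, head, state = run_turing_machine(toy_rules, steps=100)
cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
print("tape after 100 steps:", cells)
```

A machine of this size is specified by just six transitions, which is what makes the universality of machines this small so striking.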
Reception
A New Kind of Science received coverage in periodicals including The New York Times, Newsweek, Wired, and The Economist. Some scientists criticized the book as abrasive and arrogant, and perceived a fatal flaw—that simple systems such as cellular automata are not complex enough to describe the degree of complexity in evolved systems—and observed that Wolfram ignored the research categorizing the complexity of systems. Although critics accept Wolfram's result showing universal computation, they view it as minor and dispute Wolfram's claim of a paradigm shift. Others found that the work contained valuable insights and refreshing ideas. Wolfram addressed his critics in a series of blog posts.
Scientific philosophy
A tenet of NKS is that the simpler the system, the more likely a version of it will recur in a wide variety of more complicated contexts. Therefore, NKS argues that systematically exploring the space of simple programs will lead to a base of reusable knowledge. But many scientists believe that of all possible parameters, only some actually occur in the universe. For instance, of all possible permutations of the symbols making up an equation, most will be essentially meaningless. NKS has also been criticized for asserting that the behavior of simple systems is somehow representative of all systems.
Methodology
A common criticism of NKS is that it does not follow established scientific methodology. For instance, NKS does not establish rigorous mathematical definitions, nor does it attempt to prove theorems; and most formulas and equations are written in Mathematica rather than standard notation. Along these lines, NKS has also been criticized for being heavily visual, with much information conveyed by pictures that lack formal meaning. It has also been criticized for not using modern research in the field of complexity, particularly works on complexity from a rigorous mathematical perspective. And it has been criticized for misrepresenting chaos theory.
Utility
NKS has been criticized for not providing specific results that are immediately applicable to ongoing scientific research. There has also been criticism, implicit and explicit, that the study of simple programs has little connection to the physical universe and hence is of limited value. Steven Weinberg has pointed out that no real-world system has been satisfactorily explained using Wolfram's methods. Mathematician Steven G. Krantz wrote, "Just because Wolfram can cook up a cellular automaton that seems to produce the spot pattern on a leopard, may we safely conclude that he understands the mechanism by which the spots are produced on the leopard, or why the spots are there, or what function (evolutionary or mating or camouflage or other) they perform?"
Principle of computational equivalence (PCE)
The principle of computational equivalence (PCE) has been criticized for being vague, unmathematical, and not making directly verifiable predictions. It has also been criticized for being contrary to the spirit of research in mathematical logic and computational complexity theory, which seek to make fine-grained distinctions between levels of computational sophistication, and for wrongly conflating different kinds of universality property. Moreover, critics such as Ray Kurzweil have argued that it ignores the distinction between hardware and software; while two computers may be equivalent in power, it does not follow that any two programs they might run are also equivalent. Others suggest it is little more than a rechristening of the Church–Turing thesis.
The fundamental theory (NKS Chapter 9)
Wolfram's speculations of a direction toward a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, Professor of Computer Science at the University of Texas at Austin, also argues that Wolfram's methods cannot be compatible with both special relativity and the observed violations of Bell inequalities, and hence cannot explain the results of Bell tests.
Konrad Zuse and Edward Fredkin pioneered the idea of a computable universe: Zuse proposed in his book Calculating Space that the world might be like a cellular automaton, an idea Fredkin later developed further using a toy model called Salt. It has been claimed that NKS tries to take these ideas as its own, but Wolfram's model of the universe is a rewriting network, not a cellular automaton; Wolfram himself has suggested that a cellular automaton cannot account for relativistic features such as the absence of an absolute time frame. Jürgen Schmidhuber has also charged that his work on Turing machine-computable physics was appropriated without attribution, namely his idea of enumerating possible Turing-computable universes.
In a 2002 review of NKS, the Nobel laureate and elementary particle physicist Steven Weinberg wrote, "Wolfram himself is a lapsed elementary particle physicist, and I suppose he can't resist trying to apply his experience with digital computer programs to the laws of nature. This has led him to the view (also considered in a 1981 paper by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a set of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps. Following an idea of Edward Fredkin, he concludes that the universe itself would then be an automaton, like a giant computer. It's possible, but I can't see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood."
Natural selection
Wolfram's claim that natural selection is not the fundamental cause of complexity in biology has led journalist Chris Lavers to say that Wolfram does not understand the theory of evolution.
Originality
NKS has been heavily criticized as not original or important enough to justify its title and claims.
The authoritative manner in which NKS presents a vast number of examples and arguments has been criticized as leading the reader to believe that each of these is original to Wolfram. In particular, one of the most substantial new technical results presented in the book, that the rule 110 cellular automaton is Turing complete, was not proven by Wolfram but by his research assistant Matthew Cook, whom Wolfram credits. The book's notes section does acknowledge many discoveries made by other scientists, citing their names together with historical facts, although not in the form of a traditional bibliography. Additionally, the idea that very simple rules often generate great complexity was already established in science, particularly in chaos theory and complex systems.
See also
Digital physics
Scientific reductionism
Calculating Space
Marcus Hutter's "Universal Artificial Intelligence" algorithm
References
External links
A New Kind of Science free E-Book
What We've Learned from NKS — YouTube playlist with extensive discussion of each NKS chapter, in which (as of 2022) Stephen Wolfram revisits the chapters in light of recent developments such as the Wolfram Physics Project
2002 non-fiction books
Algorithmic art
Cellular automata
Computer science books
Complex systems theory
Mathematics and art
Metatheory of science
Science books
Self-organization
Systems theory books
Wolfram Research
Computational science | A New Kind of Science | [
"Mathematics"
] | 3,721 | [
"Self-organization",
"Applied mathematics",
"Recreational mathematics",
"Cellular automata",
"Computational science",
"Dynamical systems"
] |
93,188 | https://en.wikipedia.org/wiki/Triple-alpha%20process | The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon.
Triple-alpha process in stars
Helium accumulates in the cores of stars as a result of the proton–proton chain reaction and the carbon–nitrogen–oxygen cycle.
The nuclear fusion of two helium-4 nuclei produces beryllium-8, which is highly unstable and decays back into smaller nuclei with a half-life of 8.19×10⁻¹⁷ s, unless within that time a third alpha particle fuses with the beryllium-8 nucleus to produce an excited resonance state of carbon-12, called the Hoyle state. This state nearly always decays back into three alpha particles, but about once in 2421.3 times it releases energy and changes into the stable base form of carbon-12. When a star runs out of hydrogen to fuse in its core, it begins to contract and heat up. If the central temperature rises to 10⁸ K, six times hotter than the Sun's core, alpha particles can fuse fast enough to get past the beryllium-8 barrier and produce significant amounts of stable carbon-12.
4He + 4He → 8Be (−0.0918 MeV)
8Be + 4He → 12C + 2γ (+7.367 MeV)
The net energy release of the process is 7.275 MeV.
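As a consistency check, the net release quoted here is just the sum of the two steps' energy values, equivalently the mass-energy of three alpha particles minus that of a carbon-12 nucleus (a restatement of the figures above, not an independent derivation):

```latex
Q_{3\alpha} = (-0.0918 + 7.367)\ \mathrm{MeV} = 7.275\ \mathrm{MeV}
            = \left[\,3\,m\!\left({}^{4}\mathrm{He}\right) - m\!\left({}^{12}\mathrm{C}\right)\right] c^{2}
```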
As a side effect of the process, some carbon nuclei fuse with additional helium to produce a stable isotope of oxygen and energy:
12C + 4He → 16O + γ (+7.162 MeV)
Nuclear fusion of helium with hydrogen produces lithium-5, which is also highly unstable and decays back into smaller nuclei with a half-life of about 3.7×10⁻²² s.
Fusing carbon with additional helium nuclei can create heavier elements in a chain of stellar nucleosynthesis known as the alpha process, but these reactions are only significant at temperatures and pressures higher than those in cores undergoing the triple-alpha process. This creates a situation in which stellar nucleosynthesis produces large amounts of carbon and oxygen, but only a small fraction of those elements are converted into neon and heavier elements. Oxygen and carbon are the main "ash" of helium-4 burning.
Primordial carbon
The triple-alpha process is ineffective at the pressures and temperatures early in the Big Bang. One consequence of this is that no significant amount of carbon was produced in the Big Bang.
Resonances
Ordinarily, the probability of the triple-alpha process is extremely small. However, the beryllium-8 ground state has almost exactly the energy of two alpha particles. In the second step, 8Be + 4He has almost exactly the energy of an excited state of 12C. This resonance greatly increases the probability that an incoming alpha particle will combine with beryllium-8 to form carbon. The existence of this resonance was predicted by Fred Hoyle before its actual observation, based on the physical necessity for it to exist, in order for carbon to be formed in stars. The prediction and then discovery of this energy resonance and process gave very significant support to Hoyle's hypothesis of stellar nucleosynthesis, which posited that all chemical elements had originally been formed from hydrogen, the true primordial substance. The anthropic principle has been cited to explain the fact that nuclear resonances are sensitively arranged to create large amounts of carbon and oxygen in the universe.
Nucleosynthesis of heavy elements
With further increases of temperature and density, fusion processes produce nuclides only up to nickel-56 (which decays later to iron); heavier elements (those beyond Ni) are created mainly by neutron capture. The slow capture of neutrons, the s-process, produces about half of elements beyond iron. The other half are produced by rapid neutron capture, the r-process, which probably occurs in core-collapse supernovae and neutron star mergers.
Reaction rate and stellar evolution
The triple-alpha steps are strongly dependent on the temperature and density of the stellar material. The power released by the reaction is approximately proportional to the temperature to the 40th power, and the density squared. In contrast, the proton–proton chain reaction produces energy at a rate proportional to the fourth power of temperature, the CNO cycle at about the 17th power of the temperature, and both are linearly proportional to the density. This strong temperature dependence has consequences for the late stage of stellar evolution, the red-giant stage.
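To see what these exponents mean in practice, here is a small illustrative computation in Python (only the exponents 4, 17, and 40 come from the text; the 10% figure is an arbitrary choice) showing how each rate responds to a modest temperature increase:

```python
# Relative rate change for a 10% temperature increase, given rate ∝ T^n.
# Exponents from the text: pp chain ~ T^4, CNO cycle ~ T^17, triple-alpha ~ T^40.
temperature_ratio = 1.10

for name, n in [("pp chain", 4), ("CNO cycle", 17), ("triple-alpha", 40)]:
    factor = temperature_ratio ** n
    print(f"{name:13s} (T^{n:2d}): rate grows by a factor of {factor:6.1f}")

# Approximate output: pp ~1.5x, CNO ~5.1x, triple-alpha ~45x. The triple-alpha
# rate is so temperature-sensitive that a small core heating causes a runaway,
# the mechanism behind the helium flash described next.
```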
For lower mass stars on the red-giant branch, the helium accumulating in the core is prevented from further collapse only by electron degeneracy pressure. The entire degenerate core is at the same temperature and pressure, so when its density becomes high enough, fusion via the triple-alpha process ignites throughout the core. The core is unable to expand in response to the increased energy production until the pressure is high enough to lift the degeneracy. As a consequence, the temperature increases, causing an increased reaction rate in a positive feedback cycle that becomes a runaway reaction. This process, known as the helium flash, lasts a matter of seconds but burns 60–80% of the helium in the core. During the core flash, the star's energy production can reach approximately 10¹¹ solar luminosities, comparable to the luminosity of a whole galaxy, although no effects are immediately observed at the surface, since all the energy is consumed in lifting the core from the degenerate to the normal, gaseous state. Since the core is no longer degenerate, hydrostatic equilibrium is once more established and the star begins to "burn" helium at its core and hydrogen in a spherical layer above the core. The star enters a steady helium-burning phase which lasts about 10% of the time it spent on the main sequence (the Sun is expected to burn helium at its core for about a billion years after the helium flash).
In higher mass stars, which evolve along the asymptotic giant branch, carbon and oxygen accumulate in the core as helium is burned, while hydrogen burning shifts to further-out layers, resulting in an intermediate helium shell. However, the boundaries of these shells do not shift outward at the same rate due to differing critical temperatures and temperature sensitivities for hydrogen and helium burning. When the temperature at the inner boundary of the helium shell is no longer high enough to sustain helium burning, the core contracts and heats up, while the hydrogen shell (and thus the star's radius) expand outward. Core contraction and shell expansion continue until the core becomes hot enough to reignite the surrounding helium. This process continues cyclically – with a period on the order of 1000 years – and stars undergoing this process have periodically variable luminosity. These stars also lose material from their outer layers in a stellar wind driven by radiation pressure, which ultimately becomes a superwind as the star enters the planetary nebula phase.
Discovery
The triple-alpha process is highly dependent on carbon-12 and beryllium-8 having resonances with slightly more energy than helium-4. Based on known resonances, by 1952 it seemed impossible for ordinary stars to produce carbon as well as any heavier element. Nuclear physicist William Alfred Fowler had noted the beryllium-8 resonance, and Edwin Salpeter had calculated the reaction rate for 8Be, 12C, and 16O nucleosynthesis taking this resonance into account. However, Salpeter calculated that red giants burned helium at temperatures of 2×10⁸ K or higher, whereas other work of the time hypothesized temperatures as low as 1.1×10⁸ K for the core of a red giant.
Salpeter's paper mentioned in passing the effects that unknown resonances in carbon-12 would have on his calculations, but the author never followed up on them. It was instead astrophysicist Fred Hoyle who, in 1953, used the abundance of carbon-12 in the universe as evidence for the existence of a carbon-12 resonance. The only way Hoyle could find that would produce an abundance of both carbon and oxygen was through a triple-alpha process with a carbon-12 resonance near 7.68 MeV, which would also eliminate the discrepancy in Salpeter's calculations.
Hoyle went to Fowler's lab at Caltech and said that there had to be a resonance of 7.68 MeV in the carbon-12 nucleus. (There had been reports of an excited state at about 7.5 MeV.) Fred Hoyle's audacity in doing this is remarkable, and initially, the nuclear physicists in the lab were skeptical. Finally, a junior physicist, Ward Whaling, fresh from Rice University, who was looking for a project decided to look for the resonance. Fowler permitted Whaling to use an old Van de Graaff generator that was not being used. Hoyle was back in Cambridge when Fowler's lab discovered a carbon-12 resonance near 7.65 MeV a few months later, validating his prediction. The nuclear physicists put Hoyle as first author on a paper delivered by Whaling at the summer meeting of the American Physical Society. A long and fruitful collaboration between Hoyle and Fowler soon followed, with Fowler even coming to Cambridge.
The final reaction product lies in a 0+ state (spin 0 and positive parity). Since the Hoyle state was predicted to be either a 0+ or a 2+ state, electron–positron pairs or gamma rays were expected to be seen. However, when experiments were carried out, the gamma emission reaction channel was not observed, and this meant the state must be a 0+ state. This state completely suppresses single gamma emission, since single gamma emission must carry away at least 1 unit of angular momentum. Pair production from an excited 0+ state is possible because their combined spins (0) can couple to a reaction that has a change in angular momentum of 0.
Improbability and fine-tuning
Carbon is a necessary component of all known life. 12C, a stable isotope of carbon, is abundantly produced in stars due to three factors:
The decay lifetime of a 8Be nucleus is four orders of magnitude larger than the time for two 4He nuclei (alpha particles) to scatter.
An excited state of the 12C nucleus exists a little (0.3193 MeV) above the energy level of 8Be + 4He. This is necessary because the ground state of 12C is 7.3367 MeV below the energy of 8Be + 4He; a 8Be nucleus and a 4He nucleus cannot reasonably fuse directly into a ground-state 12C nucleus. However, 8Be and 4He use the kinetic energy of their collision to fuse into the excited 12C (kinetic energy supplies the additional 0.3193 MeV necessary to reach the excited state), which can then transition to its stable ground state. According to one calculation, the energy level of this excited state must be between about 7.3 MeV and 7.9 MeV to produce sufficient carbon for life to exist, and must be further "fine-tuned" to between 7.596 MeV and 7.716 MeV in order to produce the abundant level of 12C observed in nature. The Hoyle state has been measured to be about 7.65 MeV above the ground state of 12C (these figures are combined in the worked sum after this list).
In the reaction 12C + 4He → 16O, there is an excited state of oxygen which, if it were slightly higher, would provide a resonance and speed up the reaction. In that case, insufficient carbon would exist in nature; almost all of it would have converted to oxygen.
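Combining the figures quoted in this list gives the position of the Hoyle state relative to the 12C ground state, consistent with the measured value of about 7.65 MeV mentioned above:

```latex
E_{\text{Hoyle}} - E\!\left({}^{12}\mathrm{C}\ \text{g.s.}\right)
  = 7.3367\ \mathrm{MeV} + 0.3193\ \mathrm{MeV} = 7.656\ \mathrm{MeV}
```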
Some scholars argue the 7.656 MeV Hoyle resonance, in particular, is unlikely to be the product of mere chance. Fred Hoyle argued in 1982 that the Hoyle resonance was evidence of a "superintellect"; Leonard Susskind in The Cosmic Landscape rejects Hoyle's intelligent design argument. Instead, some scientists believe that different universes, portions of a vast "multiverse", have different fundamental constants: according to this controversial fine-tuning hypothesis, life can only evolve in the minority of universes where the fundamental constants happen to be fine-tuned to support the existence of life. Other scientists reject the hypothesis of the multiverse on account of the lack of independent evidence.
References
Nuclear fusion
Nucleosynthesis
Helium
Beryllium
Carbon
Concepts in stellar astronomy | Triple-alpha process | [
"Physics",
"Chemistry"
] | 2,533 | [
"Nuclear fission",
"Concepts in astrophysics",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Concepts in stellar astronomy",
"Nuclear fusion"
] |
93,825 | https://en.wikipedia.org/wiki/Monsanto | The Monsanto Company () was an American agrochemical and agricultural biotechnology corporation founded in 1901 and headquartered in Creve Coeur, Missouri. Monsanto's best-known product is Roundup, a glyphosate-based herbicide, developed in the 1970s. Later, the company became a major producer of genetically engineered crops. In 2018, the company ranked 199th on the Fortune 500 of the largest United States corporations by revenue.
Monsanto was one of four groups to introduce genes into plants in 1983, and was among the first to conduct field trials of genetically modified crops in 1987. It was one of the top-ten U.S. chemical companies until it divested most of its chemical businesses between 1997 and 2002, through a process of mergers and spin-offs that focused the company on biotechnology.
Monsanto was one of the first companies to apply the biotechnology industry business model to agriculture, using techniques developed by biotech drug companies. In this business model, companies recoup R&D expenses by exploiting biological patents.
Monsanto's roles in agricultural changes, biotechnology products, lobbying of government agencies, and roots as a chemical company have resulted in controversies. The company once manufactured controversial products such as the insecticide DDT, PCBs, Agent Orange, and recombinant bovine growth hormone.
In September 2016, German chemical company Bayer announced its intent to acquire Monsanto for US$66 billion in an all-cash deal. After gaining U.S. and EU regulatory approval, the sale was completed on June 7, 2018. The name Monsanto was no longer used, but Monsanto's previous product brand names were maintained. In June 2020, Bayer agreed to pay numerous settlements in lawsuits involving ex-Monsanto products Roundup, PCBs and Dicamba. Owing to the massive financial and reputational blows caused by ongoing litigation concerning Monsanto's herbicide Roundup, the Bayer-Monsanto merger is considered one of the worst corporate mergers in history.
History
"Pre-Pharmacia" Monsanto
1901 to WWII
In 1901, Monsanto was founded in St. Louis, Missouri, as a chemical company. The founder was John Francis Queeny, who, at age 42, was a 30‑year veteran of the nascent pharmaceutical industry. He funded the firm with his own money and capital from a soft drink distributor. He used for the company name the maiden name of his wife, Olga Méndez Monsanto, who was a scioness of the Monsanto family.
The company's first products were commodity food additives, such as the artificial sweetener saccharin, caffeine and vanillin.
Monsanto expanded to Europe in 1919 in a partnership with Graesser's Chemical Works at Cefn Mawr, Wales. The venture produced vanillin, aspirin and its raw ingredient salicylic acid, and later rubber processing chemicals.
In the 1920s, Monsanto expanded into basic industrial chemicals such as sulfuric acid and PCBs. Queeny's son Edgar Monsanto Queeny took over the company in 1928.
In 1926 the company founded and incorporated a town called Monsanto in Illinois (now known as Sauget). It was formed to provide minimal regulation and low taxes for Monsanto plants at a time when local jurisdictions had most of the responsibility for environmental rules. It was renamed in honor of Leo Sauget, its first village president.
In 1935, Monsanto bought the Swann Chemical Company in Anniston, Alabama, and thereby entered the business of producing PCBs.
In 1936, Monsanto acquired Thomas & Hochwalt Laboratories in Dayton, Ohio, to acquire the expertise of Charles Allen Thomas and Carroll A. Hochwalt. The acquisition became Monsanto's Central Research Department. Thomas spent the rest of his career at Monsanto, serving as President (1951–1960) and Board Chair (1960–1965). He retired in 1970. In 1943, Thomas was called to a meeting in Washington, D.C., with Leslie Groves, commander of the Manhattan Project, and James Conant, president of Harvard University and chairman of the National Defense Research Committee (NDRC). They urged Thomas to become co-director of the Manhattan Project at Los Alamos with Robert Oppenheimer, but Thomas was reluctant to leave Dayton and Monsanto. He joined the NDRC, and Monsanto's Central Research Department began to conduct related research. To that end, Monsanto operated the Dayton Project, and later Mound Laboratories, and assisted in the development of the first nuclear weapons.
Post-WWII
In 1946, Monsanto developed and marketed "All" laundry detergent, which it sold to Lever Brothers in 1957. In 1947, its styrene factory was destroyed in the Texas City Disaster. In 1949, Monsanto acquired American Viscose Corporation from Courtaulds. In 1954, Monsanto partnered with German chemical giant Bayer to form Mobay and market polyurethanes in the United States.
Monsanto began manufacturing DDT in 1944, along with some 15 other companies. This insecticide was used to kill malaria-transmitting mosquitoes, but it was banned in the United States in 1972 due to its harmful environmental impacts.
In 1977, Monsanto stopped producing PCBs; Congress banned PCB production two years later.
1960s and 1970s
In the mid‑1960s, William Standish Knowles and his team invented a way to selectively synthesize enantiomers via asymmetric hydrogenation. This was the first method for the catalytic production of pure chiral compounds. Knowles' team designed the "first industrial process to chirally synthesize an important compound"—L‑dopa, which is used to treat Parkinson's disease. In 2001, Knowles and Ryōji Noyori won the Nobel Prize in Chemistry. In the mid-1960s, chemists at Monsanto developed the Monsanto process for making acetic acid, which until 2000 was the most widely used production method. In 1964, Monsanto chemists invented AstroTurf (initially ChemGrass).
In the 1960s and 1970s, Monsanto was a producer of Agent Orange for United States Armed Forces operations in Vietnam, and settled out of court in a lawsuit brought by veterans in 1984. In 1968, it became the first company to start mass production of (visible) light-emitting diodes (LEDs), using gallium arsenide phosphide. From 1968 to 1970, sales doubled every few months. Their products (discrete LEDs and seven-segment numeric displays) became industry standards. The primary markets then were electronic calculators, digital watches and digital clocks. Monsanto became a pioneer of optoelectronics in the 1970s.
Between 1968 and 1974, the company sponsored the PGA Tour event in Pensacola, Florida, which was renamed the Monsanto Open.
In 1974, Harvard University and Monsanto signed a 10-year research grant to support the cancer research of Judah Folkman, which became the largest such arrangement ever made; medical inventions arising from that research were the first for which Harvard allowed its faculty to submit patent application.
1980 to 1989: Becoming an agribiotech company
Monsanto scientists were among the first to genetically modify a plant cell, publishing their results in 1983. Five years later the company conducted the first field tests of genetically modified crops. Increasing involvement in agricultural biotechnology dates from the installment of Richard Mahoney as Monsanto's CEO in 1983. This involvement increased under the leadership of Robert Shapiro, appointed CEO in 1995, leading ultimately to the disposition of product lines unrelated to agriculture.
In 1985, Monsanto acquired G.D. Searle & Company, a life sciences company that focused on pharmaceuticals, agriculture and animal health. In 1993, its Searle division filed a patent application for Celebrex, which in 1998 became the first selective COX‑2 inhibitor to be approved by the U.S. Food and Drug Administration (FDA). Celebrex became a blockbuster drug and was often mentioned as a key reason for Pfizer's acquisition of Monsanto's pharmaceutical business in 2002.
1990 to 1999: Moving into the seed market & industry consolidation
In 1994, Monsanto introduced a recombinant version of bovine somatotropin, brand-named Posilac. Monsanto later sold this business to Eli Lilly and Company.
In 1996, Monsanto purchased Agracetus, the biotechnology company that had generated the first transgenic cotton, soybeans, peanuts and other crops, and from which Monsanto had been licensing technology since 1991.
In 1997, Monsanto divested Solutia, a company created to carry off the responsibility for Monsanto's PCB business and associated liabilities, along with some related organic chemical production.
Monsanto first entered the maize seed business when it purchased 40% of Dekalb in 1996; it purchased the remainder of the corporation in 1998. In 1997, the company first published an annual report citing Monsanto's Law, a biotechnological take on Moore's Law, indicating its future directions and exponential growth in the use of biotechnology. In the same year, Californian GMO company Calgene was acquired. In 1998, Monsanto purchased Cargill's international seed business, which gave it access to sales and distribution facilities in 51 countries. In 2005, it finalized the purchase of Seminis Inc, a leading global vegetable and fruit seed company, for $1.4 billion. This made it the world's largest conventional seed company.
In 1999, Monsanto sold off NutraSweet Co. In December of the same year, Monsanto agreed to merge with Pharmacia & Upjohn, in a deal valuing the transaction at $27 billion. The agricultural division became a wholly owned subsidiary of the "new" Pharmacia, while Monsanto's medical research division, which included products such as Celebrex, was absorbed into Pharmacia's pharmaceutical business.
"Pre-Pharmacia" Monsanto overview
"Post-Pharmacia" Monsanto
2000 to 2009: Birth of the "new" Monsanto
In 2000, Pharmacia spun off its agro-biotech subsidiary into a new company, the "new Monsanto", focused on four key agricultural crops—soybeans, maize, wheat and cotton. Monsanto agreed to indemnify Pharmacia against potential liabilities from judgments against Solutia. As a result, the new Monsanto continued to be a party to numerous lawsuits over the prior Monsanto. Pharmacia was bought by Pfizer in 2003.
In 2005, Monsanto acquired Emergent Genetics and its Stoneville and NexGen cotton brands. Emergent was the third-largest U.S. cotton seed company, with about 12% of the U.S. market. Monsanto's goal was to obtain "a strategic cotton germplasm and traits platform".
Also in 2005, Monsanto purchased Seminis, the California-based world leader in vegetable seed production, for $1.4 billion. Seminis developed new vegetable varieties using advanced cross-pollination methods. Monsanto indicated that Seminis would continue with non-GM development, while not ruling out GM in the longer term.
In June 2007, Monsanto purchased Delta and Pine Land Company, a major cotton seed breeder, for $1.5 billion. As a condition for approval from the Department of Justice, Monsanto was obligated to divest its Stoneville cotton business, which it sold to Bayer, and to divest its NexGen cotton business, which it sold to Americot. Monsanto also exited the pig-breeding business by selling Monsanto Choice Genetics to Newsham Genetics LC in November, divesting itself of "any and all swine-related patents, patent applications, and all other intellectual property". In 2007, Monsanto and BASF announced a long-term agreement to cooperate in the research, development, and marketing of new plant biotechnology products.
In 2008, Monsanto purchased Dutch seed company De Ruiter Seeds for €546 million, and sold its POSILAC bovine somatotropin brand and related business to Elanco Animal Health, a division of Eli Lilly & Co, in August for $300 million plus "additional contingent consideration".
2010 to 2017: Further growth, Syngenta
In 2012, Monsanto purchased for $210 million Precision Planting Inc., a company that produced computer hardware and software designed to enable farmers to increase yield and productivity through more precise planting.
Monsanto purchased San Francisco–based Climate Corp for $930 million in 2013. Climate Corp makes local weather forecasts for farmers based on data modelling and historical data; if the forecasts were wrong, the farmer was compensated.
In May 2013, a worldwide protest against Monsanto corporation, called March Against Monsanto, was held in over 400 cities. A second protest took place in May 2014.
Monsanto tried to acquire Swiss agro-biotechnology rival Syngenta for US$46.5 billion in 2015, but failed. In that year Monsanto was the world's biggest supplier of seeds, controlling 26% of the global seed market (Du Pont was second with 21%). Monsanto was the only manufacturer of white phosphorus for military use in the US.
"Post-Pharmacia" Monsanto overview
Sale to Bayer
In September 2016, Monsanto agreed to be acquired by Bayer for US$66 billion. In an effort to receive regulatory clearance for the deal, Bayer announced the sale of significant portions of its current agriculture businesses, including its seed and herbicide businesses, to BASF.
The deal was approved by the European Union on March 21, 2018, and approved in the United States on May 29, 2018. The sale closed on June 7, 2018; Bayer announced its intent to discontinue the Monsanto name, with the combined company operating solely under the Bayer brand.
Under the terms of merger, Bayer promised to maintain Monsanto's more than 9,000 U.S. jobs and add 3,000 new U.S. high-tech positions.
The prospective merger parties said at the time the combined agriculture business planned to spend $16 billion on research and development over the next six years and at least $8 billion on research and development in United States.
Bayer would also establish its new global Seeds & Traits and North American commercial headquarters in St. Louis, Missouri.
The Bayer-Monsanto merger is widely considered to be one of the worst mergers in history, mostly due to the exposure to Roundup litigation. By 2023, Bayer's market value had declined by over 60% since its 2016 merger, leaving the company's overall worth at less than half of what it paid to acquire Monsanto.
Products and associated issues
Current products
Glyphosate herbicides
Monsanto introduced the herbicide glyphosate (brand name Roundup) in 1970; its last commercially relevant United States patent on glyphosate expired in 2000. Glyphosate has since been marketed by many agrochemical companies, in various solution strengths and with various adjuvants, under dozens of tradenames. As of 2009, glyphosate represented about 10% of Monsanto's revenue. Roundup-related products (which include genetically modified seeds) represented about half of Monsanto's gross margin.
Crop seed
As of 2015, Monsanto's line of seed products included corn, cotton, soy and vegetable seeds.
Row crops
Many of Monsanto's agricultural seed products are genetically modified, such as for resistance to herbicides, including glyphosate and dicamba. Monsanto calls glyphosate-tolerant seeds Roundup Ready. Monsanto's introduction of this system (planting a glyphosate-resistant seed and then applying glyphosate once plants emerged) allowed farmers to increase yield by planting rows closer together. Without it, farmers had to plant rows far enough apart to allow the control of post-emergent weeds with mechanical tillage. Farmers widely adopted the technology—for example over 80% of maize (Mon 832), soybean (MON-Ø4Ø32-6), cotton, sugar beet and canola planted in the United States are glyphosate-tolerant. Monsanto developed a Roundup Ready genetically modified wheat (MON 71800) but ended development in 2004 due to concerns from wheat exporters about the rejection of genetically modified (GM) wheat by foreign markets.
Two patents were critical to Monsanto's GM soybean business; one expired in 2011 and the other in 2014. The second expiration meant that glyphosate resistant soybeans became "generic". The first harvest of generic glyphosate-tolerant soybeans came in 2015. Monsanto broadly licensed the patent to other seed companies that include glyphosate resistance trait in their seed products. About 150 companies have licensed the technology, including competitors Syngenta and DuPont Pioneer.
Monsanto invented and sells genetically modified seeds that make a crystalline insecticidal protein from Bacillus thuringiensis, known as Bt. In 1995 Monsanto's potato plants producing Bt toxin were approved by the Environmental Protection Agency, following approval by the FDA, making it the first pesticide-producing crop to be approved in the United States. Monsanto subsequently developed Bt maize (MON 802, MON 809, MON 863, MON 810), Bt soybean and Bt cotton.
Monsanto produces seed that has multiple genetic modifications, also known as "stacked traits"—for instance, cotton that makes one or more Bt proteins and is resistant to glyphosate. One of these, created in collaboration with Dow Chemical Company, is called SmartStax. In 2011 Monsanto launched the Genuity brand for its stacked-trait products.
As of 2012, the agricultural seed lineup included Roundup Ready alfalfa, canola and sugarbeet; Bt and/or Roundup Ready cotton; sorghum hybrids; soybeans with various oil profiles, most with the Roundup Ready trait; and a wide range of wheat products, many of which incorporate the nontransgenic "clearfield" imazamox-tolerant trait from BASF.
In 2013 Monsanto launched the first transgenic drought tolerance trait in a line of corn hybrids branded DroughtGard. The MON 87460 trait is provided by the insertion of the cspB gene from the soil microbe Bacillus subtilis; it was approved by the USDA in 2011 and by China in 2013.
The "Xtend Crop System" includes seed genetically modified to be resistant to both glyphosate and dicamba, and a herbicide product including those two active ingredients. In December 2014, the system was approved for use in the US. In February 2016, China approved the Roundup Ready 2 Xtend system. The lack of European Union approval led many American traders to reject the use of Xtend soybeans over concerns that the new seeds would become mixed with EU-approved seeds, leading Europe to reject American soybean exports.
India-specific issues
In 2009, Monsanto scientists discovered insects that had developed resistance to the Bt cotton planted in Gujarat. Monsanto communicated this to the Indian government and its customers, stating that "Resistance is natural and expected, so measures to delay resistance are important. Among the factors that may have contributed to pink bollworm resistance to the Cry1Ac protein in Bollgard I in Gujarat are limited refuge planting and early use of unapproved Bt cotton seed, planted prior to GEAC approval of Bollgard I cotton, which may have had lower protein expression levels." The company advised farmers to switch to its second generation of Bt cotton, Bollgard II, which had two resistance genes instead of one, the widely recognised best practice to forestall, prevent, and cope with any kind of pesticide resistance. However, this advice was criticized: an internal analysis of the statement of the Ministry of Environment and Forests says it "appears that this could be a business strategy to phase out single gene events [that is, the first-generation Bollgard I product] and promote double genes [the second generation Bollgard II] which would fetch higher price".
Monsanto's GM cotton seed was the subject of NGO agitation because of its higher cost. Indian farmers crossed GM varieties with local varieties, using plant breeding, violating their agreements with Monsanto. In 2009, high prices of Bt Cotton were blamed for forcing farmers of Jhabua district into debt when the crops died due to lack of rain.
Vegetables
In 2012 Monsanto was the world's largest supplier of non-GE vegetable seeds by value, with sales of $800M. 95% of its vegetable-seed research and development was in conventional breeding, with a concentration on improving flavor. According to its website, it sold "4,000 distinct seed varieties representing more than 20 species". Broccoli with increased amounts of glucoraphanin, under the brand name Beneforté, was introduced in 2010 following development by its Seminis subsidiary.
Former products
Polychlorinated biphenyls (PCBs)
Until it ended production in 1977, Monsanto was the source of 99% of the polychlorinated biphenyls (PCBs) used by U.S. industry. They were sold under brand names including Aroclor and Santotherm; the name Santotherm is still used for non-chlorinated products. PCBs are a persistent organic pollutant, and cause cancer in both animals and humans, among other health effects. PCBs were initially welcomed due to the electrical industry's need for durable, safer (than flammable mineral oil) cooling and insulating fluid for industrial transformers and capacitors. PCBs were also commonly used as stabilizing additives in the manufacture of flexible PVC coatings for electrical wiring and in electronic components to enhance PVC heat and fire resistance. As transformer leaks occurred and toxicity problems arose near factories, their durability and toxicity became recognized as serious problems. PCB production was banned by the U.S. Congress in 1979 and by the Stockholm Convention on Persistent Organic Pollutants in 2001.
Agent Orange
Monsanto, Dow Chemical, and eight other chemical companies made Agent Orange for the U.S. Department of Defense. It was given its name from the color of the orange-striped barrels in which it was shipped, and was by far the most widely used of the so-called "Rainbow Herbicides".
Bovine somatotropin
Monsanto developed and sold recombinant bovine somatotropin (also known as rBST and rBGH), a synthetic hormone that increases milk production by 11–16% when injected into cows. In October 2008, Monsanto sold this business to Eli Lilly for $300 million plus additional considerations.
The use of rBST remains controversial with respect to its effects on cows and their milk.
In some markets, milk from cows that are not treated with rBST is sold with labels indicating that it is rBST-free: this milk has proved popular with consumers. In reaction to this, in early 2008 a pro-rBST advocacy group called "American Farmers for the Advancement and Conservation of Technology" (AFACT), made up of dairies and originally affiliated with Monsanto, formed and began lobbying to ban such labels. AFACT stated that "absence" labels can be misleading and imply that milk from cows treated with rBST is inferior.
Uncommercialized products
Monsanto also developed notable technologies that were not ultimately commercialized.
"Terminator" seeds
Genetic use restriction technology, colloquially known as "terminator technology", produces plants with sterile seeds. This trait would prevent the spread of those seeds into the wild. It also would prevent farmers from planting seeds they harvest, requiring them to purchase seed for every planting, allowing the company to enforce its licensing terms via technology. Farmers have been buying hybrid seeds for generations, instead of replanting their harvest, because second-generation hybrid seeds are inferior. Nevertheless, most seed companies contract only with farmers who agree not to plant harvested seeds.
Terminator technology has been developed by governmental labs, university researchers and companies. The technology has not been used commercially. Rumors that Monsanto and other companies intended to introduce terminator technology caused protests, for example in India.
In 1999, Monsanto pledged not to commercialize terminator technology. The Delta & Pine Land Company of Mississippi intended to commercialize the technology, but D&PL was acquired by Monsanto in 2007.
Monsanto "Terminator seeds" were never commercialized nor used in any farmer's field anywhere in the world. The patent expired in 2015.
GM wheat
Monsanto developed several strains of genetically modified wheat, including glyphosate-resistant strains, in the 1990s. Field tests were done in the United States between 1998 and 2005. As of 2017, no genetically modified wheat had been released for commercial use.
Legal affairs
Monsanto engaged in high-profile lawsuits, as both plaintiff and defendant. It defended lawsuits mostly over its products' health and environmental effects. Monsanto used the courts to enforce its patents, particularly in agricultural biotechnology, an approach similar to that of other companies in the field, such as Dupont Pioneer and Syngenta. Monsanto also became one of the most controversial large corporations in the world, over a range of issues involving its industrial and agricultural chemical products, and GM seed. In April 2018, just prior to Bayer's acquisition, Bayer indicated that improving Monsanto's reputation represented a major challenge. That June, Bayer announced it would drop the Monsanto name as part of a campaign to regain consumer trust.
Argentina
Argentina approved Roundup Ready soy in 1996. Between 1996 and 2008 soy production grew from 14 million acres to 42 million acres. The growth was driven by Argentine investors' interest in export markets. The consolidation led to a decrease in production of many staples such as milk, rice, maize, potatoes and lentils. As of 2004, about 150,000 small farmers had left the countryside; as of 2009, half of the small farmers in the Chaco region had done so.
The Guardian reported that a Monsanto representative had said, "any problems with GM soya were to do with use of the crop as a monoculture, not because it was GM. If you grow any crop to the exclusion of any other you are bound to get problems."
In 2005 and 2006, Monsanto attempted to enforce its patents on soymeal originating in Argentina and shipped to Spain by having Spanish customs officials seize the soymeal shipments. The seizures were part of a larger attempt by Monsanto to put pressure on the Argentinian government to enforce Monsanto's seed patents.
In 2013 environmentalist groups and local residents objected to a Monsanto corn seed conditioning facility in Malvinas Argentinas, Córdoba, citing the risk of environmental harm. Court rulings supported the project, but environmentalist groups organised demonstrations and opened an online petition for the subject to be decided in a popular referendum. The court rulings stipulated that while construction could continue, the facility could not begin operating until the environmental impact report required by law had been duly presented.
In 2016 Monsanto reached an agreement with Argentina's government on soybean seed royalty payments. Monsanto agreed to give the Argentine Seed Institute (Inase) oversight over crops grown from Monsanto's Intacta genetically modified soybean seeds. Before the agreement, Argentine farmers generally avoided royalties by using seeds from previous harvests or purchased from non-registered suppliers. Inase agreed to delegate testing to grain exchanges. About 6 million sample tests were to be conducted annually. Seeds that appear to be GMOs may be tested again using a polymerase chain reaction test.
Brazil
Brazil is the second largest producer of GMO soy. In 2003 GM soy was found in fields planted in the state of Rio Grande do Sul. This was controversial, and in response, the Landless Workers' Movement protested by invading and occupying several Monsanto farm plots used for research, training and seed-processing. In 2005 Brazil passed a law creating a regulatory pathway for GM crops.
China
Monsanto was criticized by Chinese economist Larry Lang for controlling the Chinese soybean market, and for trying to do the same to Chinese corn and cotton.
India
In the late 1990s and early 2000s, public attention was drawn to suicides by indebted farmers following crop failures. For example, in the early 2000s, farmers in Andhra Pradesh (AP) were in economic crisis due to high-interest rates and crop failures, leading to widespread unrest and farmer suicides. Monsanto was one focus of protests with respect to the price and yields of Bt seed. In 2005, the Genetic Engineering Approval Committee, the Indian regulatory authority, released a study on field tests of certain Bt cotton strains in AP and ruled that Monsanto could not market those strains in AP because of poor yields. At about the same time, the state agriculture minister barred the company from selling Bt cotton seed, because Monsanto refused a request by the state government to pay about Rs 4.5 crore (about US$1 million) to indebted farmers in some districts, and because the government blamed Monsanto's seeds for crop failures. The order was later lifted.
In 2006, AP tried to convince Monsanto to reduce the price of Bt seeds. Unsatisfied, the state filed several cases against Monsanto and its Mumbai-based licensee, Maharashtra Hybrid Seeds. Research by the International Food Policy Research Institute found no evidence of an increased suicide rate following the introduction of Bt cotton. The report stated that farmer suicides predated commercial introduction in 2002 (and unofficial introduction in 2001) and that such suicides had made up a fairly constant portion of the overall national suicide rate since 1997. The report concluded that while Bt cotton may have been a factor in specific suicides, the contribution was likely marginal compared to socio-economic factors. As of 2009, Bt cotton was planted in 87% of Indian cotton-growing land.
Critics including Vandana Shiva said that the crop failures could "often be traced to" Monsanto's Bt cotton, that the seeds increased farmer indebtedness, and argued that Monsanto misrepresented the profitability of its Bt cotton, causing losses leading to debt. In 2009, Shiva wrote that Indian farmers who had previously paid as little as ₹7 per kilogram of seed were now paying up to ₹17,000 per kilogram per year for Bt cotton seed. In 2012 the Indian Council of Agricultural Research (ICAR) and the Central Cotton Research Institute (CCRI) stated that for the first time farmer suicides could be linked to a decline in the performance of Bt cotton, and advised: "cotton farmers are in a deep crisis since shifting to Bt cotton. The spate of farmer suicides in 2011–12 has been particularly severe among Bt cotton farmers."
In 2004, in response to an order from the Bombay High Court, the Tata Institute produced a 2005 report on farmer suicides in Maharashtra. The survey cited "government apathy, the absence of a safety net for farmers, and lack of access to information related to agriculture as the chief causes for the desperate condition of farmers in the state."
Various studies identified the important factors as insufficient or risky credit systems, the difficulty of farming semi-arid regions, poor agricultural income, absence of alternative income opportunities, a downturn in the urban economy which forced non-farmers into farming and the absence of suitable counseling services. ICAR and CCRI stated that the cost of cotton cultivation had jumped as a consequence of rising pesticide costs, while total Bt cotton production in the five years from 2007 to 2012 had declined.
United Kingdom
Brofiscin Quarry was used as a waste site from about 1965 to 1972 and accepted waste from BP, Veolia and Monsanto. A 2005 report by Environment Agency Wales (EAW) found that the quarry contained up to 75 toxic substances, including heavy metals, Agent Orange and PCBs.
In February 2011, Monsanto agreed to help with the costs of remediation, but did not accept responsibility for the pollution. In 2011, EAW and the Rhondda Cynon Taf council announced that they had decided to place an engineered cap over the waste mass, and stated that the cost would be £1.5 million; previous estimates had been as high as £100 million.
United States
PCBs
In the late 1960s, the Monsanto plant in Sauget, Illinois, was the nation's largest producer of polychlorinated biphenyl (PCB) compounds, which remained in the water along Dead Creek there. An EPA official referred to Sauget as "one of the most polluted communities in the region" and "a soup of different chemicals".
In Anniston, Alabama, plaintiffs in a 2002 lawsuit provided documentation showing that the local Monsanto factory knowingly discharged both mercury and PCB-laden waste into local creeks for over 40 years. In 1969 Monsanto dumped 45 tons of PCBs into Snow Creek, a feeder for Choccolocco Creek, which supplies much of the area's drinking water, and buried millions of pounds of PCB in open-pit landfills located on hillsides above the plant and surrounding neighborhoods. In August 2003, Solutia and Monsanto agreed to pay plaintiffs $700 million to settle claims by over 20,000 Anniston residents.
In June 2020, Bayer proposed paying $650 million to settle local PCB lawsuits, and $170 million to the attorneys general of New Mexico, Washington and the District of Columbia. Monsanto was acknowledged at the time of the settlement to have ceased making PCBs in 1977, though State Impact of Pennsylvania reported that this did not stop PCBs from contaminating people many years later. State Impact of Pennsylvania stated "In 1979, the EPA banned the use of PCBs, but they still exist in some products produced before 1979. They persist in the environment because they bind to sediments and soils. High exposure to PCBs can cause birth defects, developmental delays, and liver changes." On November 25, 2020, however, U.S. District Judge Fernando M. Olguin rejected the proposed $650 million settlement from Bayer and allowed Monsanto-related lawsuits involving PCB to proceed.
In January 2025, Monsanto was ordered to pay $100 million to four people who say they were sickened by PCBs at a school in Monroe, Washington.
Polluted sites
As of November 2013, Monsanto was associated with nine "active" Superfund sites and 32 "archived" sites in the US, in the EPA's Superfund database. Monsanto was sued and settled multiple times for damaging the health of its employees or residents near its Superfund sites through pollution and poisoning.
GM wheat
In 2013 a Monsanto-developed transgenic cultivar of glyphosate-resistant wheat was discovered on a farm in Oregon, growing as a weed or "volunteer plant". The final Oregon field test had occurred in 2001. As of May 2013, the GMO seed source was unknown. Volunteer wheat from a former test field two miles away was tested and was not found to be glyphosate-tolerant. Monsanto faced penalties up to $1 million over potential violations of the Plant Protection Act. The discovery threatened world-leading US wheat exports, which totaled $8.1 billion in 2012. This wheat variety was rarely exported to Europe and was more likely destined for Asia. Monsanto said it had destroyed all the material it held after completing trials in 2004 and it was "mystified" by its appearance. On June 14, 2013, the USDA announced: "As of today, USDA has neither found nor been informed of anything that would indicate that this incident amounts to more than a single isolated incident in a single field on a single farm. All information collected so far shows no indication of the presence of GE wheat in commerce." As of August 30, 2013, while the source of the GM wheat remained unknown, Japan, South Korea and Taiwan had all resumed placing orders.
Cancer risks of Roundup
Monsanto has faced controversy in the United States over claims that its herbicide products might be carcinogens. There is limited evidence that human cancer risk might increase as a result of occupational exposure to large amounts of glyphosate, as in agricultural work, but no good evidence of such a risk from home use, such as in domestic gardening. The consensus among national pesticide regulatory agencies and scientific organizations is that labeled uses of glyphosate have demonstrated no evidence of human carcinogenicity. Organizations such as the World Health Organization (WHO), the Food and Agriculture Organization, European Commission, Canadian Pest Management Regulatory Agency, and the German Federal Institute for Risk Assessment have concluded that there is no evidence that glyphosate poses a carcinogenic or genotoxic risk to humans. However, one international scientific organization, the International Agency for Research on Cancer (IARC), affiliated with the WHO, has made claims of carcinogenicity in research reviews; in 2015 the IARC declared glyphosate "probably carcinogenic".
As of October 30, 2019, following the 2015 IARC report linking glyphosate to cancer in humans, there were 42,700 plaintiffs claiming that glyphosate herbicides had caused their cancer. Monsanto denies that Roundup is carcinogenic.
In March 2017, 40 plaintiffs filed a lawsuit in the Alameda County Superior Court, a branch of the California Superior Court, seeking damages caused by the company's glyphosate-based weed-killers, including Roundup, and demanding a jury trial. On August 10, 2018, Monsanto lost the first case to be decided. Dewayne Johnson, who has non-Hodgkin's lymphoma, was initially awarded $289 million in damages after a jury in San Francisco found that Monsanto had failed to adequately warn consumers of cancer risks posed by the herbicide. Pending appeal, the award was later reduced to $78.5 million. In November 2018, Monsanto appealed the judgement, asking an appellate court to consider a motion for a new trial. The appeal was decided in June 2020, upholding the judgement but further reducing the award to $21.5 million.
On March 27, 2019, Monsanto was found liable in a federal court for Edwin Hardeman's non-Hodgkin's lymphoma and ordered to pay $80 million in damages. A spokesperson for Bayer, by this time the parent company of Monsanto, said the company would appeal the verdict.
On May 13, 2019, a jury in California ordered Bayer to pay $2 billion in damages after finding that the company had failed to adequately inform consumers of the possible carcinogenicity of Roundup. On July 26, 2019, an Alameda County judge cut the settlement to $86.7 million, stating that the judgement by the jury exceeded legal precedent.
In June 2020, Bayer, which had acquired Monsanto, agreed to settle over a hundred thousand Roundup cancer lawsuits, agreeing to pay $8.8 to $9.6 billion to settle those claims, and $1.5 billion for any future claims. The settlement does not include three cases that have already gone to jury trials and are being appealed.
Dicamba lawsuits
In February 2020, following a lawsuit by Missouri peach farmer Bill Bader of Bader Farms alleging that dicamba used as a weed killer on adjacent crops had drifted in the wind and destroyed his peach orchards, a trial jury found that Monsanto and codefendant BASF had been negligent in the design of dicamba and had failed to warn farmers about the product, awarding $15 million for losses and $250 million in punitive damages. In June 2020, Bayer agreed to a settlement of up to $400 million for all 2015–2020 crop year dicamba claims, not including the $250 million judgement issued to Bader. On November 25, 2020, U.S. District Judge Stephen Limbaugh Jr. reduced the punitive damages in the Bader Farms case to $60 million.
Improper accounting for incentive rebates
From 2009 to 2011, Monsanto improperly accounted for incentive rebates. The practice inflated Monsanto's reported profit by $31 million over that period. Monsanto paid $80 million in penalties pursuant to a subsequent settlement with the US Securities and Exchange Commission. Monsanto had materially misstated its consolidated earnings in response to losing market share of Roundup to generic producers. Monsanto overhauled its internal controls; two of its top accountants were suspended, and Monsanto was required to hire, at its own expense, an independent ethics and compliance consultant for two years.
Alleged ghostwriting
A review of glyphosate's carcinogenic potential by four independent expert panels, with a comparison to the IARC assessment, was published in September 2016. Using emails released in August 2017 by plaintiffs' lawyers who were suing Monsanto, Bloomberg Businessweek reported that "Monsanto scientists were heavily involved in organizing, reviewing, and editing drafts submitted by the outside experts." A Monsanto spokesperson responded that Monsanto had provided only non-substantive cosmetic copyediting.
In 2017, The New York Times reported that a 2015 article attributed to researcher and columnist Henry I. Miller had been drafted by Monsanto. According to the report, Monsanto asked Miller to write an article rebutting the findings of the International Agency for Research on Cancer, and he indicated willingness to do it if he "could start from a high-quality draft". Forbes later removed Miller's blog from Forbes.com and ended their relationship.
Government relations
United States
Monsanto regularly lobbied the US government, with expenses reaching $8.8 million in 2008 and $6.3 million in 2011. $2 million was spent on matters concerning "Foreign Agriculture Biotechnology Laws, Regulations, and Trade". US diplomats in Europe at times also worked directly on Monsanto's behalf.
California's 2012 Proposition 37 would have mandated the disclosure of genetically modified crops used in the production of California food products. Monsanto spent $8.1 million opposing passage, making it the largest contributor against the initiative. The proposition was rejected by a 53.7% majority. Labeling is not required in the US.
In 2009 Michael R. Taylor, food safety expert and former Monsanto VP for Public Policy, became a senior advisor to the FDA Commissioner.
Monsanto is a member of the Washington D.C.–based Biotechnology Industry Organization (BIO), the world's largest biotechnology trade association, which provides "advocacy, business development, and communications services." Between 2010 and 2011 BIO spent a total of $16.43 million on lobbying.
The Monsanto Company Citizenship Fund, also known as the Monsanto Citizenship Fund, is a political action committee that donated over $10 million to various candidates from 2003 to 2013.
As of October 2013, Monsanto and DuPont Co. continued backing an anti-labeling campaign, spending roughly $18 million. The state of Washington, along with 26 other states, had made proposals to require GMO labeling, with Washington's measure going to a vote that November.
Revolving door
In the US regulatory environment, many individuals move back and forth between positions in the public and private sectors, including at Monsanto. Critics argued that the connections between the company and the government allowed Monsanto to obtain favorable regulations at the expense of consumer safety. Supporters of the practice point to the benefits of competent and experienced individuals in both sectors and to the importance of appropriately managing potential conflicts of interest. The list of such people includes:
Linda J. Fisher—EPA assistant administrator, then Monsanto VP from 1995 to 2000, then EPA deputy administrator.
Michael A. Friedman, MD—FDA deputy commissioner.
Earle H. Harbison Jr.—Central Intelligence Agency Deputy Director, then Monsanto President, Chief Operating Officer, and Director from 1986 to 1993.
Robert Holifield—chief of staff of Senate Agriculture Committee, then partner in Lincoln Policy Group.
Mickey Kantor—US trade representative, then Monsanto board member.
Blanche Lincoln—US Senator and chair of the Agriculture Committee, then founder of the lobbying firm Lincoln Policy Group.
William D. Ruckelshaus—EPA Administrator, then acting Director of the Federal Bureau of Investigation, and then Deputy Attorney General of the United States, then EPA administrator, then Monsanto Board member.
Donald Rumsfeld—Secretary of Defense, and previously chief executive of Searle, later a Monsanto subsidiary, for eight years.
Michael R. Taylor—assistant to the FDA commissioner, then attorney for King & Spalding, then FDA deputy commissioner for policy on food safety between 1991 and 1994. He was cleared of conflict of interest accusations. Then he became Monsanto's VP for Public Policy, becoming Senior Advisor to the FDA Commissioner for the Obama administration.
Clarence Thomas—Supreme Court Justice who worked as an attorney for Monsanto in the 1970s, then wrote the majority opinion in J. E. M. Ag Supply, Inc. v. Pioneer Hi-Bred International, Inc. finding that "newly developed plant breeds are patentable under the general utility patent laws of the United States."
Ann Veneman—Secretary of the Department of Agriculture, and member of the board of directors of Calgene
United Kingdom
During the late 1990s, Monsanto lobbied to raise permitted glyphosate levels in soybeans and was successful in convincing Codex Alimentarius and both the UK and US governments to lift the levels 200-fold, to 20 milligrams per kilogram of soya. When asked how negotiations with Monsanto were conducted, Lord Donoughue, then the Labour Party Agriculture minister in the House of Lords, stated that all information relating to the matter would be "kept secret". During the 24 months prior to the 1997 British election, Monsanto representatives had 22 meetings at the departments of Agriculture and the Environment. Stanley Greenberg, an election advisor to Tony Blair, later worked as a Monsanto consultant. Former Labour spokesperson David Hill became Monsanto's media adviser at the lobbying firm Bell Pottinger. The Labour government was challenged in Parliament about "trips, facilities, gifts and other offerings of financial value provided by Monsanto to civil servants", but acknowledged only that the Department of Trade and Industry had held two working lunches with Monsanto. Peter Luff, then a Conservative Party MP and Chairman of the Agriculture Select Committee, received up to £10,000 a year from Bell Pottinger on behalf of Monsanto.
European Union
In January 2011, WikiLeaks documents suggested that US diplomats in Europe responded to a request for help from the Spanish government. One report stated, "In addition, the cables show US diplomats working directly for GM companies such as Monsanto. 'In response to recent urgent requests by [Spanish rural affairs ministry] state secretary Josep Puxeu and Monsanto, post requests renewed US government support of Spain's science-based agricultural biotechnology position through high-level US government intervention.'" The leaked documents showed that in 2009, when the Spanish government's policy approving MON810 was under pressure from EU interests, Monsanto's Director for Biotechnology for Spain and Portugal requested that the US government support Spain on the matter. The leaks indicated that Spain and the US had worked closely together to "persuade the EU not to strengthen biotechnology laws". Spain was viewed as a key GMO supporter and a leading indicator of support across the continent. The leaks also revealed that in response to an attempt by France to ban MON810 in late 2007, then-US ambassador to France, Craig Roberts Stapleton, asked Washington to "calibrate a targeted retaliation list that [would cause] some pain across the EU", targeting countries that did not support the use of GM crops. This activity transpired after the US, Australia, Argentina, Brazil, Canada, India, Mexico and New Zealand had brought an action against Europe via the World Trade Organization with respect to the EU's banning of GMOs; in 2006, the WTO had ruled against the EU.
Monsanto was a member of EuropaBio, the leading biotechnology trade group in Europe. One of EuropaBio's initiatives is "Transforming Europe's position on GM food". It found "an urgent need to reshape the terms of the debate about GM in Europe". EuropaBio proposed the recruitment of high-profile "ambassadors" to lobby EU officials.
In September 2017 Monsanto lobbyists were banned from the European Parliament after Monsanto refused to attend a parliamentary hearing into allegations of regulatory interference.
Haiti
After the 2010 Haiti earthquake, Monsanto donated $255,000 for disaster relief and 60,000 seed sacks (475 tons) of hybrid (non-GM) corn and vegetable seeds worth $4 million. However, a Catholic Relief Services (CRS) rapid assessment of seed supply and demand for the five most common food security crops found that the Haitians had enough seed and recommended that imported seeds be introduced only on a small scale. Emmanuel Prophete, head of Haiti's Ministry of Agriculture's Service National Semencier (SNS), stated that SNS was not opposed to the hybrid maize seeds because they at least double yields. Louise Sperling, Principal Researcher at the International Center for Tropical Agriculture (CIAT) told HGW that she was not opposed to hybrids, but noted that most hybrids required extra water and better soils and that most of Haiti was not appropriate for hybrids.
Activists objected that some of the seeds were coated with the fungicides Maxim or thiram. In the United States, pesticides containing thiram are banned in home garden products because most home gardeners do not have adequate protection. Activists wrote that the coated seeds were handled in a dangerous manner by the recipients.
The donated seeds were sold at a reduced price in local markets. However, farmers feared that they were being given seeds that would "threaten local varieties".
Public relations
Monsanto has engaged in various public relations campaigns to improve its image and public perception of some of its products. These include developing a relationship with scientist Richard Doll with respect to Agent Orange. Other campaigns include the joint funding with other biotech companies for the website GMO Answers.
Sponsorships
Disneyland attractions, namely:
Hall of Chemistry (1955 to 1966)
Monsanto House of the Future (from 1957 to 1967)
Fashions and Fabrics through the Years (from 1965 to 1966)
Adventure Thru Inner Space (from 1967 to 1986)
Monsanto donated $10 million to the Missouri Botanical Garden in St. Louis in the 1970s. The garden named its 1998 plant science facility the 'Monsanto Center', which was renamed the 'Bayer Center' in 2018.
Field Museum
Gregor Mendel exhibit and "Underground Adventures" since 2011 "about the importance and fragility of the ecosystem within soil".
"Monsanto Environmental Education Initiative", led by Gregory M. Mueller
Chair of the Department of Botany and Associate Curator of Mycology
Staff of the Field Museum, such as Curator Mark W. Westneat, attended Monsanto meetings
Monsanto Insectarium at the St. Louis Zoo in St. Louis, Missouri, renamed the Bayer Insectarium in 2018.
University relationships
Monsanto was a major funder of science research at Washington University in St. Louis for many years. This research was highlighted by the Washington University/Monsanto Biomedical Research Agreement, which brought more than $100 million of research funding to the university. Washington University built the Monsanto Laboratory of the Life Sciences in 1965. In 2015, Monsanto gave Washington University's Institute for School Partnership a $1.94 million grant to help better teach students in STEM fields.
Awards
In 2009 Monsanto was chosen as Forbes magazine's company of the year. In 2010 the Swiss research firm Covalence rated Monsanto the least ethical of 581 multinational corporations, based on its EthicalQuote reputation tracking index, which "aggregates thousands of positive and negative news items published by the media, companies, and stakeholders" without attempting to validate sources. The journal Science ranked Monsanto in its Top 20 Employers list between 2011 and 2014; in 2012, respondents described the company as an "innovative leader in the industry" that "makes changes needed" and "does important quality research". Monsanto executive Robert Fraley won the 2013 World Food Prize for "breakthrough achievements in founding, developing, and applying modern agricultural biotechnology".
Documentaries
The Corporation
Bitter Seeds
Food, Inc.
The Future of Food
The World According to Monsanto
See also
Biological patents in the United States
DuPont Pioneer
Genetically modified food controversies
Industrial Bio-Test Laboratories
Temporal analysis of products
References
Bibliography
Forrestal, Dan J. (1977). Faith, Hope & $5000: The Story of Monsanto. Simon & Schuster.
Pechlaner, Gabriela (2012). Corporate Crops: Biotechnology, Agriculture, and the Struggle for Control. University of Texas Press.
Robin, Marie-Monique (2009). The World According to Monsanto: Pollution, Corruption, and the Control of the World's Food Supply. New Press.
Spears, Ellen Griffith (2014). Baptized in PCBs: Race, Pollution, and Justice in an All-American Town. University of North Carolina Press.
Shiva, Vandana (2000). Stolen Harvest: The Hijacking of the Global Food Supply. South End Press.
External links
1901 establishments in Missouri
2018 disestablishments in Missouri
2018 mergers and acquisitions
Accounting scandals
Agriculture companies disestablished in 2018
Agriculture companies established in 1901
American companies disestablished in 2018
American companies established in 1901
American subsidiaries of foreign companies
Bayer
Biotechnology companies disestablished in 2018
Biotechnology companies established in 1901
Biotechnology companies of the United States
Buildings and structures in St. Louis County, Missouri
Chemical companies disestablished in 2018
Chemical companies established in 1901
Chemical companies of the United States
Companies based in St. Louis County, Missouri
Companies formerly listed on the New York Stock Exchange
Corporate spin-offs
Defunct companies based in Missouri
Genetic engineering and agriculture
Life sciences industry
Seed companies | Monsanto | [
"Engineering",
"Biology"
] | 11,104 | [
"Genetic engineering and agriculture",
"Life sciences industry",
"Genetic engineering"
] |
94,102 | https://en.wikipedia.org/wiki/Solid%20angle | In geometry, a solid angle (symbol: Ω) is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point.
The point from which the object is viewed is called the apex of the solid angle, and the object is said to subtend its solid angle at that point.
In the International System of Units (SI), a solid angle is expressed in a dimensionless unit called a steradian (symbol: sr), which is equal to one square radian, sr = rad². One steradian corresponds to one unit of area (of any shape) on the unit sphere surrounding the apex, so an object that blocks all rays from the apex would cover a number of steradians equal to the total surface area of the unit sphere, 4π. Solid angles can also be measured in squares of angular measures such as degrees, minutes, and seconds.
A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Indeed, as viewed from any point on Earth, both objects have approximately the same solid angle (and therefore apparent size). This is evident during a solar eclipse.
Definition and properties
The magnitude of an object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the apex, that the object covers. Giving the area of a segment of a unit sphere in steradians is analogous to giving the length of an arc of a unit circle in radians. Just as the magnitude of a plane angle in radians at the vertex of a circular sector is the ratio of the length of its arc to its radius, the magnitude of a solid angle in steradians is the ratio of the area covered on a sphere by an object to the square of the radius of the sphere. The formula for the magnitude of the solid angle in steradians is
Ω = A / r²,
where A is the area (of any shape) on the surface of the sphere and r is the radius of the sphere.
Solid angles are often used in astronomy, physics, and in particular astrophysics. The solid angle of an object that is very far away is roughly proportional to the ratio of area to squared distance. Here "area" means the area of the object when projected along the viewing direction.
The solid angle of a sphere measured from any point in its interior is 4π sr. The solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. The solid angle subtended at the corner of a cube (an octant) or spanned by a spherical octant is π/2 sr, one-eighth of the solid angle of a sphere.
Solid angles can also be measured in square degrees (1 sr = (180/π)² ≈ 3282.8 square degrees), in square arc-minutes and square arc-seconds, or in fractions of the sphere (1 sr = 1/(4π) ≈ 0.07958 of a sphere), also known as spat (1 sp = 4π sr).
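As a quick numerical aside (not part of the original article), the unit relations above can be checked with a minimal Python sketch; the function names are invented for the example:

```python
# Minimal sketch of solid-angle unit conversions, assuming the relations
# stated above: 1 sr = (180/pi)^2 square degrees, a full sphere = 4*pi sr
# (one spat), and "fraction of sphere" = omega / (4*pi).
import math

SQ_DEG_PER_SR = (180.0 / math.pi) ** 2   # ~3282.806 square degrees per sr
SPHERE_SR = 4.0 * math.pi                # full sphere, also one spat

def sr_to_square_degrees(omega_sr: float) -> float:
    return omega_sr * SQ_DEG_PER_SR

def sr_to_sphere_fraction(omega_sr: float) -> float:
    return omega_sr / SPHERE_SR

if __name__ == "__main__":
    print(sr_to_square_degrees(1.0))         # ~3282.81
    print(sr_to_sphere_fraction(1.0))        # ~0.07958
    print(sr_to_sphere_fraction(SPHERE_SR))  # 1.0 (the whole sphere)
```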
In spherical coordinates there is a formula for the differential,
dΩ = sin θ dθ dφ,
where θ is the colatitude (angle from the North Pole) and φ is the longitude.
The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral:
Ω = ∬_S (r̂ · n̂ / r²) dS,
where r̂ is the unit vector corresponding to r, the position vector of an infinitesimal area of surface dS with respect to point P, and where n̂ represents the unit normal vector to dS. Even if the projection of the surface onto the unit sphere is not one-to-one, the multiple folds are correctly accounted for according to the surface orientation described by the sign of the scalar product r̂ · n̂.
Thus one can approximate the solid angle subtended by a small facet having flat surface area dS, orientation n̂, and distance r from the viewer as:
dΩ = 4π (dS / A) (r̂ · n̂),
where the surface area of a sphere is A = 4πr².
Practical applications
Defining luminous intensity and luminance, and the correspondent radiometric quantities radiant intensity and radiance
Calculating spherical excess of a spherical triangle
The calculation of potentials by using the boundary element method (BEM)
Evaluating the size of ligands in metal complexes, see ligand cone angle
Calculating the electric field and magnetic field strength around charge distributions
Deriving Gauss's Law
Calculating emissive power and irradiation in heat transfer
Calculating cross sections in Rutherford scattering
Calculating cross sections in Raman scattering
The solid angle of the acceptance cone of the optical fiber
The computation of nodal densities in meshes.
Solid angles for common objects
Cone, spherical cap, hemisphere
The solid angle of a cone with its apex at the apex of the solid angle, and with apex angle 2θ, is the area of a spherical cap on a unit sphere
Ω = 2π (1 − cos θ).
For small θ such that cos θ ≈ 1 − θ²/2 this reduces to Ω ≈ πθ², the area of a circle.
The above is found by computing the following double integral using the unit surface element in spherical coordinates:
Ω = ∫₀^{2π} ∫₀^θ sin θ′ dθ′ dφ = 2π ∫₀^θ sin θ′ dθ′ = 2π (1 − cos θ).
This formula can also be derived without the use of calculus.
Over 2200 years ago Archimedes proved that the surface area of a spherical cap is always equal to the area of a circle whose radius equals the distance from the rim of the spherical cap to the point where the cap's axis of symmetry intersects the cap. In the above coloured diagram this radius is given as 2r sin(θ/2).
In the adjacent black & white diagram this radius is given as "t".
Hence for a unit sphere the solid angle of the spherical cap is given as
Ω = 4π sin²(θ/2) = 2π (1 − cos θ).
When θ = π/2, the spherical cap becomes a hemisphere having a solid angle 2π.
The solid angle of the complement of the cone is
4π − 2π (1 − cos θ) = 2π (1 + cos θ).
This is also the solid angle of the part of the celestial sphere that an astronomical observer positioned at latitude θ can see as the Earth rotates. At the equator all of the celestial sphere is visible; at either pole, only one half.
The solid angle subtended by a segment of a spherical cap cut by a plane at angle γ from the cone's axis and passing through the cone's apex can be calculated by the formula
Ω = 2 [ arccos( sin γ / sin θ ) − cos θ · arccos( tan γ / tan θ ) ].
For example, if γ = −θ, then the formula reduces to the spherical cap formula above: the first term becomes π, and the second becomes π cos θ.
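A minimal Python sketch (not part of the original article, with invented function names) of the cone and spherical-cap results above:

```python
# Sketch of the spherical-cap formulas: a cone of half-angle theta subtends
# 2*pi*(1 - cos(theta)) sr, its complement 2*pi*(1 + cos(theta)) sr, and small
# angles approach the flat-circle area pi*theta**2.
import math

def cone_solid_angle(theta: float) -> float:
    """Solid angle (sr) of a cone with apex half-angle theta (radians)."""
    return 2.0 * math.pi * (1.0 - math.cos(theta))

def cone_complement_solid_angle(theta: float) -> float:
    return 4.0 * math.pi - cone_solid_angle(theta)

if __name__ == "__main__":
    print(cone_solid_angle(math.pi / 2))                 # hemisphere: 2*pi
    print(cone_complement_solid_angle(math.pi / 2))      # the other 2*pi
    theta = 1e-3
    print(cone_solid_angle(theta), math.pi * theta ** 2) # nearly equal
```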
Tetrahedron
Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where a, b, c are the vector positions of the vertices A, B and C. Define the vertex angle θ_a to be the angle BOC and define θ_b, θ_c correspondingly. Let φ_ab be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC and define φ_ac, φ_bc correspondingly. The solid angle Ω subtended by the triangular surface ABC is given by
Ω = (φ_ab + φ_bc + φ_ac) − π.
This follows from the theory of spherical excess and it leads to the fact that there is an analogous theorem to the theorem that "The sum of internal angles of a planar triangle is equal to π", for the sum of the four internal solid angles of a tetrahedron as follows:
Σ_{i=1}^{4} Ω_i = 2 Σ_{i=1}^{6} φ_i − 4π,
where φ_i ranges over all six of the dihedral angles between any two planes that contain the tetrahedral faces OAB, OAC, OBC and ABC.
A useful formula for calculating the solid angle Ω of the tetrahedron at the origin O that is purely a function of the vertex angles θ_a, θ_b, θ_c is given by L'Huilier's theorem as
tan(Ω/4) = √( tan(θ_s/2) · tan((θ_s − θ_a)/2) · tan((θ_s − θ_b)/2) · tan((θ_s − θ_c)/2) ),
where
θ_s = (θ_a + θ_b + θ_c) / 2.
Another interesting formula involves expressing the vertices as vectors in 3-dimensional space. Let a, b, c be the vector positions of the vertices A, B and C, and let a = |a|, b = |b|, c = |c| be the magnitude of each vector (the origin-point distance). The solid angle Ω subtended by the triangular surface ABC is:
tan(Ω/2) = [a b c] / ( abc + (a·b)c + (a·c)b + (b·c)a ),
where
[a b c] = a · (b × c)
denotes the scalar triple product of the three vectors and a·b denotes the scalar product.
Care must be taken here to avoid negative or incorrect solid angles. One source of potential errors is that the scalar triple product can be negative if a, b, c have the wrong winding. Computing the absolute value is a sufficient solution since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. In this case the arctangent returns a negative value that must be increased by π.
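The winding caveats above are exactly where implementations go wrong, so here is a minimal Python sketch (not part of the original article; helper names invented) of the triple-product formula that uses atan2 to land in the correct quadrant automatically:

```python
# Sketch of the tetrahedron solid-angle formula
# tan(omega/2) = |[a b c]| / (abc + (a.b)c + (a.c)b + (b.c)a),
# with atan2 handling the case of a negative divisor.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def triple(a, b, c):
    # scalar triple product a . (b x c)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def tetrahedron_solid_angle(a, b, c):
    """Solid angle at the origin subtended by triangle ABC."""
    la, lb, lc = (math.sqrt(dot(v, v)) for v in (a, b, c))
    numerator = abs(triple(a, b, c))       # absolute value fixes the winding
    denominator = (la * lb * lc + dot(a, b) * lc
                   + dot(a, c) * lb + dot(b, c) * la)
    return 2.0 * math.atan2(numerator, denominator)

if __name__ == "__main__":
    # One octant of the sphere: should print pi/2 ~ 1.5708
    print(tetrahedron_solid_angle((1, 0, 0), (0, 1, 0), (0, 0, 1)))
```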
Pyramid
The solid angle of a four-sided right rectangular pyramid with apex angles a and b (dihedral angles measured to the opposite side faces of the pyramid) is
Ω = 4 arcsin( sin(a/2) sin(b/2) ).
If both the side lengths (α and β) of the base of the pyramid and the distance (d) from the center of the base rectangle to the apex of the pyramid (the center of the sphere) are known, then the above equation can be manipulated to give
Ω = 4 arctan( αβ / (2d √(4d² + α² + β²)) ).
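As a sanity check (not part of the original article), the following minimal Python sketch, with invented function names, verifies numerically that the apex-angle form and the side-length form above agree:

```python
# Sketch comparing the two equivalent rectangular-pyramid formulas:
# omega = 4*asin(sin(a/2)*sin(b/2)) and the side-length/distance form.
import math

def pyramid_solid_angle_apex(a: float, b: float) -> float:
    """Apex angles a, b (radians), measured to opposite side faces."""
    return 4.0 * math.asin(math.sin(a / 2) * math.sin(b / 2))

def pyramid_solid_angle_sides(alpha: float, beta: float, d: float) -> float:
    """Base side lengths alpha, beta; apex at distance d above the base center."""
    return 4.0 * math.atan(
        alpha * beta / (2.0 * d * math.sqrt(4 * d * d + alpha ** 2 + beta ** 2)))

if __name__ == "__main__":
    alpha, beta, d = 2.0, 3.0, 1.5
    a = 2.0 * math.atan(alpha / (2.0 * d))    # recover the apex angles
    b = 2.0 * math.atan(beta / (2.0 * d))
    print(pyramid_solid_angle_apex(a, b))     # ~1.6125
    print(pyramid_solid_angle_sides(alpha, beta, d))  # same value
```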
The solid angle of a right n-gonal pyramid, where the pyramid base is a regular n-sided polygon of circumradius r, with a pyramid height h is
Ω = 2π − 2n arctan( tan(π/n) / √(1 + r²/h²) ).
The solid angle of an arbitrary pyramid with an n-sided base defined by the sequence of unit vectors representing edges s₁, s₂, …, s_n can be efficiently computed by:
Ω = 2π − arg ∏_{j=1}^{n} [ (s_{j−1} · s_j)(s_j · s_{j+1}) − (s_{j−1} · s_{j+1}) + i [s_{j−1} s_j s_{j+1}] ],
where parentheses (· ·) denote a scalar product and square brackets [· · ·] a scalar triple product, and i is an imaginary unit. Indices are cycled: s₀ = s_n and s_{n+1} = s₁. The complex products add the phase associated with each vertex angle of the polygon. However, a multiple of 2π
is lost in the branch cut of arg and must be kept track of separately. Also, the running product of complex phases must be scaled occasionally to avoid underflow in the limit of nearly parallel segments.
Latitude-longitude rectangle
The solid angle of a latitude-longitude rectangle on a globe is
Ω = ( sin φ_N − sin φ_S ) ( θ_E − θ_W ),
where φ_N and φ_S are north and south lines of latitude (measured from the equator in radians with angle increasing northward), and θ_E and θ_W are east and west lines of longitude (where the angle in radians increases eastward). Mathematically, this represents an arc of angle φ_N − φ_S swept around a sphere by θ_E − θ_W radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of a sphere, 4π.
A latitude-longitude rectangle should not be confused with the solid angle of a rectangular pyramid. All four sides of a rectangular pyramid intersect the sphere's surface in great circle arcs. With a latitude-longitude rectangle, only lines of longitude are great circle arcs; lines of latitude are not.
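A minimal Python sketch (not part of the original article; the function name is invented) of the latitude-longitude formula, checked against the full sphere:

```python
# Sketch of omega = (sin(phi_n) - sin(phi_s)) * (theta_e - theta_w),
# with all angles in radians.
import math

def latlon_rect_solid_angle(phi_s, phi_n, theta_w, theta_e):
    return (math.sin(phi_n) - math.sin(phi_s)) * (theta_e - theta_w)

if __name__ == "__main__":
    # The whole globe: latitude -pi/2..pi/2, longitude spanning 2*pi
    full = latlon_rect_solid_angle(-math.pi / 2, math.pi / 2, 0.0, 2 * math.pi)
    print(full, 4 * math.pi)   # both ~12.566
```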
Celestial objects
By using the definition of angular diameter, the formula for the solid angle of a celestial object can be defined in terms of the radius of the object, R, and the distance from the observer to the object, d:
Ω = 2π ( 1 − √(d² − R²) / d ).
By inputting the appropriate average values for the Sun and the Moon (in relation to Earth), the average solid angle of the Sun is 6.794×10⁻⁵ steradians and the average solid angle of the Moon is 6.418×10⁻⁵ steradians. In terms of the total celestial sphere, the Sun and the Moon subtend average fractional areas of 0.0005406% (5.406×10⁻⁶) and 0.0005107% (5.107×10⁻⁶), respectively. As these solid angles are about the same size, the Moon can cause both total and annular solar eclipses depending on the distance between the Earth and the Moon during the eclipse.
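A minimal Python sketch (not part of the original article) of the formula above, evaluated with approximate mean values for the Moon (R ≈ 1737 km, d ≈ 384,400 km) and the Sun (R ≈ 696,000 km, d ≈ 1.496×10⁸ km); these inputs are assumptions supplied for the example:

```python
# Sketch of omega = 2*pi*(1 - sqrt(d**2 - R**2)/d) for a spherical body of
# radius R at distance d, reproducing the ~6.4e-5 sr / ~6.8e-5 sr values.
import math

def celestial_solid_angle(radius_km: float, distance_km: float) -> float:
    return 2.0 * math.pi * (
        1.0 - math.sqrt(distance_km ** 2 - radius_km ** 2) / distance_km)

if __name__ == "__main__":
    for name, r, d in [("Moon", 1737.4, 384_400.0), ("Sun", 696_000.0, 1.496e8)]:
        omega = celestial_solid_angle(r, d)
        frac = omega / (4.0 * math.pi)          # fraction of the whole sky
        print(f"{name}: {omega:.3e} sr = {100 * frac:.7f} % of the sky")
```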
Solid angles in arbitrary dimensions
The solid angle subtended by the complete (d−1)-dimensional spherical surface of the unit sphere in d-dimensional Euclidean space can be defined in any number of dimensions d. One often needs this solid angle factor in calculations with spherical symmetry. It is given by the formula
Ω_d = 2π^(d/2) / Γ(d/2),
where Γ is the gamma function. When d is an integer, the gamma function can be computed explicitly.
This gives the expected results of 4π steradians for the 3D sphere bounded by a surface of area 4π and 2π radians for the 2D circle bounded by a circumference of length 2π. It also gives the slightly less obvious 2 for the 1D case, in which the origin-centered 1D "sphere" is the interval [−1, 1] and this is bounded by two limiting points.
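A minimal Python sketch (not part of the original article; the function name is invented) of the general-dimension formula, reproducing the three special cases just discussed:

```python
# Sketch of omega_d = 2 * pi**(d/2) / Gamma(d/2) for the full (d-1)-sphere.
import math

def full_solid_angle(d: int) -> float:
    """Solid angle of the complete (d-1)-sphere in d-dimensional space."""
    return 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)

if __name__ == "__main__":
    print(full_solid_angle(1))   # 2      (the two endpoints of an interval)
    print(full_solid_angle(2))   # 2*pi   (full circle, in radians)
    print(full_solid_angle(3))   # 4*pi   (full sphere, in steradians)
```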
The counterpart to the vector formula in arbitrary dimension was derived by Aomoto
and independently by Ribando. It expresses them as an infinite multivariate Taylor series:
Given unit vectors defining the angle, let denote the matrix formed by combining them so the th column is , and . The variables form a multivariable . For a "congruent" integer multiexponent define . Note that here = non-negative integers, or natural numbers beginning with 0. The notation for means the variable , similarly for the exponents .
Hence, the term means the sum over all terms in in which l appears as either the first or second index.
Where this series converges, it converges to the solid angle defined by the vectors.
References
Further reading
Erratum ibid. vol 50 (2011) page 059801.
External links
Arthur P. Norton, A Star Atlas, Gall and Inglis, Edinburgh, 1969.
M. G. Kendall, A Course in the Geometry of N Dimensions, No. 8 of Griffin's Statistical Monographs & Courses, ed. M. G. Kendall, Charles Griffin & Co. Ltd, London, 1961
Angle
Euclidean solid geometry | Solid angle | [
"Physics"
] | 2,626 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Euclidean solid geometry",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Angle"
] |
94,158 | https://en.wikipedia.org/wiki/Lagrange%20inversion%20theorem | In mathematical analysis, the Lagrange inversion theorem, also known as the Lagrange–Bürmann formula, gives the Taylor series expansion of the inverse function of an analytic function. Lagrange inversion is a special case of the inverse function theorem.
Statement
Suppose z is defined as a function of w by an equation of the form
z = f(w),
where f is analytic at a point a and f′(a) ≠ 0. Then it is possible to invert or solve the equation for w, expressing it in the form w = g(z) given by a power series
g(z) = a + Σ_{n=1}^∞ g_n (z − f(a))^n / n!,
where
g_n = lim_{w→a} d^{n−1}/dw^{n−1} [ ( (w − a) / (f(w) − f(a)) )^n ].
If the assertions about analyticity are omitted, the formula is also valid for formal power series and can be generalized in various ways: It can be formulated for functions of several variables; it can be extended to provide a ready formula for for any analytic function ; and it can be generalized to the case where the inverse is a multivalued function.
The theorem was proved by Lagrange and generalized by Hans Heinrich Bürmann, both in the late 18th century. There is a straightforward derivation using complex analysis and contour integration; the complex formal power series version is a consequence of knowing the formula for polynomials, so the theory of analytic functions may be applied. Actually, the machinery from analytic function theory enters only in a formal way in this proof, in that what is really needed is some property of the formal residue, and a more direct formal proof is available. In fact, the Lagrange inversion theorem has a number of additional rather different proofs, including ones using tree-counting arguments or induction.
If f is a formal power series, then the above formula does not give the coefficients of the compositional inverse series directly in terms of the coefficients of the series f. If one can express the functions f and g in formal power series as
f(w) = Σ_{k=0}^∞ f_k w^k / k!   and   g(z) = Σ_{k=0}^∞ g_k z^k / k!
with f_0 = 0 and f_1 ≠ 0, then an explicit form of the inverse coefficients can be given in terms of Bell polynomials:
g_n = (1 / f_1^n) Σ_{k=1}^{n−1} (−1)^k n^{(k)} B_{n−1,k}( f̂_1, f̂_2, …, f̂_{n−k} ),  n ≥ 2,
where
f̂_k = f_{k+1} / ( (k+1) f_1 ),  g_1 = 1 / f_1, and
n^{(k)} = n (n+1) ⋯ (n+k−1)
is the rising factorial.
When , the last formula can be interpreted in terms of the faces of associahedra
where for each face of the associahedron
Example
For instance, the algebraic equation of degree p
x^p − x + z = 0
can be solved for x by means of the Lagrange inversion formula for the function f(x) = x − x^p, resulting in a formal series solution
x = Σ_{k=0}^∞ binom(pk, k) z^{(p−1)k + 1} / ( (p−1)k + 1 ).
By convergence tests, this series is in fact convergent for |z| ≤ (p−1) p^{−p/(p−1)}, which is also the largest disk in which a local inverse to f can be defined.
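As an illustration (not part of the original article), the series can be checked numerically for p = 2, where the quadratic formula gives the small root in closed form; the function names below are invented:

```python
# Sketch checking the formal-series solution of x**p - x + z = 0 for p = 2,
# where the small root is x = (1 - sqrt(1 - 4*z)) / 2.
import math

def series_root(z: float, p: int, terms: int = 40) -> float:
    """Partial sum of sum_k C(p*k, k) * z**((p-1)*k + 1) / ((p-1)*k + 1)."""
    return sum(math.comb(p * k, k) * z ** ((p - 1) * k + 1) / ((p - 1) * k + 1)
               for k in range(terms))

if __name__ == "__main__":
    z, p = 0.1, 2                      # inside |z| <= (p-1) * p**(-p/(p-1)) = 0.25
    approx = series_root(z, p)
    exact = (1.0 - math.sqrt(1.0 - 4.0 * z)) / 2.0
    print(approx, exact)               # agree to many digits
    print(approx ** p - approx + z)    # ~0: it really solves the equation
```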
Applications
Lagrange–Bürmann formula
There is a special case of the Lagrange inversion theorem that is used in combinatorics and applies when f(w) = w/φ(w) for some analytic φ(w) with φ(0) ≠ 0. Take a = 0 to obtain f(a) = f(0) = 0. Then for the inverse g(z) (satisfying f(g(z)) ≡ z), we have
g(z) = Σ_{n=1}^∞ ( (1/n) [w^{n−1}] φ(w)^n ) z^n,
which can be written alternatively as
[z^n] g(z) = (1/n) [w^{n−1}] φ(w)^n,
where [w^r] is an operator which extracts the coefficient of w^r in the Taylor series of a function of w.
A generalization of the formula is known as the Lagrange–Bürmann formula:
[z^n] H(g(z)) = (1/n) [w^{n−1}] ( H′(w) φ(w)^n ),
where H is an arbitrary analytic function.
Sometimes, the derivative H′(w) can be quite complicated. A simpler version of the formula replaces H′(w) with H(w) (1 − φ′(w) w / φ(w)) to get
[z^n] H(g(z)) = [w^n] H(w) φ(w)^{n−1} ( φ(w) − w φ′(w) ),
which involves φ′(w) instead of H′(w).
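As an illustration (not part of the original article), the coefficient-extraction rule above can be verified with a few lines of truncated power-series arithmetic; the helper names are invented, and φ(w) = e^w (so f(w) = w e^{−w}, whose inverse is the tree function Σ n^{n−1} z^n/n!) is an assumed test case:

```python
# Sketch of [z**n] g(z) = (1/n) * [w**(n-1)] phi(w)**n, with truncated power
# series represented as plain coefficient lists.
import math

def mul(a, b, order):
    """Product of two truncated series up to (and excluding) w**order."""
    out = [0.0] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                out[i + j] += ai * bj
    return out

def inverse_coeffs(phi, order):
    """Coefficients [z^1]g, ..., [z^(order-1)]g of the inverse of f = w/phi."""
    coeffs, power = [], [1.0] + [0.0] * (order - 1)   # phi**0 = 1
    for n in range(1, order):
        power = mul(power, phi, order)                # now phi**n
        coeffs.append(power[n - 1] / n)               # (1/n) [w^(n-1)] phi^n
    return coeffs

if __name__ == "__main__":
    order = 8
    phi = [1.0 / math.factorial(k) for k in range(order)]          # exp(w)
    print(inverse_coeffs(phi, order))
    print([n ** (n - 1) / math.factorial(n) for n in range(1, order)])  # match
```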
Lambert W function
The Lambert W function is the function W(z) that is implicitly defined by the equation
W(z) e^{W(z)} = z.
We may use the theorem to compute the Taylor series of W(z) at z = 0. We take f(w) = w e^w and a = 0. Recognizing that
d^n/dw^n e^{αw} = α^n e^{αw},
this gives
W(z) = Σ_{n=1}^∞ (−n)^{n−1} z^n / n! = z − z² + (3/2) z³ − (8/3) z⁴ + ⋯.
The radius of convergence of this series is e^{−1} (giving the principal branch of the Lambert function).
A series that converges for larger values of z can also be derived by series inversion. The function f(z) = W(e^z) − 1 satisfies the equation
1 + f(z) + ln(1 + f(z)) = z.
Then f(z) + ln(1 + f(z)) = z − 1 can be expanded into a power series and inverted. This gives a series for f(z + 1) = W(e^{z+1}) − 1.
W(x) can be computed by substituting ln x − 1 for z in this series. For example, substituting −1 for z gives the value of W(1) ≈ 0.567143.
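A minimal Python sketch (not part of the original article; function names invented) comparing the Taylor series above with a direct Newton iteration for w e^w = z:

```python
# Sketch comparing W(z) = sum_{n>=1} (-n)**(n-1) * z**n / n! (valid for
# |z| < 1/e) against Newton's method applied to w*exp(w) - z = 0.
import math

def lambert_w_series(z: float, terms: int = 40) -> float:
    return sum((-n) ** (n - 1) * z ** n / math.factorial(n)
               for n in range(1, terms))

def lambert_w_newton(z: float, w: float = 0.0) -> float:
    for _ in range(50):
        f = w * math.exp(w) - z
        w -= f / (math.exp(w) * (1.0 + w))   # derivative is e^w (1 + w)
    return w

if __name__ == "__main__":
    z = 0.2                                  # inside |z| < 1/e ~ 0.3679
    print(lambert_w_series(z), lambert_w_newton(z))  # agree closely
```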
Binary trees
Consider the set B of unlabelled binary trees. An element of B is either a leaf of size zero, or a root node with two subtrees. Denote by B_n the number of binary trees on n nodes.
Removing the root splits a binary tree into two trees of smaller size. This yields the functional equation on the generating function B(z) = Σ_{n=0}^∞ B_n z^n:
B(z) = 1 + z B(z)².
Letting C(z) = B(z) − 1, one has C(z) = z (C(z) + 1)², thus φ(w) = (w + 1)². Applying the theorem with this φ yields
B_n = [z^n] C(z) = (1/n) [w^{n−1}] (w + 1)^{2n} = (1/n) binom(2n, n−1) = (1/(n+1)) binom(2n, n).
This shows that B_n is the nth Catalan number.
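A minimal Python sketch (not part of the original article; function names invented) checking the extracted coefficient against a direct count from the root-removal recurrence B_n = Σ_k B_k B_{n−1−k}:

```python
# Sketch verifying that the Lagrange-inversion answer C(2n, n)/(n+1)
# matches a direct count of binary trees via the Catalan recurrence.
import math

def catalan(n: int) -> int:
    return math.comb(2 * n, n) // (n + 1)

def count_trees(n_max: int):
    b = [1]                                   # one tree on zero nodes: a leaf
    for n in range(1, n_max + 1):
        b.append(sum(b[k] * b[n - 1 - k] for k in range(n)))
    return b

if __name__ == "__main__":
    print(count_trees(8))                     # [1, 1, 2, 5, 14, 42, ...]
    print([catalan(n) for n in range(9)])     # identical list
```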
Asymptotic approximation of integrals
In the Laplace–Erdelyi theorem that gives the asymptotic approximation for Laplace-type integrals, the function inversion is taken as a crucial step.
See also
Faà di Bruno's formula gives coefficients of the composition of two formal power series in terms of the coefficients of those two series. Equivalently, it is a formula for the nth derivative of a composite function.
Lagrange reversion theorem for another theorem sometimes called the inversion theorem
Formal power series#The Lagrange inversion formula
References
External links
Bürmann–Lagrange series at Springer EOM
Inverse functions
Theorems in real analysis
Theorems in complex analysis
Theorems in combinatorics | Lagrange inversion theorem | [
"Mathematics"
] | 967 | [
"Theorems in mathematical analysis",
"Theorems in combinatorics",
"Theorems in real analysis",
"Theorems in complex analysis",
"Combinatorics",
"Theorems in discrete mathematics"
] |
3,325,140 | https://en.wikipedia.org/wiki/Entropy%20in%20thermodynamics%20and%20information%20theory | The mathematical expressions for thermodynamic entropy in the statistical thermodynamics formulation established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s are similar to the information entropy by Claude Shannon and Ralph Hartley, developed in the 1940s.
Equivalence of form of the defining expressions
The defining expression for entropy in the theory of statistical mechanics established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s is of the form:
S = − k_B Σ_i p_i ln p_i,
where p_i is the probability of the microstate i taken from an equilibrium ensemble, and k_B is the Boltzmann constant.
The defining expression for entropy in the theory of information established by Claude E. Shannon in 1948 is of the form:
H = − Σ_i p_i log_b p_i,
where p_i is the probability of the message m_i taken from the message space M, and b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is shannon (or bit) for b = 2, nat for b = e, and hartley for b = 10.
Mathematically H may also be seen as an average information, taken over the message space, because when a certain message occurs with probability pi, the information quantity −log(pi) (called information content or self-information) will be obtained.
If all the microstates are equiprobable (a microcanonical ensemble), the statistical thermodynamic entropy reduces to the form, as given by Boltzmann,
S = k_B ln W,
where W is the number of microstates that corresponds to the macroscopic thermodynamic state. Therefore S depends on temperature.
If all the messages are equiprobable, the information entropy reduces to the Hartley entropy
H = log_b |M|,
where |M| is the cardinality of the message space M.
The logarithm in the thermodynamic definition is the natural logarithm. It can be shown that the Gibbs entropy formula, with the natural logarithm, reproduces all of the properties of the macroscopic classical thermodynamics of Rudolf Clausius. (See article: Entropy (statistical views)).
The logarithm can also be taken to the natural base in the case of information entropy. This is equivalent to choosing to measure information in nats instead of the usual bits (or more formally, shannons). In practice, information entropy is almost always calculated using base-2 logarithms, but this distinction amounts to nothing other than a change in units. One nat is about 1.44 shannons.
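As an illustration (not part of the original article; the function name is invented), a minimal Python sketch computing the same entropy in the three unit conventions mentioned above:

```python
# Sketch of H = -sum p_i log_b p_i in shannons (b=2), nats (b=e) and
# hartleys (b=10), plus the nat-to-shannon conversion factor 1/ln(2).
import math

def entropy(probs, base=2.0):
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

if __name__ == "__main__":
    p = [0.5, 0.25, 0.125, 0.125]
    print(entropy(p, 2))        # 1.75  shannons (bits)
    print(entropy(p, math.e))   # ~1.213 nats
    print(entropy(p, 10))       # ~0.527 hartleys
    print(1.0 / math.log(2))    # one nat ~ 1.4427 shannons
```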
For a simple compressible system that can only perform volume work, the first law of thermodynamics becomes
dU = T dS − p dV.
But one can equally well write this equation in terms of what physicists and chemists sometimes call the 'reduced' or dimensionless entropy, σ = S/k_B, so that
dU = k_B T dσ − p dV.
Just as S is conjugate to T, so σ is conjugate to k_B T (the energy that is characteristic of T on a molecular scale).
Thus the definitions of entropy in statistical mechanics (the Gibbs entropy formula S = −k_B Σ_i p_i ln p_i) and in classical thermodynamics (dS = δQ_rev/T, together with the fundamental thermodynamic relation) are equivalent for the microcanonical ensemble, and for statistical ensembles describing a thermodynamic system in equilibrium with a reservoir, such as the canonical ensemble, grand canonical ensemble, and isothermal–isobaric ensemble. This equivalence is commonly shown in textbooks. However, the equivalence between the thermodynamic definition of entropy and the Gibbs entropy is not general, but is instead an exclusive property of the generalized Boltzmann distribution.
Furthermore, it has been shown that the Gibbs entropy of statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under certain additional postulates.
Theoretical relationship
Despite the foregoing, there is a difference between the two quantities. The information entropy Η can be calculated for any probability distribution (if the "message" is taken to be that the event i which had probability pi occurred, out of the space of the events possible), while the thermodynamic entropy S refers to thermodynamic probabilities pi specifically. The difference is more theoretical than actual, however, because any probability distribution can be approximated arbitrarily closely by some thermodynamic system.
Moreover, a direct connection can be made between the two. If the probabilities in question are the thermodynamic probabilities pi: the (reduced) Gibbs entropy σ can then be seen as simply the amount of Shannon information needed to define the detailed microscopic state of the system, given its macroscopic description. Or, in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more". To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the average of the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.
Furthermore, the prescription to find the equilibrium distributions of statistical mechanics—such as the Boltzmann distribution—by maximising the Gibbs entropy subject to appropriate constraints (the Gibbs algorithm) can be seen as something not unique to thermodynamics, but as a principle of general relevance in statistical inference, if it is desired to find a maximally uninformative probability distribution, subject to certain constraints on its averages. (These perspectives are explored further in the article Maximum entropy thermodynamics.)
The Shannon entropy in information theory is sometimes expressed in units of bits per symbol. The physical entropy may be on a "per quantity" basis (h) which is called "intensive" entropy instead of the usual total entropy which is called "extensive" entropy. The "shannons" of a message (Η) are its total "extensive" information entropy and is h times the number of bits in the message.
A direct and physically real relationship between h and S can be found by assigning a symbol to each microstate that occurs per mole, kilogram, volume, or particle of a homogeneous substance, then calculating the 'h' of these symbols. By theory or by observation, the symbols (microstates) will occur with different probabilities and this will determine h. If there are N moles, kilograms, volumes, or particles of the unit substance, the relationship between h (in bits per unit substance) and physical extensive entropy in nats is:
S = k_B ln(2) N h,
where ln(2) is the conversion factor from base 2 of Shannon entropy to the natural base e of physical entropy. N h is the amount of information in bits needed to describe the state of a physical system with entropy S. Landauer's principle demonstrates the reality of this by stating that the minimum energy E required (and therefore heat Q generated) by an ideally efficient memory change or logic operation, irreversibly erasing or merging N h bits of information, will be S times the temperature,
E = Q = T k_B ln(2) N h,
where h is in informational bits and E and Q are in physical joules. This has been experimentally confirmed.
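A minimal Python sketch (not part of the original article; names invented) of the Landauer bound just stated, k_B T ln(2) joules per bit:

```python
# Sketch of the Landauer limit: erasing one bit at temperature T dissipates
# at least k_B * T * ln(2) joules.
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K (exact since the 2019 SI)

def landauer_limit_joules(temperature_kelvin: float, bits: float = 1.0) -> float:
    return K_B * temperature_kelvin * math.log(2) * bits

if __name__ == "__main__":
    print(landauer_limit_joules(300.0))            # ~2.87e-21 J per bit at 300 K
    print(landauer_limit_joules(300.0, bits=8e9))  # one gigabyte: ~2.3e-11 J
```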
Temperature is a measure of the average kinetic energy per particle in an ideal gas (kelvins = joules/k_B), so the J/K units of k_B are dimensionless (joule/joule); k_B is the conversion factor from energy in kelvins to joules for an ideal gas. If kinetic energy measurements per particle of an ideal gas were expressed as joules instead of kelvins, k_B in the above equations would be replaced by 3/2. This shows that S is a true statistical measure of microstates that does not have a fundamental physical unit other than the units of information, in this case nats, which is just a statement of which logarithm base was chosen by convention.
Information is physical
Szilard's engine
A physical thought experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell's demon scenario (and a reversal of the Joule expansion thought experiment).
Consider Maxwell's set-up, but with only a single gas particle in a box. If the demon knows which half of the box the particle is in (equivalent to a single bit of information), it can close a shutter between the two halves of the box, push a piston unopposed into the empty half of the box, and then extract k_B T ln 2 joules of useful work if the shutter is opened again. The particle can then be left to isothermally expand back to its original equilibrium occupied volume. In just the right circumstances therefore, the possession of a single bit of Shannon information (a single bit of negentropy in Brillouin's term) really does correspond to a reduction in the entropy of the physical system. The global entropy is not decreased, but information-to-free-energy conversion is possible.
This thought experiment has been physically demonstrated, using a phase-contrast microscope equipped with a high speed camera connected to a computer, acting as the demon. In this experiment, information to energy conversion is performed on a Brownian particle by means of feedback control; that is, synchronizing the work given to the particle with the information obtained on its position. Computing energy balances for different feedback protocols, has confirmed that the Jarzynski equality requires a generalization that accounts for the amount of information involved in the feedback.
Landauer's principle
In fact one can generalise: any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system.
Thus, Rolf Landauer argued in 1961, if one were to imagine starting with those degrees of freedom in a thermalised state, there would be a real reduction in thermodynamic entropy if they were then re-set to a known state. This can only be achieved under information-preserving microscopically deterministic dynamics if the uncertainty is somehow dumped somewhere else – i.e. if the entropy of the environment (or the non information-bearing degrees of freedom) is increased by at least an equivalent amount, as required by the Second Law, by gaining an appropriate quantity of heat: specifically kT ln(2) of heat for every 1 bit of randomness erased.
On the other hand, Landauer argued, there is no thermodynamic objection to a logically reversible operation potentially being achieved in a physically reversible way in the system. It is only logically irreversible operations – for example, the erasing of a bit to a known state, or the merging of two computation paths – which must be accompanied by a corresponding entropy increase. Because information is physical, all processing of its representations, i.e. generation, encoding, transmission, decoding and interpretation, is a natural process in which entropy increases through the consumption of free energy.
Applied to the Maxwell's demon/Szilard engine scenario, this suggests that it might be possible to "read" the state of the particle into a computing apparatus with no entropy cost; but only if the apparatus has already been SET into a known state, rather than being in a thermalised state of uncertainty. To SET (or RESET) the apparatus into this state will cost all the entropy that can be saved by knowing the state of Szilard's particle.
In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.
Negentropy
Shannon entropy has been related by physicist Léon Brillouin to a concept sometimes called negentropy. In 1953, Brillouin derived a general equation stating that the changing of an information bit value requires at least kT ln(2) energy. This is the same energy as the work Leó Szilárd's engine produces in the idealistic case, which in turn equals the same quantity found by Landauer. In his book, he further explored this problem, concluding that any cause of a bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount, kT ln(2), of energy. Consequently, acquiring information about a system's microstates is associated with an entropy production, while erasure yields entropy production only when the bit value is changing. Setting up a bit of information in a sub-system originally in thermal equilibrium results in a local entropy reduction. However, there is no violation of the second law of thermodynamics, according to Brillouin, since a reduction in any local system's thermodynamic entropy results in an increase in thermodynamic entropy elsewhere. In this way, Brillouin clarified the meaning of negentropy, which was considered controversial because its earlier understanding could yield Carnot efficiency higher than one. Additionally, the relationship between energy and information formulated by Brillouin has been proposed as a connection between the number of bits that the brain processes and the energy it consumes: Collell and Fauquet argued that De Castro analytically found the Landauer limit as the thermodynamic lower bound for brain computations. However, even though evolution is supposed to have "selected" the most energetically efficient processes, the physical lower bounds are not realistic quantities in the brain. Firstly, because the minimum processing unit considered in physics is the atom/molecule, which is distant from the actual way that the brain operates; and secondly, because neural networks incorporate important redundancy and noise factors that greatly reduce their efficiency. Laughlin et al. were the first to provide explicit quantities for the energetic cost of processing sensory information. Their findings in blowflies revealed that for visual sensory data, the cost of transmitting one bit of information is around 5 × 10⁻¹⁴ joules, or equivalently 10⁴ ATP molecules. Thus, neural processing efficiency is still far from Landauer's limit of kT ln(2) J, but as a curious fact, it is still much more efficient than modern computers.
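For scale, the figures quoted above can be compared directly; a rough back-of-the-envelope check in Python (the 5 × 10⁻¹⁴ J/bit value is the Laughlin et al. estimate quoted above, and a physiological temperature of about 310 K is assumed):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # assumed physiological temperature, K

landauer_limit = K_B * T * math.log(2)   # minimum J per bit erased: ~3.0e-21 J
blowfly_cost = 5e-14                     # measured J per bit (blowfly visual system)

print(f"Landauer limit : {landauer_limit:.2e} J/bit")
print(f"Blowfly cost   : {blowfly_cost:.2e} J/bit")
print(f"Ratio          : {blowfly_cost / landauer_limit:.1e}")  # ~10^7 above the bound
```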
In 2009, Mahulikar & Herwig redefined thermodynamic negentropy as the specific entropy deficit of the dynamically ordered sub-system relative to its surroundings. This definition enabled the formulation of the Negentropy Principle, which is mathematically shown to follow from the 2nd Law of Thermodynamics, during order existence.
Quantum theory
Hirschman showed (cf. Hirschman uncertainty) that Heisenberg's uncertainty principle can be expressed as a particular lower bound on the sum of the classical distribution entropies of the probability distributions of a quantum mechanical state (the squared magnitude of the wave function) in coordinate and in momentum space, when expressed in Planck units. The resulting inequalities provide a tighter bound on the uncertainty relations of Heisenberg.
Assigning a "joint entropy" to these distributions requires care, because positions and momenta are quantum conjugate variables and are therefore not jointly observable; mathematically, they cannot be treated as a single joint distribution.
Note that this joint entropy is not equivalent to the Von Neumann entropy, −Tr(ρ ln ρ) = −⟨ln ρ⟩.
Hirschman's entropy is said to account for the full information content of a mixture of quantum states.
(Dissatisfaction with the Von Neumann entropy from a quantum information point of view has been expressed by Stotland, Pomeransky, Bachmat and Cohen, who have introduced a yet different definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows distinction between the minimum uncertainty entropy of pure states, and the excess statistical entropy of mixtures.)
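A small sketch of the Von Neumann entropy mentioned above, computed from the eigenvalues of a density matrix in Python/NumPy; the example states are arbitrary, chosen only for illustration:

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -Tr(rho ln rho), computed via the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]      # 0 * ln(0) -> 0 by convention
    return float(-np.sum(eigvals * np.log(eigvals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # pure state: S = 0
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit: S = ln 2
print(von_neumann_entropy(pure))    # ~0.0
print(von_neumann_entropy(mixed))   # ~0.693 nats
```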
See also
References
External links
Information Processing and Thermodynamic Entropy Stanford Encyclopedia of Philosophy.
An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science — a wikibook on the interpretation of the concept of entropy.
Thermodynamic entropy
Entropy and information
Philosophy of thermal and statistical physics | Entropy in thermodynamics and information theory | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,333 | [
"Philosophy of thermal and statistical physics",
"Physical quantities",
"Thermodynamic entropy",
"Entropy and information",
"Entropy",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
3,328,072 | https://en.wikipedia.org/wiki/Silicon%20nitride | Silicon nitride is a chemical compound of the elements silicon and nitrogen. (Trisilicon tetranitride) is the most thermodynamically stable and commercially important of the silicon nitrides, and the term ″Silicon nitride″ commonly refers to this specific composition. It is a white, high-melting-point solid that is relatively chemically inert, being attacked by dilute HF and hot . It is very hard (8.5 on the mohs scale). It has a high thermal stability with strong optical nonlinearities for all-optical applications.
Production
Silicon nitride is prepared by heating powdered silicon between 1300 °C and 1400 °C in a nitrogen atmosphere:
3 Si + 2 N2 → Si3N4
The silicon sample weight increases progressively due to the chemical combination of silicon and nitrogen. Without an iron catalyst, the reaction is complete after several hours (~7), when no further weight increase due to nitrogen absorption (per gram of silicon) is detected.
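Since the nitridation is followed by weighing, the expected mass gain can be checked from the stoichiometry of the reaction above; a small illustrative Python calculation (molar masses are standard values):

```python
M_SI = 28.085   # g/mol, silicon
M_N = 14.007    # g/mol, nitrogen

# 3 Si + 2 N2 -> Si3N4: every 3 mol of Si gains 4 mol of N in mass.
mass_gain_fraction = (4 * M_N) / (3 * M_SI)
print(f"Theoretical weight gain on full nitridation: {mass_gain_fraction:.1%}")
# ~66.5% -- a fully nitrided silicon compact gains roughly two-thirds of its weight
```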
In addition to Si3N4, several other silicon nitride phases (with chemical formulas corresponding to varying degrees of nitridation/Si oxidation state) have been reported in the literature. These include the gaseous disilicon mononitride (Si2N), silicon mononitride (SiN) and silicon sesquinitride (Si2N3), each of which is a stoichiometric phase. As with other refractories, the products obtained in these high-temperature syntheses depend on the reaction conditions (e.g. time, temperature, and starting materials, including the reactants and container materials), as well as the mode of purification. However, the existence of the sesquinitride has since come into question.
It can also be prepared by the diimide route:
SiCl4 + 6 NH3 → Si(NH)2 + 4 NH4Cl(s) at 0 °C
3 Si(NH)2 → Si3N4 + N2 + 3 H2(g) at 1000 °C
Carbothermal reduction of silicon dioxide in a nitrogen atmosphere at 1400–1450 °C has also been examined:
3 SiO2 + 6 C + 2 N2 → Si3N4 + 6 CO
The nitridation of silicon powder was developed in the 1950s, following the "rediscovery" of silicon nitride, and was the first large-scale method for powder production. However, use of low-purity raw silicon caused contamination of silicon nitride by silicates and iron. The diimide decomposition results in amorphous silicon nitride, which needs further annealing under nitrogen at 1400–1500 °C to convert it to a crystalline powder; this is now the second-most-important route for commercial production. The carbothermal reduction was the earliest used method for silicon nitride production and is now considered the most cost-effective industrial route to high-purity silicon nitride powder.
Film deposition
Electronic-grade silicon nitride films are formed using chemical vapor deposition (CVD), or one of its variants, such as plasma-enhanced chemical vapor deposition (PECVD):
3 SiH4(g) + 4 NH3(g) → Si3N4(s) + 12 H2(g) at 750–850 °C
3 SiCl4(g) + 4 NH3(g) → Si3N4(s) + 12 HCl(g)
3 SiCl2H2(g) + 4 NH3(g) → Si3N4(s) + 6 HCl(g) + 6 H2(g)
For deposition of silicon nitride layers on semiconductor (usually silicon) substrates, two methods are used:
Low pressure chemical vapor deposition (LPCVD) technology, which works at rather high temperature and is done either in a vertical or in a horizontal tube furnace, or
Plasma-enhanced chemical vapor deposition (PECVD) technology, which works at rather low temperature (≤ 250 °C) and under vacuum conditions. Examples include bis(diethylamino)silane as the silicon precursor and N2 plasma as the reactant.
Since the lattice constants of silicon nitride and silicon are different, tensile or compressive stress can occur, depending on the deposition process. Especially when using PECVD technology, this stress can be reduced by adjusting the deposition parameters.
Silicon nitride nanowires can also be produced by a sol-gel method using carbothermal reduction followed by nitridation of silica gel that contains ultrafine carbon particles. The particles can be produced by decomposition of dextrose in the temperature range 1200–1350 °C. The possible synthesis reactions are:
SiO2(s) + C(s) → SiO(g) + CO(g) and
3 SiO(g) + 2 N2(g) + 3 CO(g) → Si3N4(s) + 3 CO2(g) or
3 SiO(g) + 2 N2(g) + 3 C(s) → Si3N4(s) + 3 CO(g).
Processing
Silicon nitride is difficult to produce as a bulk material—it cannot be heated over 1850 °C, which is well below its melting point, due to dissociation to silicon and nitrogen. Therefore, application of conventional hot press sintering techniques is problematic. Bonding of silicon nitride powders can be achieved at lower temperatures through adding materials called sintering aids or "binders", which commonly induce a degree of liquid phase sintering. A cleaner alternative is to use spark plasma sintering, where heating is conducted very rapidly (seconds) by passing pulses of electric current through the compacted powder. Dense silicon nitride compacts have been obtained by this technique at temperatures of 1500–1700 °C.
Crystal structure and properties
There exist three crystallographic structures of silicon nitride (Si3N4), designated as the α, β and γ phases. The α and β phases are the most common forms of Si3N4, and can be produced under normal pressure conditions. The γ phase can only be synthesized under high pressures and temperatures and has a hardness of 35 GPa.
The α- and β-Si3N4 have trigonal (Pearson symbol hP28, space group P31c, No. 159) and hexagonal (hP14, P63, No. 173) structures, respectively, which are built up by corner-sharing SiN4 tetrahedra. They can be regarded as consisting of layers of silicon and nitrogen atoms in the sequence ABAB... or ABCDABCD... in β-Si3N4 and α-Si3N4, respectively. The AB layer is the same in the α and β phases, and the CD layer in the α phase is related to AB by a c-glide plane. The tetrahedra in β-Si3N4 are interconnected in such a way that tunnels are formed, running parallel with the c axis of the unit cell. Due to the c-glide plane that relates AB to CD, the α structure contains cavities instead of tunnels. The cubic γ-Si3N4 is often designated as the c modification in the literature, in analogy with the cubic modification of boron nitride (c-BN). It has a spinel-type structure in which two silicon atoms each coordinate six nitrogen atoms octahedrally, and one silicon atom coordinates four nitrogen atoms tetrahedrally.
The longer stacking sequence results in the α-phase having higher hardness than the β-phase. However, the α-phase is chemically unstable compared with the β-phase. At high temperatures, when a liquid phase is present, the α-phase always transforms into the β-phase. Therefore, β-Si3N4 is the major form used in ceramics. Abnormal grain growth may occur in doped β-Si3N4, whereby abnormally large elongated grains form in a matrix of finer equiaxed grains; this can serve as a technique to enhance the fracture toughness of this material by crack bridging. Abnormal grain growth in doped silicon nitride arises due to additive-enhanced diffusion and results in composite microstructures, which can also be considered as "in-situ composites" or "self-reinforced materials".
In addition to the crystalline polymorphs of silicon nitride, glassy amorphous materials may be formed as the pyrolysis products of preceramic polymers, most often containing varying amounts of residual carbon (hence they are more appropriately considered as silicon carbonitrides). Specifically, polycarbosilazane can be readily converted to an amorphous form of silicon carbonitride based material upon pyrolysis, with valuable implications in the processing of silicon nitride materials through processing techniques more commonly used for polymers.
Applications
In general, the main issue with applications of silicon nitride has not been technical performance, but cost. As the cost has come down, the number of production applications is accelerating.
Automotive industry
One of the major applications of sintered silicon nitride is in engine parts. In diesel engines it is used in glowplugs, for faster start-up; in precombustion chambers (swirl chambers), to reduce emissions, start-up time and noise; and in turbochargers, to reduce engine lag and emissions. In spark-ignition engines, silicon nitride is used for rocker arm pads for lower wear, turbocharger turbines for lower inertia and less engine lag, and in exhaust gas control valves for increased acceleration. Currently, it is estimated that more than 300,000 sintered silicon nitride turbochargers are made annually.
Silicon nitride is used in some high-performance automotive ceramic coatings for protecting paint.
Bearings
Silicon nitride bearings come as both full ceramic bearings and ceramic hybrid bearings, the latter with ceramic balls and steel races. Silicon nitride ceramics have good shock resistance compared to other ceramics, so ball bearings made of silicon nitride ceramic are used in performance bearings. A representative example is the use of silicon nitride bearings in the main engines of NASA's Space Shuttle.
Since silicon nitride ball bearings are harder than metal, contact with the bearing track is reduced. This results in 80% less friction, three to ten times longer lifetime, 80% higher speed, 60% less weight, the ability to operate with lubrication starvation, higher corrosion resistance and higher operating temperature, as compared to traditional metal bearings. Silicon nitride balls weigh 79% less than tungsten carbide balls. Silicon nitride ball bearings can be found in high-end automotive bearings, industrial bearings, wind turbines, motorsports, bicycles, rollerblades and skateboards. Silicon nitride bearings are especially useful in applications where corrosion or electric or magnetic fields prohibit the use of metals, for example, in tidal flow meters, where seawater attack is a problem, or in electric field seekers.
Si3N4 was first demonstrated as a superior bearing in 1972 but did not reach production until nearly 1990 because of challenges associated with reducing the cost.
Since 1990, the cost has been reduced substantially as production volume has increased. Although bearings are still two to five times more expensive than the best steel bearings, their superior performance and life are justifying rapid adoption. Around 15–20 million bearing balls were produced in the U.S. in 1996 for machine tools and many other applications. Growth is estimated at 40% per year, but could be even higher if ceramic bearings are selected for consumer applications such as in-line skates and computer disk drives.
NASA testing, however, found that ceramic-hybrid bearings exhibit much lower fatigue (wear) life than standard all-steel bearings.
High-temperature material
Silicon nitride has long been used in high-temperature applications. In particular, it was identified as one of the few monolithic ceramic materials capable of surviving the severe thermal shock and thermal gradients generated in hydrogen/oxygen rocket engines. To demonstrate this capability in a complex configuration, NASA scientists used advanced rapid prototyping technology to fabricate a one-inch-diameter, single-piece combustion chamber/nozzle (thruster) component. The thruster was hot-fire tested with hydrogen/oxygen propellant and survived five cycles including a 5-minute cycle to a 1320 °C material temperature.
In 2010 silicon nitride was used as the main material in the thrusters of the JAXA space probe Akatsuki.
Silicon nitride was used for the "microshutters" developed for the Near Infrared Spectrograph aboard the James Webb Space Telescope. According to NASA: "The operating temperature is cryogenic so the device has to be able to operate at extremely cold temperatures. Another challenge was developing shutters that would be able to: open and close repeatedly without fatigue; open individually; and open wide enough to meet the science requirements of the instrument. Silicon nitride was chosen for use in the microshutters, because of its high strength and resistance to fatigue." This microshutter system allows the instrument to observe and analyze up to 100 celestial objects simultaneously.
Medical
Silicon nitride has many orthopedic applications. The material is also an alternative to PEEK (polyether ether ketone) and titanium, which are used for spinal fusion devices (with the latter being relatively expensive). Silicon nitride's hydrophilic, microtextured surface contributes to the material's strength, durability and reliability compared to PEEK and titanium. Certain compositions of this material exhibit anti-bacterial, anti-fungal, or anti-viral properties.
Metal working and cutting
The first major application of Si3N4 was in abrasive and cutting tools. Bulk, monolithic silicon nitride is used as a material for cutting tools, due to its hardness, thermal stability, and resistance to wear. It is especially recommended for high-speed machining of cast iron. Hot hardness, fracture toughness and thermal shock resistance mean that sintered silicon nitride can cut cast iron, hard steel and nickel-based alloys with surface speeds up to 25 times quicker than those obtained with conventional materials such as tungsten carbide. The use of Si3N4 cutting tools has had a dramatic effect on manufacturing output. For example, face milling of gray cast iron with silicon nitride inserts doubled the cutting speed, increased tool life from one part to six parts per edge, and reduced the average cost of inserts by 50%, as compared to traditional tungsten carbide tools.
Electronics
Silicon nitride is often used as an insulator and chemical barrier in manufacturing integrated circuits, to electrically isolate different structures or as an etch mask in bulk micromachining. As a passivation layer for microchips, it is superior to silicon dioxide, as it is a significantly better diffusion barrier against water molecules and sodium ions, two major sources of corrosion and instability in microelectronics. It is also used as a dielectric between polysilicon layers in capacitors in analog chips.
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10¹⁶ Ω·cm and 10 MV/cm, respectively).
Not only silicon nitride, but also various ternary compounds of silicon, nitrogen and hydrogen (SiNxHy) are used as insulating layers. They are plasma deposited using the following reactions:
2 SiH4(g) + N2(g) → 2 SiNH(s) + 3 H2(g)
SiH4(g) + NH3(g) → SiNH(s) + 3 H2(g)
These SiNH films have much less tensile stress, but worse electrical properties (resistivity 10⁶ to 10¹⁵ Ω·cm, and dielectric strength 1 to 5 MV/cm), and are thermally stable to high temperatures under specific physical conditions. Silicon nitride is also used in the xerographic process as one of the layers of the photo drum. Silicon nitride is also used as an ignition source for domestic gas appliances. Because of its good elastic properties, silicon nitride, along with silicon and silicon oxide, is the most popular material for cantilevers — the sensing elements of atomic force microscopes.
Aspirational applications
Solar cells
Solar cells are often coated with an anti-reflective coating. Silicon nitride can be used for this, and it is possible to adjust its index of refraction by varying the parameters of the deposition process.
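As a rough illustration of how such an anti-reflective coating is sized, the classic quarter-wave condition can be evaluated in Python; the refractive index of ~2.0 for a silicon nitride film and the 600 nm design wavelength are assumed, illustrative values, and real cell coatings are optimized numerically:

```python
import math

def quarter_wave_thickness(wavelength_nm: float, n_film: float) -> float:
    """Quarter-wave anti-reflection condition: t = lambda / (4 * n)."""
    return wavelength_nm / (4.0 * n_film)

# Assumed illustrative values for a SiNx film on a solar cell.
# Reflection between air (n ~ 1) and silicon (n ~ 3.9) is minimized when
# n_film ~ sqrt(1 * 3.9) ~ 2, one reason silicon nitride suits this role.
print(quarter_wave_thickness(600.0, 2.0))  # ~75 nm film thickness
print(math.sqrt(1.0 * 3.9))                # ~1.97, the ideal film index
```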
Photonic integrated circuits
Photonic integrated circuits can be produced in various materials, also called material platforms. Silicon nitride is one of those material platforms, next to, for example, silicon photonics and indium phosphide. Silicon nitride photonic integrated circuits have broad spectral coverage and feature low optical losses. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International’s TriPleX waveguides.
High stress membranes
Silicon nitride has emerged as a favorable platform for high-stress thin film membrane devices. These devices have been used as sensing devices in a wide variety of scientific experiments, including spectroscopy applications and dark matter searches.
History
The first synthesis of silicon nitride was reported in 1857 by Henri Etienne Sainte-Claire Deville and Friedrich Wöhler. In their method, silicon was heated in a crucible placed inside another crucible packed with carbon to reduce permeation of oxygen to the inner crucible. They reported a product they termed silicon nitride but without specifying its chemical composition. Paul Schuetzenberger first reported a product with the composition of the tetranitride, Si3N4, in 1879 that was obtained by heating silicon with brasque (a paste made by mixing charcoal, coal, or coke with clay which is then used to line crucibles) in a blast furnace. In 1910, Ludwig Weiss and Theodor Engelhardt heated silicon under pure nitrogen to produce Si3N4. E. Friederich and L. Sittig made Si3N4 in 1925 via carbothermal reduction under nitrogen, that is, by heating silica, carbon, and nitrogen at 1250–1300 °C.
Silicon nitride remained merely a chemical curiosity for decades before it was used in commercial applications. From 1948 to 1952, the Carborundum Company, Niagara Falls, New York, applied for several patents on the manufacture and application of silicon nitride. By 1958 Haynes (Union Carbide) silicon nitride was in commercial production for thermocouple tubes, rocket nozzles, and boats and crucibles for melting metals. British work on silicon nitride, started in 1953, was aimed at high-temperature parts of gas turbines and resulted in the development of reaction-bonded silicon nitride and hot-pressed silicon nitride. In 1971, the Advanced Research Project Agency of the US Department of Defense placed a US$17 million contract with Ford and Westinghouse for two ceramic gas turbines.
Even though the properties of silicon nitride were well known, its natural occurrence was discovered only in the 1990s, as tiny inclusions (about 2 μm × 0.5 μm in size) in meteorites. The mineral was named nierite after a pioneer of mass spectrometry, Alfred O. C. Nier. This mineral may have been detected earlier, again exclusively in meteorites, by Soviet geologists.
References
Cited sources
Ceramic materials
Inorganic silicon compounds
Nitrides
Superhard materials
Semiconductor fabrication materials | Silicon nitride | [
"Physics",
"Chemistry",
"Engineering"
] | 3,954 | [
"Ceramic engineering",
"Inorganic compounds",
"Materials",
"Superhard materials",
"Ceramic materials",
"Inorganic silicon compounds",
"Matter"
] |
3,330,825 | https://en.wikipedia.org/wiki/Debye%E2%80%93Waller%20factor | The Debye–Waller factor (DWF), named after Peter Debye and Ivar Waller, is used in condensed matter physics to describe the attenuation of x-ray scattering or coherent neutron scattering caused by thermal motion. It is also called the B factor, atomic B factor, or temperature factor. Often, "Debye–Waller factor" is used as a generic term that comprises the Lamb–Mössbauer factor of incoherent neutron scattering and Mössbauer spectroscopy.
The DWF depends on the scattering vector q. For a given q, DWF(q) gives the fraction of elastic scattering; 1 – DWF(q) correspondingly gives the fraction of inelastic scattering (strictly speaking, this probability interpretation is not true in general). In diffraction studies, only the elastic scattering is useful; in crystals, it gives rise to distinct Bragg reflection peaks. Inelastic scattering events are undesirable as they cause a diffuse background — unless the energies of scattered particles are analysed, in which case they carry valuable information (for instance in inelastic neutron scattering or electron energy loss spectroscopy).
The basic expression for the DWF is given by

DWF = ⟨exp(iq·u)⟩²

where u is the displacement of a scattering center, and ⟨...⟩ denotes either thermal or time averaging.
Assuming harmonicity of the scattering centers in the material under study, the Boltzmann distribution implies that q·u is normally distributed with zero mean. Then, using for example the expression of the corresponding characteristic function, the DWF takes the form

DWF = exp(−⟨(q·u)²⟩)
Note that although the above reasoning is classical, the same holds in quantum mechanics.
Assuming also isotropy of the harmonic potential, one may write

DWF = exp(−q²⟨u²⟩/3)

where q, u are the magnitudes (or absolute values) of the vectors q, u respectively, and ⟨u²⟩ is the mean squared displacement. In crystallographic publications, values of U are often given, where U = ⟨u²⟩. Note that if the incident wave has wavelength λ and is elastically scattered by an angle of 2θ, then

q = 4π sin(θ)/λ
In the context of protein structures, the term B-factor is used. The B-factor is defined as

B = 8π²⟨u²⟩

It is measured in units of Å².
The B-factors can be taken as indicating the relative vibrational motion of different parts of the structure. Atoms with low B-factors belong to a part of the structure that is well ordered. Atoms with large B-factors generally belong to part of the structure that is very flexible. Each ATOM record (PDB file format) of a crystal structure deposited with the Protein Data Bank contains a B-factor for that atom.
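A short Python sketch of the conversions implied above, turning a crystallographic B-factor into an RMS displacement and an isotropic Debye–Waller attenuation; the B value of 20 Å², the Cu Kα wavelength, and the scattering angle are assumed, illustrative inputs:

```python
import math

def rms_displacement(b_factor: float) -> float:
    """RMS displacement sqrt(<u^2>) in angstroms, from B = 8*pi^2*<u^2>."""
    return math.sqrt(b_factor / (8.0 * math.pi ** 2))

def dwf_isotropic(b_factor: float, wavelength: float, two_theta_deg: float) -> float:
    """Isotropic DWF = exp(-q^2 <u^2> / 3), with q = 4*pi*sin(theta)/lambda (angstroms)."""
    theta = math.radians(two_theta_deg / 2.0)
    q = 4.0 * math.pi * math.sin(theta) / wavelength
    u_sq = b_factor / (8.0 * math.pi ** 2)
    return math.exp(-q * q * u_sq / 3.0)

print(rms_displacement(20.0))             # ~0.50 angstrom for B = 20 A^2
print(dwf_isotropic(20.0, 1.54, 30.0))    # attenuation at 2theta = 30 deg, Cu K-alpha
```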
Derivation
Introduction
Scattering experiments are a common method for learning about crystals. Such experiments typically involve a probe (e.g. X-rays or neutrons) and a crystalline solid. A well-characterized probe propagating towards the crystal may interact and scatter away in a particular manner. Mathematical expressions relating the scattering pattern, properties of the probe, properties of the experimental apparatus, and properties of the crystal then allow one to derive desired features of the crystalline sample.
The following derivation is based on chapter 14 of Simon's The Oxford Solid State Basics and on the report Atomic Displacement Parameter Nomenclature by Trueblood et al. (available under #External links). It is recommended to consult these sources for a more explicit discussion. Background on the quantum mechanics involved may be found in Sakurai and Napolitano's Modern Quantum Mechanics.
Scattering experiments often consist of a particle with initial crystal momentum k incident on a solid. The particle passes through a potential distributed in space, V(r), and exits with crystal momentum k′. This situation is described by Fermi's golden rule, which gives the probability of transition per unit time, Γ(k′, k), to the energy eigenstate E_k′ from the energy eigenstate E_k due to the weak perturbation caused by our potential V(r).

Γ(k′, k) = (2π/ħ) |⟨k′|V|k⟩|² δ(E_k′ − E_k). (1)
By inserting a complete set of position states, then utilizing the plane-wave expression relating position and momentum, we find that the matrix element is simply a Fourier transform of the potential.

⟨k′|V|k⟩ = (1/L³) ∫ dr V(r) e^{−i(k′−k)·r}. (2)
Above, the length of the sample is denoted by L. We now assume that our solid is a periodic crystal with each unit cell labeled by a lattice position vector R. Position within a unit cell is given by a vector x, such that the overall position in the crystal may be expressed as r = R + x. Because of the translational invariance of our unit cells, the potential distribution of every cell is identical and V(R + x) = V(x).
. (3)
Laue equation
According to the Poisson summation formula:
. (4)
G is a reciprocal lattice vector of the periodic potential and v is the volume of its unit cell. By comparison of (3) and (4), we find that the Laue equation must be satisfied for scattering to occur:

k′ − k = G. (5)
(5) is a statement of the conservation of crystal momentum. Particles scattered in a crystal experience a change in wave vector equal to a reciprocal lattice vector of the crystal. When they do, the contribution to the matrix element is simply a finite constant. Thus, we find an important link between scattered particles and the scattering crystal. The Laue condition, which states that crystal momentum must be conserved, is equivalent to the Bragg condition mλ = 2d sin θ, which demands constructive interference for scattered particles. Now that we see how the first factor of (3) determines whether or not incident particles are scattered, we consider how the second factor influences scattering.
Structure factor
The second term on the right hand side of (3) is the structure factor.
. (6)
For a given reciprocal lattice vector G_(hkl) (corresponding to a family of lattice planes labeled by Miller indices (hkl)), the intensity of scattered particles is proportional to the square of the structure factor:

I_(hkl) ∝ |S_(hkl)|². (7)
Buried in (6) are detailed aspects of the crystal structure that are worth distinguishing and discussing.
Debye–Waller factor
Consideration of the structure factor (and our assumption about translational invariance) is complicated by the fact that atoms in the crystal may be displaced from their respective lattice sites. Taking the scattering potential to be proportional to the density of scattering matter, we rewrite the structure factor.
. (8)
The integral from here onwards is understood to be taken over the unit cell. is the density of scattering matter. The angle brackets indicate a temporal average of each unit cell followed by a spatial average over every unit cell. We further assume that each atom is displaced independently of the other atoms.
. (9)
The number of atoms in the unit cell is and the occupancy factor for atom is . represents the point in the unit cell for which we would like to know the density of scattering matter. is the density of scattering matter from atom at a position separated from the nuclear position by a vector . is the probability density function for displacement. is the reference lattice site from which atom may be displaced to a new position . If is symmetrical enough (e.g. spherically symmetrical), is simply the mean nuclear position. When considering X-ray scattering, the scattering matter density consists of electron density around the nucleus. For neutron scattering, we have -functions weighted by a scattering length for the respective nucleus (see Fermi pseudopotential). Note that in the above discussion, we assumed the atoms were not deformable. With this in mind, (9) may be plugged into expression (8) for the structure factor.
; . (10)
Now we see the overall structure factor may be represented as a weighted sum of structure factors corresponding to each atom. Set the displacement between the location in space for which we would like to know the scattering density and the reference position for the nucleus equal to a new variable . Do the same for the displacement between the displaced and reference nuclear positions . Substitute into (10).
. (11)
Within the square brackets of (11), we convolve the density of scattering matter of atom with the probability density function for some nuclear displacement. Then, in the curly brackets, we Fourier transform the resulting convolution. The final step is to multiply by a phase depending on the reference (e.g. mean) position of atom . But, according to the convolution theorem, Fourier transforming a convolution is the same as multiplying the two Fourier transformed functions. Set the displacement between the location in space for which we would like to know the scattering density and the position for the nucleus equal to a new variable .
. (12)
Substitute (12) into (10).
. (13)
That is:
; , . (14)
is the atomic form factor of the atom ; it determines how the distribution of scattering matter about the nuclear position influences scattering. is the atomic Debye–Waller factor; it determines how the propensity for nuclear displacement from the reference lattice position influences scattering. The expression given for the DWF in the article's opening is different because of 1) the decision to take the thermal or time average, 2) the arbitrary choice of negative sign in the exponential, and 3) the decision to square the factor (which more directly connects it to the observed intensity).
Anisotropic displacement parameter, U
A common simplification to (14) is the harmonic approximation, in which the probability density function is modeled as a Gaussian. Under this approximation, static displacive disorder is ignored and it is assumed that atomic displacements are determined entirely by thermal motion (alternative models in which the Gaussian approximation is invalid have been considered elsewhere).
; ; . (15)
We've dropped the atomic index. belongs to the direct lattice while would belong to the reciprocal lattice. By choosing the convenient dimensionless basis , we guarantee that will have units of length and describe the displacement. The tensor U in (15) is the anisotropic displacement parameter. With dimensions of (length)², it is associated with mean square displacements. For the mean square displacement along unit vector , simply take . Related schemes use the parameters β or B rather than U (see Trueblood et al. for a more complete discussion). Finally, we can find the relationship between the Debye–Waller factor and the anisotropic displacement parameter.
. (16)
From equations (7) and (14), the Debye–Waller factor contributes to the observed intensity of a diffraction experiment. And based on (16), we see that our anisotropic displacement parameter U is responsible for determining the DWF. Additionally, (15) shows that U may be directly related to the probability density function for a nuclear displacement from the mean position. As a result, it is possible to conduct a scattering experiment on a crystal, fit the resulting spectrum for the various atomic U values, and derive each atom's tendency for nuclear displacement from U.
Applications
Anisotropic displacement parameters are often useful for visualizing matter. From (15), we may define ellipsoids of constant probability for which , where is some constant. Such "vibration ellipsoids" have been used to illustrate crystal structures. Alternatively, mean square displacement surfaces along may be defined by . See the external links "Gallery of ray-traced ORTEP's", "2005 paper by Rowsell et al.", and "2009 paper by Korostelev and Noller" for more images. Anisotropic displacement parameters are also refined in programs (e.g. GSAS-II) to resolve scattering spectra during Rietveld refinement.
References
External links
2019 paper by Cristiano Malica and Dal Corso. Introduction to Debye–Waller factor and applications within Density Functional Theory - Temperature-dependent atomic B factor: an ab initio calculation
Gallery of ray-traced ORTEP's - University of Glasgow
2005 paper by Rowsell et al. depicting metal-organic framework thermal ellipsoids -
2009 paper by Korostelev and Noller depicting tRNA thermal ellipsoids - Analysis of Structural Dynamics in the Ribosome by TLS Crystallographic Refinement
Cruickshank's 1956 Acta Crystallogr. paper - The analysis of the anisotropic thermal motion of molecules in crystals
1996 report by Trueblood et al. - Atomic Displacement Parameter Nomenclature
Crystallography
Scattering
Condensed matter physics
Peter Debye | Debye–Waller factor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,489 | [
"Phases of matter",
"Materials science",
"Scattering",
"Crystallography",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"Matter"
] |
16,940,523 | https://en.wikipedia.org/wiki/Multiphase%20topology%20optimisation | The Multi Phase Topology Optimisation is a simulation technique based on the principle of the finite element method which is able to determine the optimal distribution of two or more different materials in combination under thermal and mechanical loads.
The objective of optimization is to minimize the component's elastic energy. Conventional topology optimisation methods which simulate adaptive bone mineralization have the disadvantage that there is a continuous change of mass by the growth process. However, MPTO keeps all initial material concentrations and uses methods adapted for molecular dynamics to find energy minimum. Applying MPTO to Mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. This allows the macro- and microstructure of a mechanical component in one step. This method uses the Rapid Prototyping techniques, 3D printing and selective laser sintering to produce very stiff, light weight components with graded porosities calculated by MPTO.
References
Finite element method
Structural analysis | Multiphase topology optimisation | [
"Engineering"
] | 206 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
16,942,737 | https://en.wikipedia.org/wiki/CAMP-dependent%20pathway | In the field of molecular biology, the cAMP-dependent pathway, also known as the adenylyl cyclase pathway, is a G protein-coupled receptor-triggered signaling cascade used in cell communication.
Discovery
cAMP was discovered by Earl Sutherland and Ted Rall in the mid-1950s. cAMP is considered a secondary messenger, along with Ca2+. Sutherland won the Nobel Prize in 1971 for his discovery of the mechanism of action of epinephrine in glycogenolysis, which requires cAMP as a secondary messenger.
Mechanism
G protein-coupled receptors (GPCRs) are a large family of integral membrane proteins that respond to a variety of extracellular stimuli. Each GPCR binds to and is activated by a specific ligand stimulus that ranges in size from small molecule catecholamines, lipids, or neurotransmitters to large protein hormones. When a GPCR is activated by its extracellular ligand, a conformational change is induced in the receptor that is transmitted to an attached intracellular heterotrimeric G protein complex. The Gs alpha subunit of the stimulated G protein complex exchanges GDP for GTP and is released from the complex.
In a cAMP-dependent pathway, the activated Gs alpha subunit binds to and activates an enzyme called adenylyl cyclase, which, in turn, catalyzes the conversion of ATP into cyclic adenosine monophosphate (cAMP).
Increases in concentration of the second messenger cAMP may lead to the activation of
cyclic nucleotide-gated ion channels
exchange proteins activated by cAMP (EPAC) such as RAPGEF3
popeye domain containing proteins (Popdc)
an enzyme called protein kinase A (PKA).
The PKA enzyme is also known as cAMP-dependent enzyme because it gets activated only if cAMP is present. Once PKA is activated, it phosphorylates a number of other proteins including:
enzymes that convert glycogen into glucose
enzymes that promote muscle contraction in the heart leading to an increase in heart rate
transcription factors, which regulate gene expression
AMPA receptors
Specificity of signaling between a GPCR and its ultimate molecular target through a cAMP-dependent pathway may be achieved through formation of a multiprotein complex that includes the GPCR, adenylyl cyclase, and the effector protein.
Importance
In humans, cAMP works by activating protein kinase A (PKA, cAMP-dependent protein kinase), one of the first few kinases discovered. PKA has four subunits: two catalytic and two regulatory. cAMP binds to the regulatory subunits, causing them to break apart from the catalytic subunits. The catalytic subunits then make their way into the nucleus to influence transcription. Further effects mainly depend on the cAMP-dependent protein kinase and vary with the type of cell.
The cAMP-dependent pathway is necessary for many living organisms and life processes. Many different cell responses are mediated by cAMP; these include increase in heart rate, cortisol secretion, and breakdown of glycogen and fat. cAMP is essential for the maintenance of memory in the brain, relaxation in the heart, and water absorption in the kidney.
This pathway can activate enzymes and regulate gene expression. The activation of preexisting enzymes is a much faster process, whereas regulation of gene expression occurs over a much longer timescale and can take hours. The cAMP pathway is studied through loss of function (inhibition) and gain of function (increase) of cAMP.
If cAMP-dependent pathway is not controlled, it can ultimately lead to hyper-proliferation, which may contribute to the development and/or progression of cancer.
Activation
Activated GPCRs cause a conformational change in the attached G protein complex, which results in the Gs alpha subunit's exchanging GDP for GTP and separation from the beta and gamma subunits. The Gs alpha subunit, in turn, activates adenylyl cyclase, which quickly converts ATP into cAMP. This leads to the activation of the cAMP-dependent pathway. This pathway can also be activated downstream by directly activating adenylyl cyclase or PKA.
Molecules that activate cAMP pathway include:
cholera toxin - increases cAMP levels
forskolin - a diterpene natural product that activates adenylyl cyclase
caffeine and theophylline inhibit cAMP phosphodiesterase, which degrades cAMP - thus enabling higher levels of cAMP than would otherwise be had.
bucladesine (dibutyryl cAMP, db cAMP) - also a phosphodiesterase inhibitor
pertussis toxin, which increases cAMP levels by locking Gi in its GDP-bound (inactive) form. This leads to an increase in adenylyl cyclase activity, thereby increasing cAMP levels, which can lead to an increase in insulin and therefore hypoglycemia
Deactivation
The Gs alpha subunit slowly catalyzes the hydrolysis of GTP to GDP, which in turn deactivates the Gs protein, shutting off the cAMP pathway. The pathway may also be deactivated downstream by directly inhibiting adenylyl cyclase or dephosphorylating the proteins phosphorylated by PKA.
Molecules that inhibit the cAMP pathway include:
cAMP phosphodiesterase converts cAMP into AMP by breaking the phosphodiester bond, in turn reducing the cAMP levels
Gi protein, which is a G protein that inhibits adenylyl cyclase, reducing cAMP levels.
References
EC 4.6.1
Cell signaling
Signal transduction
Cell biology
Neurochemistry | CAMP-dependent pathway | [
"Chemistry",
"Biology"
] | 1,136 | [
"Biochemistry",
"Neurochemistry",
"Cell biology",
"Signal transduction"
] |
16,944,901 | https://en.wikipedia.org/wiki/Spin-exchange | In quantum mechanics, spin-exchange is an interaction process between two particles mediated by an exchange interaction. It preserves total angular momentum of the system but may allow other aspects of the system to change. When two spin-polarized atoms in their ground state experience a spin-exchange collision, the total spin of the atoms is preserved yet the orientation of the individual spins may change. For example, if atoms and are oppositely polarized, a spin-exchange collision reverses the spins:
In alkali metals
In a typical vapor of alkali metal atoms, spin-exchange collisions are the dominant type of interaction between atoms. The collisions happen so rapidly that they only alter the state of the electron spins and do not significantly affect the nuclear spins. Thus, spin-exchange collisions between alkali metal atoms can change the hyperfine state of the atoms while preserving total angular momentum of the colliding pair. As a result, spin-exchange collisions cause decoherence in ensembles of polarized atoms precessing in the presence of a magnetic field.
The time between spin-exchange collisions for a vapor of alkali metal atoms is

T_SE = 1/(n σ_SE v̄)

where the spin-exchange cross section σ_SE for alkali metals such as K, Rb, and Cs is

σ_SE ≈ 2 × 10⁻¹⁴ cm²

and where n is the vapor density and v̄ is the average relative velocity given by the Maxwell–Boltzmann distribution:

v̄ = √(16RT/(πM))

where R is the ideal gas constant, T is the temperature, and M is the molar mass of the atoms.
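A rough numerical sketch of these formulas in Python; the vapor density and cell temperature below are assumed, illustrative values for a rubidium cell, not figures from the text:

```python
import math

R = 8.314          # ideal gas constant, J/(mol*K)
M = 0.0855         # molar mass of Rb, kg/mol
T = 400.0          # assumed cell temperature, K
n = 1e14           # assumed vapor density, atoms/cm^3
sigma_se = 2e-14   # spin-exchange cross section, cm^2

# Mean relative velocity for identical atoms (reduced mass M/2), converted to cm/s:
v_rel = math.sqrt(16.0 * R * T / (math.pi * M)) * 100.0

t_se = 1.0 / (n * sigma_se * v_rel)
print(f"v_rel ~ {v_rel:.3g} cm/s, T_SE ~ {t_se:.3g} s")  # ~1e-5 s at these conditions
```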
References
Quantum mechanics | Spin-exchange | [
"Physics"
] | 292 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
18,147,992 | https://en.wikipedia.org/wiki/Sommerfeld%20number | In the design of fluid bearings, the Sommerfeld number (S) is a dimensionless quantity used extensively in hydrodynamic lubrication analysis. The Sommerfeld number is very important in lubrication analysis because it contains all the variables normally specified by the designer.
The Sommerfeld number is named after Arnold Sommerfeld (1868–1951).
Definition
The Sommerfeld number is typically defined by the following equation:

S = (r/c)² μN/P
where:
S is the Sommerfeld Number or bearing characteristic number
r is the shaft radius
c is the radial clearance
μ is the absolute viscosity of the lubricant
N is the speed of the rotating shaft in rev/s
P is the load per unit of projected bearing area
The second part of the equation is seen to be the Hersey number. However, an alternative definition for S, based on angular velocity, is used in some texts:

S = (r/c)² μωLD/W
where:
is angular velocity of the shaft in rad/s.
W is the applied load
L is the bearing length
D is the bearing diameter
It is therefore necessary to check which definition is being used when referring to design data or textbooks, since the value of S will differ by a factor of 2π.
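A small Python sketch of the first definition and its factor-of-2π relationship to the angular-velocity convention; the bearing parameters below are assumed, illustrative values only:

```python
import math

def sommerfeld(r, c, mu, N, P):
    """S = (r/c)^2 * mu*N/P, with N in rev/s and P the load per projected area."""
    return (r / c) ** 2 * mu * N / P

# Assumed example journal bearing:
r, c = 0.025, 25e-6        # shaft radius and radial clearance, m
mu = 0.02                  # lubricant viscosity, Pa*s
N = 50.0                   # shaft speed, rev/s
P = 1.0e6                  # load per projected bearing area, Pa

S_rev = sommerfeld(r, c, mu, N, P)
S_omega = S_rev * 2.0 * math.pi   # angular-velocity convention, omega = 2*pi*N
print(S_rev, S_omega)             # the two definitions differ by exactly 2*pi
```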
Derivation
Petrov's law
Nikolai Pavlovich Petrov's method of lubrication analysis, which assumes a concentric shaft and bearing, was the first to explain the phenomenon of bearing friction. This method, which ultimately produces the equation known as Petrov's law (or Petroff's law), is useful because it defines groups of relevant dimensionless parameters, and predicts a fairly accurate coefficient of friction, even when the shaft is not concentric.
Considering a vertical shaft rotating inside a bearing, it can be assumed that the bearing is subjected to a negligible load, that the radial clearance space is completely filled with lubricant, and that leakage is negligible. The surface velocity of the shaft is U = 2πrN, where N is the rotational speed of the shaft in rev/s.
The shear stress in the lubricant can be represented as follows:

τ = μ (du/dy)

Assuming a constant rate of shear across the film,

τ = μU/c = 2πrμN/c

The torque required to shear the film is

T = τ(2πrL)(r) = 4π²r³LμN/c
If a small radial load W acts on the shaft and hence the bearing, the frictional drag force can be considered equal to the product fW, with the friction torque represented as

T = fWr = 2r²fLP

since the load is W = 2rLP, the pressure times the projected bearing area.
Where
W is the force acting on the bearing
P is the radial load per unit of projected bearing area (pressure)
f is the coefficient of friction
If the small radial load W is considered negligible, setting the two expressions for torque equal to one another and solving for the coefficient of friction yields

f = 2π² (μN/P)(r/c)

which is known as Petroff's law or the Petroff equation.
It provides a quick and simple means of obtaining reasonable estimates of coefficients of friction of lightly loaded bearings.
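Continuing the assumed example bearing from the Sommerfeld sketch above, Petroff's law gives a quick friction estimate in Python (values remain illustrative):

```python
import math

def petroff_friction(r, c, mu, N, P):
    """Petroff's law: f = 2*pi^2 * (mu*N/P) * (r/c)."""
    return 2.0 * math.pi ** 2 * (mu * N / P) * (r / c)

# Same assumed bearing as before:
f = petroff_friction(r=0.025, c=25e-6, mu=0.02, N=50.0, P=1.0e6)
print(f)   # ~0.02, a typical order of magnitude for a lightly loaded journal bearing
```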
Notes
References
External links
Sommerfeld number calculator
Fluid dynamics
Bearings (mechanical) | Sommerfeld number | [
"Chemistry",
"Engineering"
] | 590 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
18,149,255 | https://en.wikipedia.org/wiki/Depolarizer%20%28optics%29 | A or is an optical device used to scramble the polarization of light. An ideal depolarizer would output randomly polarized light whatever its input, but all practical depolarizers produce pseudo-random output polarization.
Optical systems are often sensitive to the polarization of light reaching them (for example grating-based spectrometers). Unwanted polarization of the input to such a system may cause errors in the system's output.
Types
Cornu depolarizer
The Cornu depolarizer was one of the earliest designs, named after its inventor Marie Alfred Cornu. It consists of a pair of 45° prisms of quartz crystal, optically contacted to form a cuboid. The fast axes are 90° apart and 45° from the sides of the depolarizer (see figure). Any ray entering the prism effectively passes through two wave plates. The thickness of these wave plates, and therefore their retardance, varies across the beam. The phase shift is given by

φ = (2π/λ) Δn (t₁ − t₂)

where Δn is the birefringence of the quartz and t₁, t₂ are the local thicknesses of the two prisms along the ray.
For an input beam of uniform polarization, the output polarization will be periodic in position across the beam. The phase shift is also dependent on wavelength due to dispersion.
The use of two prisms means that the output is essentially coaxial with the input. At the interface between the prisms refraction does take place, as the refractive indices are exchanged. There is therefore some separation of the components of the output beam.
This device is not commonly used today, but similar designs are commercially available.
Lyot depolarizer
The Lyot depolarizer is another early design. It was invented by Bernard Lyot. It consists of two wave plates with their fast axes 45° apart, with the second plate twice as thick as the first. The output is periodic as a function of wavelength and as a function of the wave-plates' thicknesses. Special considerations are needed when this depolarizer is to be used for a particular application, because the optimal wave-plate thicknesses depend on the signal wavelength and optical spectrum with which it is to be used. It is commercially available for broadband visible applications.
This device is especially attractive in fiber optics, where two pieces of correct length of polarization-maintaining optical fiber spliced together at a 45° angle are used instead of the wave-plates, thus no other components such as beam splitters are required.
Wedge depolarizer
Quartz-silica
The quartz-silica wedge depolarizer is a common commercial design and is similar to the Cornu depolarizer, however, the angle between the two components is much smaller (2° is typical) and only the first component is birefringent. The second component is made of fused silica, which has a very similar refractive index to quartz, but is not birefringent. The fast axis of the quartz element is generally at 45° to the wedge. The whole device is much more compact than a Cornu depolarizer (for the same aperture).
As with the Cornu depolarizer, there is some separation of the output as a function of polarization, as well as some beam deviation due to the imperfect match in refractive index between quartz and silica. The output is periodic across the depolarizer. Because the wedge angle is so much smaller than in a Cornu depolarizer the period is larger, often around . This depolarizer also has a preferred orientation because of its single defined fast axis. In commercial wedge depolarizers this is usually marked.
Quartz-quartz
Quartz-quartz wedge depolarizers are commercially available, though not common. They are similar to Cornu depolarizers, but with the small wedge angle of the silica-compensated wedge.
Other birefringent materials can be used in place of quartz in the above designs.
Wedge depolarizers exhibit some small beam deviation. This is true even if the faces of the optic are exactly parallel. Because each half of the optic is a wedge, and the two halves do not have exactly the same refractive index (for a particular polarization), the depolarizer is effectively very slightly wedged (optically).
Time-variable depolarizer
The Lyot depolarizer and similar devices are based on the fact that the retardations of optical waveplates or retarders depend on optical frequency or wavelength. They cause polarization mode dispersion which can be detrimental. Furthermore they cannot be used for (quasi-)monochromatic signals. For the latter, time-variable depolarizers are needed. These are composed of time-variable optical retarders. An effective way to realize time-variable depolarizers are rotating waveplates or equivalent optical devices.
A rotating halfwave plate produces polarization which is periodic in time, and therefore effectively scrambled for sufficiently slow responses. Its input polarization must be linear. Resulting output polarization is rotating linear polarization. Likewise, circular polarization can be depolarized with a rotating quarterwave plate. Output polarization is again linear. If a halfwave and a quarterwave plate are concatenated and rotate at different speeds, any input polarization is depolarized. If the waveplates are not perfect, more rotating waveplates can improve performance. Based on electrooptic rotating waveplates, such polarization-independent depolarizers are commercially available with depolarization intervals down to .
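The action of a rotating half-wave plate on linear polarization can be sketched with Jones calculus in Python/NumPy; this is a minimal illustration of the scrambling principle described above, not a model of any particular commercial device:

```python
import numpy as np

def halfwave_plate(theta: float) -> np.ndarray:
    """Jones matrix of a half-wave plate with fast axis at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]], dtype=complex)

jones_in = np.array([1.0, 0.0], dtype=complex)   # horizontal linear polarization

# Average the output Stokes parameters over one rotation of the plate:
angles = np.linspace(0.0, np.pi, 1000, endpoint=False)
s1 = s2 = 0.0
for th in angles:
    ex, ey = halfwave_plate(th) @ jones_in
    s1 += abs(ex) ** 2 - abs(ey) ** 2
    s2 += 2 * (ex * np.conj(ey)).real
print(s1 / len(angles), s2 / len(angles))  # both ~0: time-averaged depolarization
```

Only the time-averaged polarization vanishes; at any instant the light remains fully polarized, which is why such devices produce pseudo-random rather than truly random polarization.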
Other ways to produce depolarized light
In many applications it is possible to use a quarter-wave plate to produce circularly polarized light, but this is only possible for light of a limited range of wavelengths that is linearly polarized to start with. Other methods have been demonstrated, such as the use of Faraday rotators and liquid crystals. It is also possible to depolarize light using fiber optics. A relatively high degree of depolarization is also achieved by light passing through common semitransparent materials such as matte plastic or greased paper.
See also
Polarization scrambling
Polarizer
Optical prisms
References
External links
Polarization (waves)
Optical components | Depolarizer (optics) | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,266 | [
"Glass engineering and science",
"Optical components",
"Astrophysics",
"Polarization (waves)",
"Components"
] |
12,469,290 | https://en.wikipedia.org/wiki/ACIGA | The Australian Consortium for Interferometric Gravitational Astronomy (ACIGA) is a collaboration of Australian research institutions involved in the international gravitational wave research community.
The institutions associated with ACIGA are:
The Australian National University
University of Western Australia
University of Adelaide
Monash University
University of Melbourne
CSIRO optical technology group
Charles Sturt University
See also
AIGO
References
External links
ACIGA Home Page
The Centre for Gravitational Physics at the Australian National University
The Australian International Gravitational Research Centre at the University of Western Australia
Optics research group at the University of Adelaide
High Optical Power Test Facility located near Gingin, Western Australia
The School of Computing and Mathematics at Charles Sturt University
Astronomy in Australia
Gravitational-wave astronomy | ACIGA | [
"Physics",
"Astronomy"
] | 136 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
12,469,994 | https://en.wikipedia.org/wiki/Weisz%E2%80%93Prater%20criterion | The Weisz–Prater criterion is a method used to estimate the influence of pore diffusion on reaction rates in heterogeneous catalytic reactions. If the criterion is satisfied, pore diffusion limitations are negligible. The criterion is
Where is the reaction rate per volume of catalyst, is the catalyst particle radius, is the reactant concentration at the particle surface, and is the effective diffusivity. Diffusion is usually in the Knudsen regime when average pore radius is less than 100 nm.
For a given effectiveness factor, η, and reaction order, n, the quantity β is defined by the equation:
for small values of β this can be approximated using the binomial theorem:
Assuming with a reaction order gives value of equal to 0.1. Therefore, for many conditions, if then pore diffusion limitations can be excluded.
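A minimal Python sketch of evaluating the dimensionless group defined above; the kinetic and transport numbers are assumed, illustrative values, and the 0.3 screening threshold used here is a commonly quoted rule of thumb rather than a value from this article:

```python
def weisz_prater(rate_obs, r_p, c_s, d_eff):
    """N_WP = rate * R_p^2 / (C_s * D_eff), all quantities in consistent SI units."""
    return rate_obs * r_p ** 2 / (c_s * d_eff)

# Assumed example: observed rate 0.3 mol/(m^3 s), 1 mm particle radius,
# surface concentration 20 mol/m^3, effective diffusivity 1e-9 m^2/s.
n_wp = weisz_prater(rate_obs=0.3, r_p=1e-3, c_s=20.0, d_eff=1e-9)
print(n_wp, "-> significant pore diffusion limitation" if n_wp > 0.3
      else "-> negligible pore diffusion limitation")
```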
References
Scientific techniques
Laboratory techniques
Transport phenomena
Chemical reaction engineering | Weisz–Prater criterion | [
"Physics",
"Chemistry",
"Engineering"
] | 183 | [
"Transport phenomena",
"Physical phenomena",
"Chemical reaction engineering",
"Chemical engineering",
"nan"
] |
12,473,239 | https://en.wikipedia.org/wiki/Stochastic%20partial%20differential%20equation | Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations.
They have relevance to quantum field theory, statistical mechanics, and spatial modeling.
Examples
One of the most studied SPDEs is the stochastic heat equation, which may formally be written as

$$\partial_t u = \Delta u + \xi,$$

where Δ is the Laplacian and ξ denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation and the Schrödinger equation.
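As a rough illustration (not from the original article), the following sketch simulates the one-dimensional stochastic heat equation with an explicit finite-difference scheme and Euler–Maruyama time stepping; the sqrt(dt/dx) noise scaling is the standard discretization of space-time white noise, while the grid sizes, random seed, and Dirichlet boundary conditions are arbitrary choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid for u_t = u_xx + xi on [0, 1] with zero Dirichlet boundary conditions.
nx = 128
dx = 1.0 / nx
dt = 0.25 * dx**2          # explicit scheme needs dt <= dx^2 / 2 for stability
n_steps = 2000

u = np.zeros(nx + 1)
for _ in range(n_steps):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    # Discretized space-time white noise: N(0, dt/dx) per grid point per step.
    noise = rng.standard_normal(nx + 1) * np.sqrt(dt / dx)
    u = u + dt * lap + noise
    u[0] = u[-1] = 0.0      # enforce the boundary conditions

print(f"after t = {n_steps * dt:.4f}: max|u| = {np.abs(u).max():.3f}")
```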
Discussion
One difficulty is their lack of regularity. In one dimensional space, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions.
For linear equations, one can usually find a mild solution via semigroup techniques.
However, problems start to appear when considering non-linear equations. For example

$$\partial_t u = \Delta u + P(u) + \xi,$$

where P is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory. This leads to the need for some form of renormalization.
An early attempt to circumvent such problems for some specific equations was the so called da Prato–Debussche trick which involved studying such non-linear equations as perturbations of linear ones. However, this can only be used in very restrictive settings, as it depends on both the non-linear factor and on the regularity of the driving noise term. In recent years, the field has drastically expanded, and now there exists a large machinery to guarantee local existence for a variety of sub-critical SPDEs.
See also
Brownian surface
Kardar–Parisi–Zhang equation
Kushner equation
Malliavin calculus
Polynomial chaos
Wick product
Zakai equation
References
Further reading
External links
Stochastic differential equations
Partial differential equations | Stochastic partial differential equation | [
"Mathematics"
] | 450 | [
"Applied mathematics",
"Mathematical finance"
] |
414,962 | https://en.wikipedia.org/wiki/Stent | In medicine, a stent is a tube usually constructed of a metallic alloy or a polymer. It is inserted into the lumen (hollow space) of an anatomic vessel or duct to keep the passageway open.
Stenting refers to the placement of a stent. The word "stent" is also used as a verb to describe the placement of such a device, particularly when a disease such as atherosclerosis has pathologically narrowed a structure such as an artery.
A stent is different from a shunt. A shunt is a tube that connects two previously unconnected parts of the body to allow fluid to flow between them. Stents and shunts can be made of similar materials, but perform two different tasks.
There are various types of stents used for different medical purposes. Coronary stents are commonly used in coronary angioplasty, with drug-eluting stents being the most common type. Vascular stents are used for peripheral and cerebrovascular disease, while ureteral stents ensure the patency of a ureter.
Prostatic stents can be temporary or permanent and are used to treat conditions like benign prostatic hyperplasia. Colon and esophageal stents are palliative treatments for advanced colon and esophageal cancer. Pancreatic and biliary stents provide drainage from the gallbladder, pancreas, and bile ducts to the duodenum in conditions such as obstructing gallstones. There are also different types of bare-metal, drug-eluting, and bioresorbable stents available based on their properties.
The term "stent" originates from Charles Stent, an English dentist who made advances in denture-making techniques in the 19th century. The use of coronary stents began in 1986 by Jacques Puel and Ulrich Sigwart to prevent vessel closure during coronary surgery.
Stent types
By destination organ
Coronary stent
Coronary stents are placed during a coronary angioplasty. The most common use for coronary stents is in the coronary arteries, into which a bare-metal stent, a drug-eluting stent, a bioabsorbable stent, a dual-therapy stent (combination of both drug and bioengineered stent), or occasionally a covered stent is inserted.
The majority of coronary stents used today are drug-eluting stents, which release medication to prevent complications such as blood clot formation and restenosis (re-narrowing). Stenting is performed through a procedure called percutaneous coronary intervention (PCI), where the cardiologist uses angiography and intravascular ultrasound to assess the blockage in the artery and determine the appropriate size and type of stent. The procedure is typically done in a catheterization clinic, and patients may need to stay overnight for observation. While stenting has been shown to reduce chest pain (angina) and improve survival rates after a heart attack, its effectiveness in stable angina patients has been debated.
Studies have found that most heart attacks occur due to plaque rupture rather than an obstructed artery that would benefit from a stent. Statins, along with PCI/stenting and anticoagulant therapies, are considered part of a broader treatment strategy. Some cardiologists believe that coronary stents are overused, but there is evidence of under-use in certain patient groups like the elderly. Ongoing research continues to explore new types of stents with biocompatible coatings or absorbable materials.
Vascular stent
Vascular stents are a common treatment for advanced peripheral and cerebrovascular disease. Common sites treated with vascular stents include the carotid, iliac, and femoral arteries. Because of the external compression and mechanical forces subjected to these locations, flexible stent materials such as nitinol are used in many peripheral stents.
Vascular stents made of metals can lead to thrombosis at the site of treatment or to inflammation scarring. Drug-eluting stents with pharmacologic agents or as drug delivery vehicles have been developed as an alternative to decrease the chances of restenosis.
Because vascular stents are designed to expand inside a blocked artery to keep it open, allowing blood to flow freely, their mechanical properties are crucial to their function: they need to be highly elastic to allow for expansion and contraction within the blood vessel; they need high strength and fatigue resistance to withstand the constant physiological load of the arteries; and they should have good biocompatibility, to reduce the risk of thrombosis and vascular restenosis and to minimize the body's rejection of the implant.
Vascular stents are commonly used in angioplasty, a surgical procedure that opens blocked arteries and places a stent to keep the artery open. This is a common treatment for heart attacks and is also used in the prevention and treatment of strokes. Over 2 million people receive a stent each year for coronary artery disease alone. Vascular stents can also be used to prevent the rupture of aneurysms in the brain, aorta, or other blood vessels.
Ureteric stent
Ureteral stents are used to ensure the patency of a ureter, which may be compromised, for example, by a kidney stone. This method is sometimes used as a temporary measure to prevent damage to a kidney caused by a kidney stone until a procedure to remove the stone can be performed.
A ureteral stent is typically inserted using a cystoscope, and one or both ends of the stent may be coiled to prevent movement. Ureteral stents are used for various purposes, such as temporary measures to prevent damage to a blocked kidney until a stone removal procedure can be performed, providing drainage for compressed ureters caused by tumors, and preventing spasms and collapse of the ureter after trauma during procedures like stone removal. The thread attached to some stents may cause irritation but allows for easy removal by pulling gently.
Stents without threads require cystoscopy for removal. Recent developments have introduced magnetic retrieval systems that eliminate the need for invasive procedures like cystoscopy when removing the stent. The use of magnets enables simple extraction without anesthesia and can be done by primary care physicians or nurses rather than urologists. This method has shown high success rates across different patient groups including adults, children, and kidney transplant patients while reducing costs associated with operating room procedures.
Prostatic stent
Prostatic stents are placed from the bladder through the prostatic and penile urethra to allow drainage of the bladder through the penis. This is sometimes required in benign prostatic hyperplasia.
A prostatic stent is used to keep the male urethra open and allow for the passage of urine in cases of prostatic obstruction and lower urinary tract symptoms (LUTS). There are two types of prostatic stents: temporary and permanent. Permanent stents, typically made of metal coils, are inserted into the urethra to apply constant gentle pressure and hold open sections that obstruct urine flow. They can be placed under anesthesia as an outpatient procedure but have disadvantages such as increased urination, limited incontinence, potential displacement or infection, and limitations on subsequent endoscopic surgical options. On the other hand, temporary stents can be easily inserted with topical anesthesia similar to a Foley catheter, and allow patients to retain volitional voiding. However, they may cause discomfort or increased urinary frequency.
In the US, there is one temporary prostatic stent that has received FDA approval called The Spanner. It maintains urine flow while allowing natural voluntary urination. Research on permanent stents often focuses on metal coil designs that expand radially to hold open obstructed areas of the urethra.
These permanent stents are used for conditions like benign prostatic hyperplasia (BPH), recurrent bulbar urethral stricture (RBUS), or detrusor external sphincter dyssynergia (DESD). The Urolume is currently the only FDA-approved permanent prostatic stent.
Colon and Esophageal stents
Colon and esophageal stents are a palliative treatment for advanced colon and esophageal cancer.
A colon stent is typically made of flexible metal mesh that can expand and hold open the blocked area, allowing for the passage of stool. Colon stents are used primarily as a palliative treatment for patients with advanced colorectal cancer who are not candidates for surgery. They help relieve symptoms such as abdominal pain, constipation, and bowel obstruction caused by tumors or strictures in the colon.
The placement of a colon stent involves endoscopic techniques similar to esophageal stenting. A thin tube called an endoscope is inserted into the rectum and guided through the colon to locate the blockage. Using fluoroscopy or endoscopic guidance, a guidewire is passed through the narrowed area and then removed after positioning it properly. The stent is then delivered over the guidewire and expanded to keep open the obstructed section of the colon. Complications associated with colon stents include perforation of the intestinal wall, migration or dislodgment of the stent, bleeding, infection at insertion site, or tissue overgrowth around it.
Colon stenting provides several benefits, including prompt relief from bowel obstruction symptoms without invasive surgery in many cases. It allows for faster recovery than surgical intervention while providing palliative care for patients with advanced colorectal cancer, improving quality of life and enabling better nutritional intake. However, complications such as stent migration or obstruction may require additional procedures or interventions.
Pancreatic and biliary stents
Pancreatic and biliary stents provide pancreatic and bile drainage from the gallbladder, pancreas, and bile ducts to the duodenum in conditions such as ascending cholangitis due to obstructing gallstones.
Pancreatic and biliary stents can also be used to treat biliary/pancreatic leaks or to prevent post-ERCP pancreatitis.
In the case of gallstone pancreatitis, a gallstone travels from the gallbladder and blocks the opening to the first part of the small intestine (duodenum). This causes a backup of fluid that can travel up both the bile duct and the pancreatic duct. Gallbladder stones can lead to obstruction of the biliary tree via which gallbladder and pancreas enzymes are secreted into the duodenum, causing emergency events such as acute cholecystitis or acute pancreatitis.
In conditions such as ascending cholangitis due to obstructing gallstones, these stents play a crucial role. They help in maintaining the flow of bile and pancreatic juices from the gallbladder, pancreas, and bile ducts to the duodenum. Biliary stents are often used during endoscopic retrograde cholangiopancreatography (ERCP) to treat blockages that narrow the bile or pancreatic ducts. In cases of malignant biliary obstruction, endoscopic stent placement is one of the treatment options to relieve the obstruction. Biliary drainage is considered effective, particularly in bile duct conditions that are diagnosed and treated early.
Glaucoma drainage stent
Glaucoma drainage stents are recent developments and have been recently approved in some countries. They are used to reduce intraocular pressure by providing a drainage channel.
By properties or function
Bare-metal stent
A bare-metal stent is a stent made of uncoated metal mesh, without a drug coating or fabric covering.
Stent graft
A stent graft or covered stent is a type of vascular stent with a fabric coating that creates a contained tube but is expandable like a bare-metal stent. Covered stents are used in endovascular surgical procedures such as endovascular aneurysm repair. Stent grafts are also used to treat stenoses in vascular grafts and fistulas used for hemodialysis.
Bioresorbable stent
A bioresorbable stent is a tube-like device made from a material that can release a drug to prevent scar tissue growth. It is used to open and widen clogged heart arteries and then dissolves or is absorbed by the body. Unlike traditional metal stents, bioresorbable stents can restore normal vessel function, avoid long-term complications, and enable natural reconstruction of the arterial wall.
Metal-based bioresorbable scaffolds include iron, magnesium, zinc, and their alloys. Magnesium-based scaffolds have been approved for use in several countries around the world and show promising clinical results in delivering against the drawbacks of permanent metal stents. However, attention has been given to reducing the rate of magnesium corrosion through alloying and coating techniques.
Clinical research shows that resorbable scaffolds offer comparable efficacy and safety profiles to traditional drug-eluting stents (DES). The Magmaris resorbable magnesium scaffold has reported favorable safety outcomes similar to thin-strutted DES in patient populations. The Absorb naturally dissolving stent has also shown low rates of major adverse cardiac events when compared to DES. Imaging studies demonstrate that these naturally dissolving stents begin to dissolve between six months to two years after placement in the artery.
Drug-eluting stent
Drug-eluting stents (DES) are specialized medical devices used to treat coronary artery disease and peripheral artery disease. They release a drug that inhibits cellular growth into the blocked or narrowed arteries, reducing the risk of blockages. DES are commonly placed using percutaneous coronary intervention (PCI), a minimally invasive procedure performed via catheter. These stents have shown clear advantages over older bare-metal stents, improving patient outcomes and quality of life for cardiac patients. With over 90% of stents used in PCI procedures being drug-eluting as of 2023, DES have become the standard choice for interventional cardiologists.
DES gradually release drugs that prevent restenosis and thrombosis within the treated arteries, addressing common complications associated with previous treatments. While risks such as clot formation and bleeding exist, studies have demonstrated superior efficacy compared to bare-metal stents in reducing major adverse cardiac events like heart attacks and repeat revascularization procedures. Long-term outcomes are still being studied due to their relatively recent introduction; however, DES have revolutionized the treatment of coronary artery disease by significantly improving patient outcomes and enhancing their quality of life.
Etymology
The currently accepted origin of the word stent is that it derives from the name of an English dentist, Charles Thomas Stent (1807–1885), notable for his advances in the field of denture-making. He was born in Brighton, England, on October 17, 1807, was a dentist in London, and is most famous for improving and modifying the denture base of the gutta-percha, creating the stent's compounding that made it practical as a material for dental impressions.
Others attribute the noun stent to Jan F. Esser, a Dutch plastic surgeon who in 1916 used the word to describe a dental impression compound invented in 1856 by Charles Stent, whom Esser employed to craft a form for facial reconstruction. The full account is described in the Journal of the History of Dentistry. According to the author, from the use of Stent's compound as a support for facial tissues evolved the use of a stent to hold open various body structures.
The verb form "stenting" was used for centuries to describe the process of stiffening garments (a usage long obsolete, per the Oxford English Dictionary), and some believe this to be the origin. According to the Merriam-Webster Third New International Dictionary, the noun evolved from the Middle English verb stenten, shortened from extenten 'to stretch', which in turn came from Latin extentus, the past participle of extendere 'to stretch out'.
The first (self-expanding) "stents" used in medical practice in 1986 by Ulrich Sigwart in Lausanne were initially called "Wallstents" after their inventor, Hans Wallstén.
Julio Palmaz et al. created a balloon-expandable stent that is currently used.
History
The first use of a coronary stent is typically attributed to Jacques Puel and Ulrich Sigwart, who implanted a stent into a patient in Toulouse, France, in 1986. That stent was used as a scaffold to prevent a vessel from closing and to avoid restenosis in coronary surgery—a condition where scar tissue grows within the stent and interferes with vascular flow. Shortly thereafter, in 1987, Julio Palmaz (known for patenting a balloon-expandable stent) and Richard Schatz implanted their similar stent into a patient in Germany.
Though several doctors have been credited with the creation of the stent, the first FDA-approved stent in the U.S. was created by Richard Schatz and coworkers. Named the Palmaz-Schatz (Johnson & Johnson), it was developed in 1987.
To further reduce the incidence of restenosis, the drug-eluting stent was introduced in 2003. Research has led to general stent design changes and improvements since that time. Bioresorbable scaffolds have also entered the market, though a large-scale clinical trial showed higher acute risks compared to drug-eluting stents. As a result, the FDA issued an official warning for their use in 2013, and research on the design and performance optimisation of stents is ongoing.
See also
References
External links
Coronary Stent
Drug-Eluting Stents — Angioplasty.Org
Cardiovascular and Interventional Radiological Society of Europe
The Cardiovascular Forum
Stent for Life Initiative
Cobalt Chromium Rapamycin Eluting Coronary Stent System
Implants (medicine)
Interventional radiology
Medical devices | Stent | [
"Biology"
] | 3,777 | [
"Medical devices",
"Medical technology"
] |
415,016 | https://en.wikipedia.org/wiki/Blood%20substitute | A blood substitute (also called artificial blood or blood surrogate) is a substance used to mimic and fulfill some functions of biological blood. It aims to provide an alternative to blood transfusion, which is transferring blood or blood-based products from one person into another. Thus far, there are no well-accepted oxygen-carrying blood substitutes, which is the typical objective of a red blood cell transfusion; however, there are widely available non-blood volume expanders for cases where only volume restoration is required. These are helping doctors and surgeons avoid the risks of disease transmission and immune suppression, address the chronic blood donor shortage, and address the concerns of Jehovah's Witnesses and others who have religious objections to receiving transfused blood.
The main categories of "oxygen-carrying" blood substitutes being pursued are hemoglobin-based oxygen carriers (HBOC) and perfluorocarbon emulsions. Oxygen therapeutics are in clinical trials in the U.S. and European Union, and Hemopure is available in South Africa.
History
After William Harvey discovered blood pathways in 1616, many people tried to use fluids such as beer, urine, milk, and non-human animal blood as blood substitutes. Sir Christopher Wren suggested wine and opium as blood substitutes.
At the beginning of the 20th century, the development of modern transfusion medicine initiated through the work of Landsteiner and co-authors opened the possibility to understanding the general principle of blood group serology. Simultaneously, significant progress was made in the fields of heart and circulation physiology as well as in the understanding of the mechanism of oxygen transport and tissue oxygenation.
Restrictions in applied transfusion medicine, especially in disaster situations such as World War II, laid the grounds for accelerated research in the field of blood substitutes. Early attempts and optimism in developing blood substitutes were very quickly confronted with significant side effects, which could not be promptly eliminated due to the level of knowledge and technology available at that time. The emergence of HIV in the 1980s renewed impetus for development of infection-safe blood substitutes. Public concern about the safety of the blood supply was raised further by mad cow disease. The continuous decline of blood donation combined with the increased demand for blood transfusion (increased ageing of population, increased incidence of invasive diagnostic, chemotherapy and extensive surgical interventions, terror attacks, international military conflicts) and positive estimation of investors in biotechnology branch made for a positive environment for further development of blood substitutes.
Efforts to develop blood substitutes have been driven by a desire to replace blood transfusion in emergency situations, in places where infectious disease is endemic and the risk of contaminated blood products is high, where refrigeration to preserve blood may be lacking, and where it might not be possible or convenient to find blood type matches.
In 2023, DARPA announced funding for twelve universities and labs for synthetic blood research. Human trials would be expected to happen between 2028 and 2030.
Approaches
Efforts have focused on molecules that can carry oxygen, and most work has focused on recombinant hemoglobin, which normally carries oxygen, and perfluorocarbons (PFC), chemical compounds which can carry and release oxygen.
The first approved oxygen-carrying blood substitute was a perfluorocarbon-based product called Fluosol-DA-20, manufactured by Green Cross of Japan. It was approved by the Food and Drug Administration (FDA) in 1989. Because of limited success, complexity of use and side effects, it was withdrawn in 1994. However, Fluosol-DA remains the only oxygen therapeutic ever fully approved by the FDA. As of 2017 no hemoglobin-based product had been approved.
Perfluorocarbon based
Perfluorochemicals are not water soluble and will not mix with blood, therefore emulsions must be made by dispersing small drops of PFC in water. This liquid is then mixed with antibiotics, vitamins, nutrients and salts, producing a mixture that contains about 80 different components, and performs many of the vital functions of natural blood. PFC particles are about 1/40 the size of the diameter of a red blood cell (RBC). This small size can enable PFC particles to traverse capillaries through which no RBCs are flowing. In theory this can benefit damaged, blood-starved tissue, which conventional red cells cannot reach. PFC solutions can carry oxygen so well that mammals, including humans, can survive breathing liquid PFC solution, called liquid breathing.
Perfluorocarbon-based blood substitutes are completely man-made; this provides advantages over blood substitutes that rely on modified hemoglobin, such as unlimited manufacturing capabilities, ability to be heat-sterilized, and PFCs' efficient oxygen delivery and carbon dioxide removal. PFCs in solution act as an intravascular oxygen carrier to temporarily augment oxygen delivery to tissues. PFCs are removed from the bloodstream within 48 hours by the body's normal clearance procedure for particles in the blood – exhalation. PFC particles in solution can carry several times more oxygen per cubic centimeter (cc) than blood, while being 40 to 50 times smaller than hemoglobin.
Fluosol was made mostly of perfluorodecalin or perfluorotributylamine suspended in an albumin emulsion. It was developed in Japan and first tested in the United States in November 1979. In order to "load" sufficient amounts of oxygen into it, people who had been given it had to breathe pure oxygen by mask or in a hyperbaric chamber. It was approved by the FDA in 1989, and was approved in eight other countries. Its use was associated with a reduction in ischemic complications and with an increase in pulmonary edema and congestive heart failure. Due to the difficulty of storing and handling the emulsion (frozen storage and rewarming), Fluosol's popularity declined and its production ended in 1994.
Oxygent was a second-generation, lecithin-stabilized emulsion of a PFC that was under development by Alliance Pharmaceuticals. In 2002 a Phase III study was halted early due to an increase in the incidence of strokes in the study arm.
Haemoglobin based
Haemoglobin is the main component of red blood cells, comprising about 33% of the cell mass. Haemoglobin-based products are called haemoglobin-based oxygen carriers (HBOCs).
Unmodified cell-free haemoglobin is not useful as a blood substitute because its oxygen affinity is too high for effective tissue oxygenation, its half-life within the intravascular space is too short to be clinically useful, it has a tendency to dissociate into dimers with resultant kidney damage and toxicity, and free haemoglobin tends to take up nitric oxide, causing vasoconstriction.
Efforts to overcome this toxicity have included making genetically engineered versions, cross-linking, polymerization, and encapsulation.
HemAssist, a diaspirin cross-linked haemoglobin (DCLHb) was developed by Baxter Healthcare; it was the most widely studied of the haemoglobin-based blood substitutes, used in more than a dozen animal and clinical studies. It reached Phase III clinical trials, in which it failed due to increased mortality in the trial arm, mostly due to severe vasoconstriction complications. The results were published in 1999.
Hemolink (Hemosol Inc., Mississauga, Canada) was a haemoglobin solution that contained o-raffinose cross-linked, polymerised human haemoglobin. Hemosol struggled after Phase II trials were halted in 2003 on safety concerns and declared bankruptcy in 2005.
Hemopure was developed by Biopure Corp and was a chemically stabilized, cross-linked bovine (cow) haemoglobin in a salt solution intended for human use; the company developed the same product under the trade name Oxyglobin for veterinary use in dogs. Oxyglobin was approved in the US and Europe and was introduced to veterinary clinics and hospitals in March 1998. Hemopure was approved in South Africa and Russia. Biopure filed for bankruptcy protection in 2009. Its assets were subsequently purchased by HbO2 Therapeutics in 2014.
PolyHeme was developed over 20 years by Northfield Laboratories and began as a military project following the Vietnam War. It is human haemoglobin, extracted from red blood cells, then polymerized, then incorporated into an electrolyte solution. In April 2009, the FDA rejected Northfield's Biologic License Application and in June 2009, Northfield filed for bankruptcy.
Dextran-Haemoglobin was developed by Dextro-Sang Corp as a veterinary product, and was a conjugate of the polymer dextran with human haemoglobin.
Hemotech was developed by HemoBiotech and was a chemically modified haemoglobin.
Somatogen developed a genetically engineered and crosslinked tetramer it called Optro. It failed in a phase II trial and development was halted.
A pyridoxylated Hb conjugated with polyoxyethylene was created by scientists at Ajinomoto and eventually developed by Apex Biosciences, a subsidiary of Curacyte AG; it was called "PHP" and failed in a Phase III trial published in 2014, due to increased mortality in the control arm, which led to Curacyte shutting down.
Similarly, Hemospan was developed by Sangart, and was a pegylated haemoglobin provided in a powdered form. While early trials were promising Sangart ran out of funding and closed down.
Stem cells
Stem cells offer a possible means of producing transfusable blood. A study performed by Giarratana et al. describes a large-scale ex-vivo production of mature human blood cells using hematopoietic stem cells. The cultured cells possessed the same haemoglobin content and morphology as native red blood cells. The authors contend that the cells had a near-normal lifespan, when compared to natural red blood cells.
In 2010, scientists from the experimental arm of the United States Department of Defense began creating artificial blood for use in remote areas and to transfuse blood to wounded soldiers more quickly. The blood is made from hematopoietic stem cells removed from the umbilical cord between human mother and newborn using a method called blood pharming. Pharming has been used in the past on animals and plants to create medical substances in large quantities. Each cord can produce approximately 20 units of blood. The blood is being produced for the Defense Advanced Research Projects Agency by Arteriocyte. The Food and Drug Administration has examined and approved the safety of this blood, produced from previously submitted O-negative blood. Using this particular artificial blood will reduce the costs per unit of blood from $5,000 to $1,000 or less. Being derived from O-negative blood, it would also be compatible with all common blood types.
See also
Artificial Cells, Blood Substitutes, and Biotechnology
Blood plasma substitute (disambiguation)
Blood transfusion
Bloodless surgery
Erythromer
Induced blood stem cells
Respirocyte
Theatrical blood
Vaska's complex: carries oxygen and hydrogen
References
External links
How Artificial Blood Works at HowStuffWorks
Synthetic biology
Transfusion medicine | Blood substitute | [
"Engineering",
"Biology"
] | 2,345 | [
"Synthetic biology",
"Biological engineering",
"Molecular genetics",
"Bioinformatics"
] |
415,153 | https://en.wikipedia.org/wiki/Major%20second | In Western music theory, a major second (sometimes also called whole tone or a whole step) is a second spanning two semitones (). A second is a musical interval encompassing two adjacent staff positions (see Interval number for more details). For example, the interval from C to D is a major second, as the note D lies two semitones above C, and the two notes are notated on adjacent staff positions. Diminished, minor and augmented seconds are notated on adjacent staff positions as well, but consist of a different number of semitones (zero, one, and three).
The major second is the interval that occurs between the first and second degrees of a major scale, the tonic and the supertonic. On a musical keyboard, a major second is the interval between two keys separated by one key, counting white and black keys alike. On a guitar string, it is the interval separated by two frets. In moveable-do solfège, it is the interval between do and re. It is considered a melodic step, as opposed to larger intervals called skips.
Intervals composed of two semitones, such as the major second and the diminished third, are also called tones, whole tones, or whole steps.
In just intonation, major seconds can occur in at least two different frequency ratios:
9:8 (about 203.9 cents) and 10:9 (about 182.4 cents). The larger (9:8) is called the major tone or greater tone; the smaller (10:9) is called the minor tone or lesser tone. Their size differs by exactly one syntonic comma (81:80, or about 21.5 cents).
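The cent values above can be checked directly from the definition of the cent; a minimal sketch (the helper function name is ours):

```python
from math import log2

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * log2(ratio)

print(f"major tone 9:8  = {cents(9/8):.1f} cents")   # ~203.9
print(f"minor tone 10:9 = {cents(10/9):.1f} cents")  # ~182.4
# Their quotient is the syntonic comma, 81:80.
print(f"difference      = {cents((9/8) / (10/9)):.1f} cents "
      f"(syntonic comma 81:80 = {cents(81/80):.1f})")
```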
Some equal temperaments, such as 15-ET and 22-ET, also distinguish between a greater and a lesser tone.
The major second was historically considered one of the most dissonant intervals of the diatonic scale, although much 20th-century music saw it reimagined as a consonance. It is common in many different musical systems, including Arabic music, Turkish music and music of the Balkans, among others. It occurs in both diatonic and pentatonic scales.
Audio example: middle C, followed by D (a tone 200 cents sharper than C), and then the two tones sounded together.
Major and minor tones
In tuning systems using just intonation, such as 5-limit tuning, in which major seconds occur in two different sizes, the wider of them is called a major tone or greater tone, and the narrower a minor tone or lesser tone. The difference in size between a major tone and a minor tone is equal to one syntonic comma (about 21.51 cents).
The major tone is the 9:8 interval, and it is an approximation thereof in other tuning systems, while the minor tone is the 10:9 ratio. The major tone may be derived from the harmonic series as the interval between the eighth and ninth harmonics. The minor tone may be derived from the harmonic series as the interval between the ninth and tenth harmonics. The 10:9 minor tone arises in the C major scale between D & E and G & A, and is "a sharper dissonance" than 9:8. The 9:8 major tone arises in the C major scale between C & D, F & G, and A & B. This 9:8 interval was named epogdoon (meaning 'one eighth in addition') by the Pythagoreans.
Notice that in these tuning systems, a third kind of whole tone, even wider than the major tone, exists. This interval of two semitones, with ratio 256:225, is simply called the diminished third.
Some equal temperaments also produce major seconds of two different sizes, called greater and lesser tones (or major and minor tones). For instance, this is true for 15-ET, 22-ET, 34-ET, 41-ET, 53-ET, and 72-ET.
Conversely, in twelve-tone equal temperament, Pythagorean tuning, and meantone temperament (including 19-ET and 31-ET) all major seconds have the same size, so there cannot be a distinction between a greater and a lesser tone.
In any system where there is only one size of major second, the terms greater and lesser tone (or major and minor tone) are rarely used with a different meaning. Namely, they are used to indicate the two distinct kinds of whole tone, more commonly and more appropriately called major second (M2) and diminished third (d3). Similarly, major semitones and minor semitones are more often and more appropriately referred to as minor seconds (m2) and augmented unisons (A1), or diatonic and chromatic semitones.
Unlike almost all uses of the terms major and minor, these intervals span the same number of semitones. They both span 2 semitones, while, for example, a major third (4 semitones) and minor third (3 semitones) differ by one semitone. Thus, to avoid ambiguity, it is preferable to call them greater tone and lesser tone (see also greater and lesser diesis).
Two major tones equal a ditone.
Epogdoon
In Pythagorean music theory, the epogdoon is the interval with the ratio 9 to 8. The word is composed of the prefix epi- meaning "on top of" and ogdoon meaning "one eighth"; so it means "one eighth in addition". For example, the natural numbers 8 and 9 stand in this relation (9:8).
According to Plutarch, the Pythagoreans hated the number 17 because it separates the 16 from its epogdoon, 18.
"[Epogdoos] is the 9:8 ratio that corresponds to the tone, [hêmiolios] is the 3:2 ratio that is associated with the musical fifth, and [epitritos] is the 4:3 ratio associated with the musical fourth. It is common to translate epogdoos as 'tone' [major second]."
Further reading
Barker, Andrew (2007). The Science of Harmonics in Classical Greece. Cambridge University Press.
Plutarch (2005). Moralia. Translated by Frank Cole Babbitt. Kessinger Publishing.
See also
Diminished third
List of meantone intervals
Minor second
Pythagorean interval
Whole tone scale
References
Major intervals
Seconds (music)
Units of level | Major second | [
"Physics",
"Mathematics"
] | 1,341 | [
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
415,167 | https://en.wikipedia.org/wiki/Semitone | A semitone, also called a minor second, half step, or a half tone, is the smallest musical interval commonly used in Western tonal music, and it is considered the most dissonant when sounded harmonically.
It is defined as the interval between two adjacent notes in a 12-tone scale (or half of a whole step), visually seen on a keyboard as the distance between two keys that are adjacent to each other. For example, C is adjacent to C♯; the interval between them is a semitone.
In a 12-note approximately equally divided scale, any interval can be defined in terms of an appropriate number of semitones (e.g. a whole tone or major second is 2 semitones wide, a major third 4 semitones, and a perfect fifth 7 semitones).
In music theory, a distinction is made between a diatonic semitone, or minor second (an interval encompassing two different staff positions, e.g. from C to D) and a chromatic semitone or augmented unison (an interval between two notes at the same staff position, e.g. from C to C). These are enharmonically equivalent if and only if twelve-tone equal temperament is used; for example, they are not the same thing in meantone temperament, where the diatonic semitone is distinguished from and larger than the chromatic semitone (augmented unison), or in Pythagorean tuning, where the diatonic semitone is smaller instead. See for more details about this terminology.
In twelve-tone equal temperament all semitones are equal in size (100 cents). In other tuning systems, "semitone" refers to a family of intervals that may vary both in size and name. In Pythagorean tuning, seven semitones out of twelve are diatonic, with ratio 256:243 or 90.2 cents (Pythagorean limma), and the other five are chromatic, with ratio 2187:2048 or 113.7 cents (Pythagorean apotome); they differ by the Pythagorean comma of ratio 531441:524288 or 23.5 cents. In quarter-comma meantone, seven of them are diatonic, and 117.1 cents wide, while the other five are chromatic, and 76.0 cents wide; they differ by the lesser diesis of ratio 128:125 or 41.1 cents. 12-tone scales tuned in just intonation typically define three or four kinds of semitones. For instance, Asymmetric five-limit tuning yields chromatic semitones with ratios 25:24 (70.7 cents) and 135:128 (92.2 cents), and diatonic semitones with ratios 16:15 (111.7 cents) and 27:25 (133.2 cents). For further details, see below.
The condition of having semitones is called hemitonia; that of having no semitones is anhemitonia. A musical scale or chord containing semitones is called hemitonic; one without semitones is anhemitonic.
Minor second
The minor second occurs in the major scale, between the third and fourth degree, (mi (E) and fa (F) in C major), and between the seventh and eighth degree (ti (B) and do (C) in C major). It is also called the diatonic semitone because it occurs between steps in the diatonic scale. The minor second is abbreviated m2 (or −2). Its inversion is the major seventh (M7 or Ma7).
Audio example: middle C, followed by D♭ (a tone 100 cents sharper than C), and then the two tones sounded together.
Melodically, this interval is very frequently used, and is of particular importance in cadences. In the perfect and deceptive cadences it appears as a resolution of the leading-tone to the tonic. In the plagal cadence, it appears as the falling of the subdominant to the mediant. It also occurs in many forms of the imperfect cadence, wherever the tonic falls to the leading-tone.
Harmonically, the interval usually occurs as some form of dissonance or a nonchord tone that is not part of the functional harmony. It may also appear in inversions of a major seventh chord, and in many added tone chords.
In unusual situations, the minor second can add a great deal of character to the music. For instance, Frédéric Chopin's Étude Op. 25, No. 5 opens with a melody accompanied by a line that plays fleeting minor seconds. These are used to humorous and whimsical effect, which contrasts with its more lyrical middle section. This eccentric dissonance has earned the piece its nickname: the "wrong note" étude. This kind of usage of the minor second appears in many other works of the Romantic period, such as Modest Mussorgsky's Ballet of the Unhatched Chicks. More recently, the music to the movie Jaws exemplifies the minor second.
In other temperaments
In just intonation a 16:15 minor second arises in the C major scale between B & C and E & F, and is "the sharpest dissonance found in the [major] scale."
Augmented unison
The augmented unison, the interval produced by the augmentation, or widening by one half step, of the perfect unison, does not occur between diatonic scale steps, but instead between a scale step and a chromatic alteration of the same step. It is also called a chromatic semitone. The augmented unison is abbreviated A1, or aug 1. Its inversion is the diminished octave (d8, or dim 8). The augmented unison is also the inversion of the augmented octave, because the interval of the diminished unison does not exist. This is because a unison is always made larger when one note of the interval is changed with an accidental.
Melodically, an augmented unison very frequently occurs when proceeding to a chromatic chord, such as a secondary dominant, a diminished seventh chord, or an augmented sixth chord. Its use is also often the consequence of a melody proceeding in semitones, regardless of harmonic underpinning, e.g. D, D♯, E, F, F♯. (Restricting the notation to only minor seconds is impractical, as the same example would have a rapidly increasing number of accidentals, written enharmonically as D, E♭, F♭, G♭♭, A♭♭♭).
Harmonically, augmented unisons are quite rare in tonal repertoire. In the example to the right, Liszt had written an E♭ against an E in the bass. Here E♭ was preferred to a D♯ to make the tone's function clear as part of an F dominant seventh chord, and the augmented unison is the result of superimposing this harmony upon an E pedal point.
In addition to this kind of usage, harmonic augmented unisons are frequently written in modern works involving tone clusters, such as Iannis Xenakis' Evryali for piano solo.
History
The semitone appeared in the music theory of Greek antiquity as part of a diatonic or chromatic tetrachord, and it has always had a place in the diatonic scales of Western music since. The various modal scales of medieval music theory were all based upon this diatonic pattern of tones and semitones.
Though it would later become an integral part of the musical cadence, in the early polyphony of the 11th century this was not the case. Guido of Arezzo suggested instead in his Micrologus other alternatives: either proceeding by whole tone from a major second to a unison, or an occursus having two notes at a major third move by contrary motion toward a unison, each having moved a whole tone.
"As late as the 13th century the half step was experienced as a problematic interval not easily understood, as the irrational remainder between the perfect fourth and the ditone ." In a melodic half step, no "tendency was perceived of the lower tone toward the upper, or of the upper toward the lower. The second tone was not taken to be the 'goal' of the first. Instead, the half step was avoided in clausulae because it lacked clarity as an interval."
However, beginning in the 13th century cadences begin to require motion in one voice by half step and the other a whole step in contrary motion. These cadences would become a fundamental part of the musical language, even to the point where the usual accidental accompanying the minor second in a cadence was often omitted from the written score (a practice known as musica ficta). By the 16th century, the semitone had become a more versatile interval, sometimes even appearing as an augmented unison in very chromatic passages. Semantically, in the 16th century the repeated melodic semitone became associated with weeping, see: passus duriusculus, lament bass, and pianto.
By the Baroque era (1600 to 1750), the tonal harmonic framework was fully formed, and the various musical functions of the semitone were rigorously understood. Later in this period the adoption of well temperaments for instrumental tuning and the more frequent use of enharmonic equivalences increased the ease with which a semitone could be applied. Its function remained similar through the Classical period, and though it was used more frequently as the language of tonality became more chromatic in the Romantic period, the musical function of the semitone did not change.
In the 20th century, however, composers such as Arnold Schoenberg, Béla Bartók, and Igor Stravinsky sought alternatives or extensions of tonal harmony, and found other uses for the semitone. Often the semitone was exploited harmonically as a caustic dissonance, having no resolution. Some composers would even use large collections of harmonic semitones (tone clusters) as a source of cacophony in their music (e.g. the early piano works of Henry Cowell). By now, enharmonic equivalence was a commonplace property of equal temperament, and instrumental use of the semitone was not at all problematic for the performer. The composer was free to write semitones wherever he wished.
Semitones in different tunings
The exact size of a semitone depends on the tuning system used. Meantone temperaments have two distinct types of semitones, but in the exceptional case of equal temperament, there is only one. The unevenly distributed well temperaments contain many different semitones. Pythagorean tuning, similar to meantone tuning, has two, but in other systems of just intonation there are many more possibilities.
Meantone temperament
In meantone systems, there are two different semitones. This results because of the break in the circle of fifths that occurs in the tuning system: diatonic semitones derive from a chain of five fifths that does not cross the break, and chromatic semitones come from one that does.
The chromatic semitone is usually smaller than the diatonic. In the common quarter-comma meantone, tuned as a cycle of tempered fifths from E♭ to G♯, the chromatic and diatonic semitones are 76.0 and 117.1 cents wide respectively.
Extended meantone temperaments with more than 12 notes still retain the same two semitone sizes, but there is more flexibility for the musician about whether to use an augmented unison or minor second. 31-tone equal temperament is the most flexible of these, which makes an unbroken circle of 31 fifths, allowing the choice of semitone to be made for any pitch.
Equal temperament
12-tone equal temperament is a form of meantone tuning in which the diatonic and chromatic semitones are exactly the same, because its circle of fifths has no break. Each semitone is equal to one twelfth of an octave. This is a ratio of $2^{1/12}$ (approximately 1.05946), or 100 cents, and is 11.7 cents narrower than the 16:15 ratio (its most common form in just intonation, discussed below).
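A quick numeric check of these figures, assuming only the standard definition of the cent:

```python
from math import log2

st_et = 2 ** (1 / 12)                 # equal-tempered semitone ratio
cents = lambda r: 1200 * log2(r)      # cents of a frequency ratio

print(f"12-ET semitone: ratio {st_et:.5f}, {cents(st_et):.1f} cents")  # exactly 100.0
print(f"just 16:15    : ratio {16/15:.5f}, {cents(16/15):.1f} cents")  # ~111.7
print(f"12-ET is {cents(16/15) - cents(st_et):.1f} cents narrower than 16:15")
```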
All diatonic intervals can be expressed as an equivalent number of semitones. For instance a major sixth equals nine semitones.
There are many approximations, rational or otherwise, to the equal-tempered semitone. To cite a few:
18:17, suggested by Vincenzo Galilei and used by luthiers of the Renaissance,
suggested by Marin Mersenne as a constructible and more accurate alternative,
used by Julián Carrillo as part of a sixteenth-tone system.
For more examples, see Pythagorean and Just systems of tuning below.
Well temperament
There are many forms of well temperament, but the characteristic they all share is that their semitones are of an uneven size. Every semitone in a well temperament has its own interval (usually close to the equal-tempered version of 100 cents), and there is no clear distinction between a diatonic and chromatic semitone in the tuning. Well temperament was constructed so that enharmonic equivalence could be assumed between all of these semitones, and whether they were written as a minor second or augmented unison did not effect a different sound. Instead, in these systems, each key had a slightly different sonic color or character, beyond the limitations of conventional notation.
Pythagorean tuning
Like meantone temperament, Pythagorean tuning is a broken circle of fifths. This creates two distinct semitones, but because Pythagorean tuning is also a form of 3-limit just intonation, these semitones are rational. Also, unlike most meantone temperaments, the chromatic semitone is larger than the diatonic.
The Pythagorean diatonic semitone has a ratio of 256/243, and is often called the Pythagorean limma. It is also sometimes called the Pythagorean minor semitone. It is about 90.2 cents.
It can be thought of as the difference between three octaves and five just fifths, and functions as a diatonic semitone in a Pythagorean tuning.
The Pythagorean chromatic semitone has a ratio of 2187/2048. It is about 113.7 cents. It may also be called the Pythagorean apotome or the Pythagorean major semitone. (See Pythagorean interval.)
It can be thought of as the difference between four perfect octaves and seven just fifths, and functions as a chromatic semitone in a Pythagorean tuning.
The Pythagorean limma and Pythagorean apotome are enharmonic equivalents (chromatic semitones) and only a Pythagorean comma apart, in contrast to diatonic and chromatic semitones in meantone temperament and 5-limit just intonation.
Just 5-limit intonation
A minor second in just intonation typically corresponds to a pitch ratio of 16:15 or 1.0666... (approximately 111.7 cents), called the just diatonic semitone. This is a practical just semitone, since it is the interval that occurs twice within the diatonic scale between a:
major third (5:4) and perfect fourth (4:3) and a
major seventh (15:8) and the perfect octave (2:1)
The 16:15 just minor second arises in the C major scale between B & C and E & F, and is "the sharpest dissonance found in the scale".
An "augmented unison" (sharp) in just intonation is a different, smaller semitone, with frequency ratio 25:24 () or 1.0416... (approximately 70.7 cents). It is the interval between a major third (5:4) and a minor third (6:5). In fact, it is the spacing between the minor and major thirds, sixths, and sevenths (but not necessarily the major and minor second). Composer Ben Johnston used a sharp () to indicate a note is raised 70.7 cents, or a flat () to indicate a note is lowered 70.7 cents. (This is the standard practice for just intonation, but not for all other microtunings.)
Two other kinds of semitones are produced by 5 limit tuning. A chromatic scale defines 12 semitones as the 12 intervals between the 13 adjacent notes, spanning a full octave (e.g. from C to C). The 12 semitones produced by a commonly used version of 5 limit tuning have four different sizes, and can be classified as follows:
Just chromatic semitone, or smaller or minor chromatic semitone, between harmonically related flats and sharps, e.g. between E♭ and E (6:5 and 5:4): 25:24
Larger chromatic semitone, or major chromatic semitone, or larger limma, or major chroma, e.g. between C and an acute C♯ (a C♯ raised by a syntonic comma) (1:1 and 135:128): 135:128
Just diatonic semitone, or smaller or minor diatonic semitone, e.g. between E and F (5:4 to 4:3): 16:15
Larger diatonic semitone, or greater or major diatonic semitone, e.g. between A and B♭ (5:3 to 9:5), or C and chromatic D♭ (27:25), or F♯ and G (25:18 and 3:2): 27:25
The most frequently occurring semitones are the just ones (16:15 and 25:24): the just diatonic semitone (16:15) occurs at 6 of the 12 short intervals, the just chromatic semitone (25:24) 3 times, the larger chromatic semitone twice, and the larger diatonic semitone at only one interval (if diatonic D♭ replaces chromatic D♭ and sharp notes are not used).
The smaller chromatic and diatonic semitones differ from the larger by the syntonic comma (81:80 or 21.5 cents). The smaller and larger chromatic semitones differ from the respective diatonic semitones by the same 128:125 diesis as the above meantone semitones. Finally, while the inner semitones differ by the diaschisma (2048:2025 or 19.6 cents), the outer differ by the greater diesis (648:625 or 62.6 cents).
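These comma relations can be verified with exact rational arithmetic; the following sketch uses Python's fractions module and the four semitone sizes listed above:

```python
from fractions import Fraction as F

semitones = {
    "smaller chromatic": F(25, 24),
    "larger chromatic":  F(135, 128),
    "smaller diatonic":  F(16, 15),
    "larger diatonic":   F(27, 25),
}

# Larger and smaller of each kind differ by the syntonic comma, 81:80.
assert semitones["larger chromatic"] / semitones["smaller chromatic"] == F(81, 80)
assert semitones["larger diatonic"] / semitones["smaller diatonic"] == F(81, 80)

# Chromatic and diatonic semitones of matching size differ by the 128:125 diesis.
assert semitones["smaller diatonic"] / semitones["smaller chromatic"] == F(128, 125)
assert semitones["larger diatonic"] / semitones["larger chromatic"] == F(128, 125)

# The inner pair differ by the diaschisma, the outer pair by the greater diesis.
assert semitones["smaller diatonic"] / semitones["larger chromatic"] == F(2048, 2025)
assert semitones["larger diatonic"] / semitones["smaller chromatic"] == F(648, 625)
print("all comma relations check out")
```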
Extended just intonations
In 7-limit tuning there is the septimal diatonic semitone of 15:14, available between the 5-limit major seventh (15:8) and the 7-limit minor seventh / harmonic seventh (7:4). There is also a smaller septimal chromatic semitone of 21:20, between the sum of a fifth and a septimal minor seventh (21:8) and the sum of an octave and a major third (5:2). Both are more rarely used than their 5-limit neighbours, although the former was often implemented by theorist Henry Cowell, while Partch used the latter as part of his 43-tone scale.
Under 11-limit tuning, there is a fairly common undecimal neutral second (12:11), but it lies on the boundary between the minor and major second (150.6 cents). In just intonation there are infinitely many possibilities for intervals that fall within the range of the semitone (e.g. the Pythagorean semitones mentioned above), but most of them are impractical.
In 13-limit tuning, there is a tridecimal 2/3 tone (13:12 or 138.57 cents) and a tridecimal 1/3 tone (27:26 or 65.34 cents).
In 17-limit just intonation, the major diatonic semitone is 15:14 or 119.4 cents, the minor diatonic semitone is 17:16 or 105.0 cents, and the septendecimal limma is 18:17 or 98.95 cents.
Though the names diatonic and chromatic are often used for these intervals, their musical function is not the same as the meantone semitones. For instance, 15:14 would usually be written as an augmented unison, functioning as the chromatic counterpart to a diatonic 16:15. These distinctions are highly dependent on the musical context, and just intonation is not particularly well suited to chromatic use (diatonic semitone function is more prevalent).
Other equal temperaments
19-tone equal temperament distinguishes between the chromatic and diatonic semitones; in this tuning, the chromatic semitone is one step of the scale, and the diatonic semitone is two. 31-tone equal temperament also distinguishes between these two intervals, which become 2 and 3 steps of the scale, respectively. 53-ET has an even closer match to the two semitones, with 3 and 5 steps of its scale, while 72-ET uses 4 and 7 steps of its scale.
In general, because the smaller semitone can be viewed as the difference between a minor third and a major third, and the larger as the difference between a major third and a perfect fourth, tuning systems that closely match those just intervals (6/5, 5/4, and 4/3) will also distinguish between the two types of semitones and closely match their just intervals (25/24 and 16/15).
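The step counts quoted above can be recovered by rounding each just ratio to the nearest step of the given equal division of the octave; a minimal sketch (the helper name is ours):

```python
from math import log2

def nearest_step(ratio, divisions):
    """Closest number of steps to a frequency ratio in an equal division of the octave."""
    return round(divisions * log2(ratio))

for et in (12, 19, 31, 53, 72):
    chromatic = nearest_step(25 / 24, et)   # just chromatic semitone
    diatonic = nearest_step(16 / 15, et)    # just diatonic semitone
    print(f"{et}-ET: chromatic ~ {chromatic} steps, diatonic ~ {diatonic} steps")
# 12-ET gives 1 and 1 (no distinction); 19-ET gives 1 and 2; 31-ET gives 2 and 3;
# 53-ET gives 3 and 5; 72-ET gives 4 and 7, matching the text above.
```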
See also
12-tone equal temperament
List of meantone intervals
List of musical intervals
List of pitch intervals
Approach chord
Major second
Neutral second
Pythagorean interval
Regular temperament
References
Further reading
Grout, Donald Jay, and Claude V. Palisca. A History of Western Music, 6th ed. New York: Norton, 2001.
Hoppin, Richard H. Medieval Music. New York: W. W. Norton, 1978.
Minor intervals
Seconds (music)
Units of level | Semitone | [
"Physics",
"Mathematics"
] | 4,498 | [
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
415,513 | https://en.wikipedia.org/wiki/Net%20force | In mechanics, the net force is the sum of all the forces acting on an object. For example, if two forces are acting upon an object in opposite directions, and one force is greater than the other, the forces can be replaced with a single force that is the difference of the greater and smaller force. That force is the net force.
When forces act upon an object, they change its acceleration. The net force is the combined effect of all the forces on the object's acceleration, as described by Newton's second law of motion.
When the net force is applied at a specific point on an object, the associated torque can be calculated. The net force together with this associated torque is called the resultant force, which causes the object to rotate in the same way as all the forces acting upon it would if they were applied individually.
It is possible for all the forces acting upon an object to produce no torque at all. This happens when the net force is applied along the line of action.
In some texts, the terms resultant force and net force are used as if they mean the same thing. This is not always true, especially in complex topics like the motion of spinning objects or situations where everything is perfectly balanced, known as static equilibrium. In these cases, it is important to understand that "net force" and "resultant force" can have distinct meanings.
Concept
In physics, a force is considered a vector quantity. This means that it not only has a size (or magnitude) but also a direction in which it acts. We typically represent force with the symbol F in boldface, or sometimes we place an arrow over the symbol to indicate its vector nature.
When we need to visually represent a force, we draw a line segment. This segment starts at a point A, where the force is applied, and ends at another point B. This line not only gives us the direction of the force (from A to B) but also its magnitude: the longer the line, the stronger the force.
One of the essential concepts in physics is that forces can be added together, which is the basis of vector addition. This concept has been central to physics since the times of Galileo and Newton, forming the cornerstone of vector calculus, which came into its own in the late 1800s and early 1900s.
The picture to the right shows how to add two forces using the "tip-to-tail" method. This method involves drawing the first force, F1, and then drawing the second force, F2, from the tip of the first. The resulting "total" force, F = F1 + F2, is then drawn from the start of the first force (the tail) to the end of the second force (the tip). Grasping this concept is fundamental to understanding how forces interact and combine to influence the motion and equilibrium of objects.
When forces are applied to an extended body (a body that's not a single point), they can be applied at different points. Such forces are called 'bound vectors'. It's important to remember that to add these forces together, they need to be considered at the same point.
The concept of "net force" comes into play when you look at the total effect of all of these forces on the body. However, knowing the net force alone is not enough to determine the motion of the body: besides the net force, the 'torque' or rotational effect associated with these forces also matters. The net force must be applied at the right point, and with the right associated torque, to replicate the effect of the original forces.
When the net force and the appropriate torque are applied at a single point, they together constitute what is known as the resultant force. This resultant force-and-torque combination will have the same effect on the body as all the original forces and their associated torques.
Parallelogram rule for the addition of forces
A force is known as a bound vector—which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a line segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the force vector applied at A is given by

F = B − A = (Bx − Ax, By − Ay, Bz − Az).

The length of the vector B − A defines the magnitude of F and is given by

|F| = √((Bx − Ax)² + (By − Ay)² + (Bz − Az)²).
The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them. Let F1 = B − A and F2 = D − A, then the sum of these two vectors is

F1 + F2 = (B − A) + (D − A) = B + D − 2A,

which can be written as

F1 + F2 = 2(E − A),

where E = (B + D)/2 is the midpoint of the segment BD that joins the points B and D.
Thus, the sum of the forces F1 and F2 is twice the segment joining A to the midpoint E of the segment joining the endpoints B and D of the two forces. The doubling of this length is easily achieved by defining segments BC and DC parallel to AD and AB, respectively, to complete the parallelogram ABCD. The diagonal AC of this parallelogram is the sum of the two force vectors. This is known as the parallelogram rule for the addition of forces.
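The parallelogram construction is easy to verify numerically. A minimal sketch (plain Python, with example coordinates chosen arbitrarily) checks that F1 + F2, 2(E − A), and the diagonal C − A all coincide:

def sub(P, Q): return tuple(p - q for p, q in zip(P, Q))
def add(P, Q): return tuple(p + q for p, q in zip(P, Q))

A = (1.0, 1.0, 0.0)            # common application point
B = (4.0, 2.0, 0.0)            # endpoint of F1 = B - A
D = (2.0, 5.0, 0.0)            # endpoint of F2 = D - A

F1, F2 = sub(B, A), sub(D, A)
E = tuple((b + d) / 2 for b, d in zip(B, D))   # midpoint of segment BD
C = add(B, sub(D, A))          # fourth vertex of parallelogram ABCD

print(add(F1, F2))                         # (4.0, 5.0, 0.0)
print(tuple(2 * e for e in sub(E, A)))     # (4.0, 5.0, 0.0)
print(sub(C, A))                           # diagonal AC: (4.0, 5.0, 0.0)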
Translation and rotation due to a force
Point forces
When a force acts on a particle, it is applied to a single point (the particle volume is negligible): this is a point force and the particle is its application point. But an external force on an extended body (object) can be applied to a number of its constituent particles, i.e. can be "spread" over some volume or surface of the body. However, determining its rotational effect on the body requires that we specify its point of application (actually, the line of application, as explained below). The problem is usually resolved in the following ways:
Often, the volume or surface on which the force acts is relatively small compared to the size of the body, so that it can be approximated by a point. It is usually not difficult to determine whether the error caused by such approximation is acceptable.
If it is not acceptable (obviously e.g. in the case of gravitational force), such "volume/surface" force should be described as a system of forces (components), each acting on a single particle, and then the calculation should be done for each of them separately. Such a calculation is typically simplified by the use of differential elements of the body volume/surface, and the integral calculus. In a number of cases, though, it can be shown that such a system of forces may be replaced by a single point force without the actual calculation (as in the case of uniform gravitational force).
In any case, the analysis of the rigid body motion begins with the point force model. And when a force acting on a body is shown graphically, the oriented line segment representing the force is usually drawn so as to "begin" (or "end") at the application point.
Rigid bodies
In the example shown in the diagram opposite, a single force F acts at the application point H on a free rigid body. The body has the mass m and its center of mass is the point C. In the constant mass approximation, the force causes changes in the body motion described by the following expressions:

a = F/m is the center of mass acceleration; and
α = τ/I is the angular acceleration of the body.

In the second expression, τ is the torque or moment of force, whereas I is the moment of inertia of the body. A torque caused by a force F is a vector quantity defined with respect to some reference point:

τ = r × F is the torque vector, and
τ = Fk is the amount of torque, where k is the lever arm of the force.
The vector r is the position vector of the force application point, and in this example it is drawn from the center of mass as the reference point (see diagram). The straight line segment k is the lever arm of the force F with respect to the center of mass. As the illustration suggests, the torque does not change (the same lever arm) if the application point is moved along the line of the application of the force (dotted black line). More formally, this follows from the properties of the vector product, and shows that the rotational effect of the force depends only on the position of its line of application, and not on the particular choice of the point of application along that line.
The torque vector is perpendicular to the plane defined by the force F and the vector r, and in this example it is directed towards the observer; the angular acceleration vector has the same direction. The right-hand rule relates this direction to the clockwise or counterclockwise rotation in the plane of the drawing.
The moment of inertia is calculated with respect to the axis through the center of mass that is parallel with the torque. If the body shown in the illustration is a homogeneous disc, this moment of inertia is I = mr²/2. If the disc has the mass 0.5 kg and the radius 0.8 m, the moment of inertia is 0.16 kg·m². If the amount of force is 2 N, and the lever arm 0.6 m, the amount of torque is 1.2 N·m. At the instant shown, the force gives the disc the angular acceleration α = τ/I = 7.5 rad/s², and gives its center of mass the linear acceleration a = F/m = 4 m/s².
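The arithmetic of this example can be reproduced in a few lines of Python (variable names are ours):

m = 0.5     # mass of the disc, kg
r = 0.8     # radius of the disc, m
F = 2.0     # amount of force, N
k = 0.6     # lever arm with respect to the center of mass, m

I = m * r**2 / 2    # moment of inertia of a homogeneous disc
tau = F * k         # amount of torque
print(I, tau, tau / I, F / m)   # 0.16 1.2 7.5 4.0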
Resultant force
Resultant force and torque replace the effects of a system of forces acting on the movement of a rigid body. An interesting special case is a torque-free resultant, which can be found as follows:
Vector addition is used to find the net force;
Use the following equation to determine the point of application with zero torque:

r × R = Σi (ri × Fi),

where R is the net force, r locates its application point, and the individual forces are Fi with application points ri. It may be that there is no point of application that yields a torque-free resultant.
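In the plane this equation can be solved explicitly: if R = (Rx, Ry) is the net force and τ is the total torque about the origin, every application point (x, y) with x·Ry − y·Rx = τ is torque-free, and together these points form the resultant's line of action. A minimal sketch under those assumptions (all names and example numbers are ours):

# Planar example: forces (Fx, Fy) with application points (x, y)
forces = [(2.0, 0.0), (0.0, 3.0)]
points = [(0.0, 1.0), (2.0, 0.0)]

Rx = sum(fx for fx, fy in forces)
Ry = sum(fy for fx, fy in forces)

# total torque about the origin: z-component of the sum of r x F
tau = sum(x * fy - y * fx for (fx, fy), (x, y) in zip(forces, points))

# the point of the line x*Ry - y*Rx = tau closest to the origin
norm2 = Rx**2 + Ry**2
x0, y0 = tau * Ry / norm2, -tau * Rx / norm2

print((Rx, Ry), tau, (x0, y0))   # (2.0, 3.0) 4.0 (0.923..., -0.615...)
print(x0 * Ry - y0 * Rx)         # 4.0: applying R here reproduces tau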
The diagram opposite illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems:
Lines of application of the actual forces and on the leftmost illustration intersect. After vector addition is performed "at the location of ", the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force is equal to the sum of the torques of the actual forces.
The illustration in the middle of the diagram shows two parallel actual forces. After vector addition "at the location of ", the net force is translated to the appropriate line of application, where it becomes the resultant force . The procedure is based on decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships.
The rightmost illustration shows a couple: two equal but opposite forces for which the amount of the net force is zero, but which produce a net torque τ = Fd, where d is the distance between their lines of application. Since there is no resultant force, this torque can be described as a "pure" torque.
Usage
In general, a system of forces acting on a rigid body can always be replaced by one force plus one pure (see previous section) torque. The force is the net force, but to calculate the additional torque, the net force must be assigned a line of action. The line of action can be selected arbitrarily, but the additional pure torque depends on this choice. In a special case, it is possible to find a line of action for which this additional torque is zero.
The resultant force and torque can be determined for any configuration of forces. However, an interesting special case is a torque-free resultant. This is useful, both conceptually and practically, because the body moves without rotating as if it was a particle.
Some authors do not distinguish the resultant force from the net force and use the terms as synonyms.
See also
Screw theory
Center of mass
Centers of gravity in non-uniform fields
References
Force
Dynamics (mechanics) | Net force | [
"Physics",
"Mathematics"
] | 2,458 | [
"Physical phenomena",
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Motion (physics)",
"Dynamics (mechanics)",
"Wikipedia categories named after physical quantities",
"Matter"
] |
415,883 | https://en.wikipedia.org/wiki/Hydrogen-alpha | Hydrogen-alpha, typically shortened to H-alpha or Hα, is a deep-red visible spectral line of the hydrogen atom with a wavelength of 656.28 nm in air and 656.46 nm in vacuum. It is the first spectral line in the Balmer series and is emitted when an electron falls from a hydrogen atom's third- to second-lowest energy level. H-alpha has applications in astronomy where its emission can be observed from emission nebulae and from features in the Sun's atmosphere, including solar prominences and the chromosphere.
Balmer series
According to the Bohr model of the atom, electrons exist in quantized energy levels surrounding the atom's nucleus. These energy levels are described by the principal quantum number n = 1, 2, 3, ... . Electrons may only exist in these states, and may only transit between these states.
The set of transitions from n ≥ 3 to n = 2 is called the Balmer series and its members are named sequentially by Greek letters:
n = 3 to n = 2 is called Balmer-alpha or H-alpha,
n = 4 to n = 2 is called Balmer-beta or H-beta,
n = 5 to n = 2 is called Balmer-gamma or H-gamma, etc.
For the Lyman series the naming convention is:
n = 2 to n = 1 is called Lyman-alpha,
n = 3 to n = 1 is called Lyman-beta, etc.
H-alpha has a wavelength of 656.281 nm, is visible in the red part of the electromagnetic spectrum, and is the easiest way for astronomers to trace the ionized hydrogen content of gas clouds. Since it takes nearly as much energy to excite the hydrogen atom's electron from n = 1 to n = 3 (12.1 eV, via the Rydberg formula) as it does to ionize the hydrogen atom (13.6 eV), ionization is far more probable than excitation to the n = 3 level. After ionization, the electron and proton recombine to form a new hydrogen atom. In the new atom, the electron may begin in any energy level, and subsequently cascades to the ground state (n = 1), emitting photons with each transition. Approximately half the time, this cascade will include the n = 3 to n = 2 transition and the atom will emit H-alpha light. Therefore, the H-alpha line occurs where hydrogen is being ionized.
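The quoted wavelength and energies can be checked with the Rydberg formula, 1/λ = R_H(1/n_f² − 1/n_i²); a short sketch (constant values as commonly tabulated, function name ours):

R_H = 1.0967758e7    # Rydberg constant for hydrogen, 1/m
Ry_eV = 13.6057      # ionization energy of hydrogen from n = 1, eV

def wavelength_nm(n_i, n_f):
    # Rydberg formula; returns the vacuum wavelength in nm
    return 1e9 / (R_H * (1 / n_f**2 - 1 / n_i**2))

print(wavelength_nm(3, 2))      # ~656.5 nm: H-alpha (vacuum value)
print(Ry_eV * (1 - 1 / 9))      # ~12.1 eV: excitation from n = 1 to n = 3
print(Ry_eV)                    # 13.6 eV: ionization from n = 1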
The H-alpha line saturates (self-absorbs) relatively easily because hydrogen is the primary component of nebulae, so while it can indicate the shape and extent of the cloud, it cannot be used to accurately determine the cloud's mass. Instead, molecules such as carbon dioxide, carbon monoxide, formaldehyde, ammonia, or acetonitrile are typically used to determine the mass of a cloud.
Filter
An H-alpha filter is an optical filter designed to transmit a narrow bandwidth of light generally centred on the H-alpha wavelength. These filters can be dichroic filters manufactured from multiple (~50) vacuum-deposited layers. These layers are selected to produce interference effects that filter out any wavelengths except at the requisite band.
Taken in isolation, H-alpha dichroic filters are useful in astrophotography and for reducing the effects of light pollution. They do not have narrow enough bandwidth for observing the Sun's atmosphere.
For observing the Sun, a much narrower band filter can be made from three parts: an "energy rejection filter", which is usually a piece of red glass that absorbs most of the unwanted wavelengths; a Fabry–Pérot etalon, which transmits several wavelengths including one centred on the H-alpha emission line; and a "blocking filter", a dichroic filter which transmits the H-alpha line while stopping those other wavelengths that passed through the etalon. This combination will pass only a narrow (<0.1 nm) range of wavelengths of light centred on the H-alpha emission line.
The physics of the etalon and the dichroic interference filters are essentially the same (relying on constructive/destructive interference of light reflecting between surfaces), but the implementation is different (a dichroic interference filter relies on the interference of internal reflections while the etalon has a relatively large air gap). Due to the high velocities sometimes associated with features visible in H-alpha light (such as fast moving prominences and ejections), solar H-alpha etalons can often be tuned (by tilting or changing the temperature or air density) to cope with the associated Doppler effect.
Commercially available H-alpha filters for amateur solar observing usually state bandwidths in Angstrom units and are typically 0.7 Å (0.07 nm). By using a second etalon, this can be reduced to 0.5 Å, leading to improved contrast in details observed on the Sun's disc.
An even more narrow band filter can be made using a Lyot filter.
See also
Hydrogen spectral series
Rydberg formula
Spectrohelioscope
References
External links
Description of etalon filter by Colin Kaminski
MCE Membrane Filter
Atomic physics
Astronomical spectroscopy
Hydrogen physics
Optical filters | Hydrogen-alpha | [
"Physics",
"Chemistry"
] | 1,079 | [
"Spectrum (physical sciences)",
"Optical filters",
"Quantum mechanics",
"Astrophysics",
"Filters",
"Atomic physics",
 "Astronomical spectroscopy",
 "Spectroscopy",
 "Atomic, molecular, and optical physics"
] |
415,893 | https://en.wikipedia.org/wiki/Balmer%20series | The Balmer series, or Balmer lines in atomic physics, is one of a set of six named series describing the spectral line emissions of the hydrogen atom. The Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885.
The visible spectrum of light from hydrogen displays four wavelengths, 410 nm, 434 nm, 486 nm, and 656 nm, that correspond to emissions of photons by electrons in excited states transitioning to the quantum level described by the principal quantum number n equals 2. There are several prominent ultraviolet Balmer lines with wavelengths shorter than 400 nm. The series continues with an infinite number of lines whose wavelengths asymptotically approach the limit of 364.5 nm in the ultraviolet.
After Balmer's discovery, five other hydrogen spectral series were discovered, corresponding to electrons transitioning to values of n other than two.
Overview
The Balmer series is characterized by the electron transitioning from n ≥ 3 to n = 2, where n refers to the radial quantum number or principal quantum number of the electron. The transitions are named sequentially by Greek letter: n = 3 to n = 2 is called H-α, 4 to 2 is H-β, 5 to 2 is H-γ, and 6 to 2 is H-δ. As the first spectral lines associated with this series are located in the visible part of the electromagnetic spectrum, these lines are historically referred to as "H-alpha", "H-beta", "H-gamma", and so on, where H is the element hydrogen.
{| class="wikitable"
! Transition of n
|align="center"|3→2
|align="center"|4→2
|align="center"|5→2
|align="center"|6→2
|align="center"|7→2
|align="center"|8→2
|align="center"|9→2
|align="center"|∞→2
|-
! Name
|align="center"|H-α / Ba-α
|align="center"|H-β / Ba-β
|align="center"|H-γ / Ba-γ
|align="center"|H-δ / Ba-δ
|align="center"|H-ε / Ba-ε
|align="center"|H-ζ / Ba-ζ
|align="center"|H-η / Ba-η
|align="center"|Balmer break
|-
! Wavelength (nm, air)
|align="center"|656.279
|align="center"|486.135
|align="center"|434.0472
|align="center"|410.1734
|align="center"|397.0075
|align="center"|388.9064
|align="center"|383.5397
|align="center"|364.5
|-
! Energy difference (eV)
|align="center"|1.89
|align="center"|2.55
|align="center"|2.86
|align="center"|3.03
|align="center"|3.13
|align="center"|3.19
|align="center"|3.23
|align="center"|3.40
|-
! Color
|align="center"|Red
|align="center"|Cyan
|align="center"|Blue
|align="center"|Violet
|align="center"|(Ultraviolet)
|align="center"|(Ultraviolet)
|align="center"|(Ultraviolet)
|align="center"|(Ultraviolet)
|}
Although physicists were aware of atomic emissions before 1885, they lacked a tool to accurately predict where the spectral lines should appear. The Balmer equation predicts the four visible spectral lines of hydrogen with high accuracy. Balmer's equation inspired the Rydberg equation as a generalization of it, and this in turn led physicists to find the Lyman, Paschen, and Brackett series, which predicted other spectral lines of hydrogen found outside the visible spectrum.
The red H-alpha spectral line of the Balmer series of atomic hydrogen, which is the transition from the shell n = 3 to the shell n = 2, is one of the conspicuous colours of the universe. It contributes a bright red line to the spectra of emission or ionisation nebulae, like the Orion Nebula, which are often H II regions found in star-forming regions. In true-colour pictures, these nebulae have a reddish-pink colour from the combination of visible Balmer lines that hydrogen emits.
Later, it was discovered that when the Balmer series lines of the hydrogen spectrum were examined at very high resolution, they were closely spaced doublets. This splitting is called fine structure. It was also found that excited electrons from shells with n greater than 6 could jump to the n = 2 shell, emitting shades of ultraviolet when doing so.
Balmer's formula
Balmer noticed that a single wavelength had a relation to every line in the hydrogen spectrum that was in the visible light region. That wavelength was 364.50682 nm. When any integer higher than 2 was squared and then divided by itself squared minus 4, and that number was multiplied by 364.50682 nm (see equation below), it gave the wavelength of another line in the hydrogen spectrum. By this formula, he was able to show that some measurements of lines made in his time by spectroscopy were slightly inaccurate, and his formula also predicted lines that had not yet been observed but were found later. His number also proved to be the limit of the series.
The Balmer equation could be used to find the wavelength of the absorption/emission lines and was originally presented as follows (save for a notation change to give Balmer's constant as B):

λ = B · m² / (m² − n²),

where
λ is the wavelength,
B is a constant with the value of 3.6450682×10⁻⁷ m or 364.50682 nm,
m is the initial state (an integer with m ≥ 3), and
n is the final state (n = 2 for the Balmer series).
In 1888 the physicist Johannes Rydberg generalized the Balmer equation for all transitions of hydrogen. The equation commonly used to calculate the Balmer series is a specific example of the Rydberg formula and follows as a simple reciprocal mathematical rearrangement of the formula above (conventionally written with n for the running integer m above):

1/λ = R_H (1/2² − 1/n²)   for n = 3, 4, 5, …,

where λ is the wavelength of the absorbed/emitted light and R_H is the Rydberg constant for hydrogen. The Rydberg constant is seen to be equal to 4/B in Balmer's formula, and this value, for an infinitely heavy nucleus, is 4/(3.6450682×10⁻⁷ m) = 10,973,731.57 m⁻¹.
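As a numerical check, the sketch below evaluates both Balmer's original form and the Rydberg form with R = 4/B; the two agree exactly, and they land within a fraction of a nanometre of the air wavelengths tabulated above (exact agreement is not expected, since 4/B corresponds to an infinitely heavy nucleus and the table lists wavelengths in air):

B = 364.50682   # Balmer's constant, nm

def balmer_nm(m):
    # Balmer's original form: lambda = B m^2 / (m^2 - 2^2), m = 3, 4, 5, ...
    return B * m**2 / (m**2 - 4)

def rydberg_nm(n, R=4 / (B * 1e-9)):   # R = 4/B, in 1/m
    # Rydberg form: 1/lambda = R (1/2^2 - 1/n^2); returns nm
    return 1e9 / (R * (0.25 - 1 / n**2))

for m in range(3, 8):
    print(m, round(balmer_nm(m), 3), round(rydberg_nm(m), 3))
# 3 656.112, 4 486.009, 5 433.937, 6 410.07, 7 396.907 (both columns equal)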
Role in astronomy
The Balmer series is particularly useful in astronomy because the Balmer lines appear in numerous stellar objects due to the abundance of hydrogen in the universe, and therefore are commonly seen and relatively strong compared to lines from other elements. The first two Balmer lines correspond to the Fraunhofer lines C and F.
The spectral classification of stars, which is primarily a determination of surface temperature, is based on the relative strength of spectral lines, and the Balmer series in particular is very important. Other characteristics of a star that can be determined by close analysis of its spectrum include surface gravity (related to physical size) and composition.
Because the Balmer lines are commonly seen in the spectra of various objects, they are often used to determine radial velocities due to doppler shifting of the Balmer lines. This has important uses all over astronomy, from detecting binary stars, exoplanets, compact objects such as neutron stars and black holes (by the motion of hydrogen in accretion disks around them), identifying groups of objects with similar motions and presumably origins (moving groups, star clusters, galaxy clusters, and debris from collisions), determining distances (actually redshifts) of galaxies or quasars, and identifying unfamiliar objects by analysis of their spectrum.
Balmer lines can appear as absorption or emission lines in a spectrum, depending on the nature of the object observed. In stars, the Balmer lines are usually seen in absorption, and they are "strongest" in stars with a surface temperature of about 10,000 kelvins (spectral type A). In the spectra of most spiral and irregular galaxies, active galactic nuclei, H II regions and planetary nebulae, the Balmer lines are emission lines.
In stellar spectra, the H-epsilon line (transition 7→2, 397.007 nm) is often mixed in with another absorption line caused by ionized calcium known as "H" (the original designation given by Joseph von Fraunhofer). H-epsilon is separated by 0.16 nm from Ca II H at 396.847 nm, and cannot be resolved in low-resolution spectra. The H-zeta line (transition 8→2) is similarly mixed in with a neutral helium line seen in hot stars.
See also
Astronomical spectroscopy
Bohr model
Hydrogen spectral series
Lyman series
Rydberg formula
Stellar classification
References
Emission spectroscopy
Hydrogen physics | Balmer series | [
"Physics",
"Chemistry"
] | 1,859 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
415,895 | https://en.wikipedia.org/wiki/Lyman%20series | In physics and chemistry, the Lyman series is a hydrogen spectral series of transitions and resulting ultraviolet emission lines of the hydrogen atom as an electron goes from n ≥ 2 to n = 1 (where n is the principal quantum number), the lowest energy level of the electron (ground state). The transitions are named sequentially by Greek letters: from n = 2 to n = 1 is called Lyman-alpha, 3 to 1 is Lyman-beta, 4 to 1 is Lyman-gamma, and so on. The series is named after its discoverer, Theodore Lyman. The greater the difference in the principal quantum numbers, the higher the energy of the electromagnetic emission.
History
The first line in the spectrum of the Lyman series was discovered in 1906 by physicist Theodore Lyman IV, who was studying the ultraviolet spectrum of electrically excited hydrogen gas. The rest of the lines of the spectrum (all in the ultraviolet) were discovered by Lyman from 1906 to 1914.
The spectrum of radiation emitted by hydrogen is non-continuous or discrete. Here is an illustration of the first series of hydrogen emission lines:
Historically, explaining the nature of the hydrogen spectrum was a considerable problem in physics. Nobody could predict the wavelengths of the hydrogen lines until 1885, when the Balmer formula gave an empirical formula for the visible hydrogen spectrum. Within five years Johannes Rydberg came up with an empirical formula that solved the problem, presented first in 1888 and in final form in 1890. Rydberg managed to find a formula to match the known Balmer series emission lines, and also predicted those not yet discovered. Different versions of the Rydberg formula with different simple numbers were found to generate different series of lines.
On December 1, 2011, it was announced that Voyager 1 detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but due to interference from the Sun, the radiation from the Milky Way was not detectable.
The Lyman series
The version of the Rydberg formula that generated the Lyman series was:

1/λ = R_H (1/1² − 1/n²) = R_H (1 − 1/n²),

where n is a natural number greater than or equal to 2 (i.e., n = 2, 3, 4, …) and R_H is the Rydberg constant for hydrogen.
Therefore, the lines seen in the image above are the wavelengths corresponding to n = 2 on the right, to n → ∞ on the left. There are infinitely many spectral lines, but they become very dense as they approach n → ∞ (the Lyman limit), so only some of the first lines and the last one appear.
The wavelengths in the Lyman series are all ultraviolet:
Explanation and derivation
In 1914, when Niels Bohr produced his Bohr model theory, the reason why hydrogen spectral lines fit Rydberg's formula was explained. Bohr found that the electron bound to the hydrogen atom must have quantized energy levels described by the following formula:

E_n = −13.6 eV / n²,  n = 1, 2, 3, …
According to Bohr's third assumption, whenever an electron falls from an initial energy level E_i to a final energy level E_f, the atom must emit radiation with a wavelength of

λ = hc / (E_i − E_f).
There is also a more convenient notation when dealing with energy in units of electronvolts and wavelengths in units of angstroms:

λ = 12398.4 Å·eV / (E_i − E_f).
Replacing the energy in the above formula with the expression for the energy in the hydrogen atom, where the initial energy corresponds to energy level n and the final energy corresponds to energy level m, gives

1/λ = R_H (1/m² − 1/n²),

where R_H is the same Rydberg constant for hydrogen known from Rydberg's long-standing formula. This also means that the inverse of the Rydberg constant is equal to the Lyman limit.
For the connection between Bohr, Rydberg, and Lyman, one must replace m with 1 to obtain

1/λ = R_H (1 − 1/n²),

which is Rydberg's formula for the Lyman series. Therefore, each wavelength of the emission lines corresponds to an electron dropping from a certain energy level (greater than 1) to the first energy level.
See also
Bohr model
H-alpha
Hydrogen spectral series
K-alpha
Lyman-alpha line
Lyman continuum photon
Moseley's law
Rydberg formula
Balmer series
References
Emission spectroscopy
Hydrogen physics | Lyman series | [
"Physics",
"Chemistry"
] | 798 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
416,612 | https://en.wikipedia.org/wiki/Cross-validation%20%28statistics%29 | Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set.
Cross-validation includes resampling and sample splitting methods that use different portions of the data to test and train a model on different iterations. It is often used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. It can also be used to assess the quality of a fitted model and the stability of its parameters.
In a prediction problem, a model is usually given a dataset of known data on which training is run (training dataset), and a dataset of unknown data (or first seen data) against which the model is tested (called the validation dataset or testing set). The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).
One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model's predictive performance.
In summary, cross-validation combines (averages) measures of fitness in prediction to derive a more accurate estimate of model prediction performance.
Motivation
Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect.
Example: linear regression
In linear regression, there exist real response values y1, ..., yn, and n p-dimensional vector covariates x1, ..., xn. The components of the vector xi are denoted xi1, ..., xip. If least squares is used to fit a function in the form of a hyperplane ŷ = a + βᵀx to the data (xi, yi) 1 ≤ i ≤ n, then the fit can be assessed using the mean squared error (MSE). The MSE for given estimated parameter values a and β on the training set (xi, yi) 1 ≤ i ≤ n is defined as:

MSE = (1/n) Σi (yi − a − βᵀxi)².
If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is (n − p − 1)/(n + p + 1) < 1 times the expected value of the MSE for the validation set (the expected value is taken over the distribution of training sets). Thus, a fitted model and computed MSE on the training set will result in an optimistically biased assessment of how well the model will fit an independent data set. This biased estimate is called the in-sample estimate of the fit, whereas the cross-validation estimate is an out-of-sample estimate.
Since in linear regression it is possible to directly compute the factor (n − p − 1)/(n + p + 1) by which the training MSE underestimates the validation MSE under the assumption that the model specification is valid, cross-validation can be used for checking whether the model has been overfitted, in which case the MSE in the validation set will substantially exceed its anticipated value. (Cross-validation in the context of linear regression is also useful in that it can be used to select an optimally regularized cost function.)
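The (n − p − 1)/(n + p + 1) factor can be illustrated with a small Monte Carlo sketch (NumPy; the sizes and variable names are our choices), comparing the average training MSE with the average MSE on a fresh validation sample of the same size; the simulated ratio is only approximately the theoretical factor:

import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 50, 5, 2000
pairs = []

for _ in range(trials):
    X = rng.normal(size=(n, p))        # training covariates
    Xv = rng.normal(size=(n, p))       # independent validation covariates
    beta = rng.normal(size=p)
    y = X @ beta + rng.normal(size=n)
    yv = Xv @ beta + rng.normal(size=n)
    A = np.column_stack([np.ones(n), X])       # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    Av = np.column_stack([np.ones(n), Xv])
    pairs.append((np.mean((y - A @ coef) ** 2),
                  np.mean((yv - Av @ coef) ** 2)))

train_mse, val_mse = np.mean(pairs, axis=0)
print(train_mse / val_mse)            # close to (n - p - 1)/(n + p + 1)
print((n - p - 1) / (n + p + 1))      # 44/56 ~ 0.786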
General case
In most other regression procedures (e.g. logistic regression), there is no simple formula to compute the expected out-of-sample fit. Cross-validation is, thus, a generally applicable way to predict the performance of a model on unavailable data using numerical computation in place of theoretical analysis.
Types
Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation.
Exhaustive cross-validation
Exhaustive cross-validation methods are cross-validation methods which learn and test on all possible ways to divide the original sample into a training and a validation set.
Leave-p-out cross-validation
Leave-p-out cross-validation (LpO CV) involves using p observations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample into a validation set of p observations and a training set.
LpO cross-validation requires training and validating the model C(n, p) times, where n is the number of observations in the original sample and C(n, p) is the binomial coefficient. For p > 1 and for even moderately large n, LpO CV can become computationally infeasible. For example, with n = 100 and p = 30, C(100, 30) ≈ 3×10²⁵.
A variant of LpO cross-validation with p=2 known as leave-pair-out cross-validation has been recommended as a nearly unbiased method for estimating the area under ROC curve of binary classifiers.
Leave-one-out cross-validation
Leave-one-out cross-validation (LOOCV) is a particular case of leave-p-out cross-validation with p = 1. The process looks similar to jackknife; however, with cross-validation one computes a statistic on the left-out sample(s), while with jackknifing one computes a statistic from the kept samples only.
LOO cross-validation requires less computation time than LpO cross-validation because there are only n passes rather than C(n, p). However, n passes may still require quite a large computation time, in which case other approaches such as k-fold cross-validation may be more appropriate.
The algorithm, rendered here as runnable Python (a direct translation of the usual pseudo-code; x and y are plain lists):

def loocv_error(x, y, interpolate):
    # x           -- list of length N with x-values of incoming points
    # y           -- list of length N with y-values of the expected result
    # interpolate -- interpolate(x_in, y_in, x_out) returns the estimation
    #                for point x_out after the model is trained with
    #                x_in-y_in pairs
    # returns an estimate for the prediction error
    N = len(x)
    err = 0.0
    for i in range(N):
        # define the cross-validation subsets, leaving point i out
        x_in = x[:i] + x[i + 1:]
        y_in = y[:i] + y[i + 1:]
        y_out = interpolate(x_in, y_in, x[i])
        err += (y[i] - y_out) ** 2
    return err / N
Non-exhaustive cross-validation
Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation.
k-fold cross-validation
In k-fold cross-validation, the original sample is randomly partitioned into k equal sized subsamples, often referred to as "folds". Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter.
For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two sets d0 and d1, so that both sets are equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and validate on d1, followed by training on d1 and validating on d0.
When k = n (the number of observations), k-fold cross-validation is equivalent to leave-one-out cross-validation.
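A minimal k-fold sketch in plain Python (the function and parameter names are ours; libraries such as scikit-learn provide production implementations of this logic):

import random

def k_fold_mse(x, y, k, fit, predict, seed=0):
    # shuffle the indices, then deal them into k (nearly) equal folds
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    fold_scores = []
    for fold in folds:
        held = set(fold)
        train = [j for j in idx if j not in held]
        model = fit([x[j] for j in train], [y[j] for j in train])
        # each observation is used for validation exactly once
        se = [(y[j] - predict(model, x[j])) ** 2 for j in fold]
        fold_scores.append(sum(se) / len(se))
    return sum(fold_scores) / k    # average MSE over the k folds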
In stratified k-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels.
In repeated cross-validation the data is randomly split into k partitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice.
When many different statistical or machine learning models are being considered, greedy k-fold cross-validation can be used to quickly identify the most promising candidate models.
Holdout method
In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary although typically the test set is smaller than the training set. We then train (build a model) on d0 and test (evaluate its performance) on d1.
In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy (F*) will tend to be unstable since it will not be smoothed out by multiple iterations (see below). Similarly, indicators of the specific role played by various predictor variables (e.g., values of regression coefficients) will tend to be unstable.
While the holdout method can be framed as "the simplest kind of cross-validation", many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation.
Repeated random sub-sampling validation
This method, also known as Monte Carlo cross-validation, creates multiple random splits of the dataset into training and validation data. For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. This method also exhibits Monte Carlo variation, meaning that the results will vary if the analysis is repeated with different random splits.
As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation.
In a stratified variant of this approach, the random samples are generated in such a way that the mean response value (i.e. the dependent variable in the regression) is equal in the training and testing sets. This is particularly useful if the responses are dichotomous with an unbalanced representation of the two response values in the data.
A method that applies repeated random sub-sampling is RANSAC.
Nested cross-validation
When cross-validation is used simultaneously for selection of the best set of hyperparameters and for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist. At least two variants can be distinguished:
k*l-fold cross-validation
This is a truly nested variant which contains an outer loop of k sets and an inner loop of l sets. The total data set is split into k sets. One by one, a set is selected as the (outer) test set and the k - 1 other sets are combined into the corresponding outer training set. This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set. This is repeated for each of the l sets. The inner training sets are used to fit model parameters, while the outer test set is used as a validation set to provide an unbiased evaluation of the model fit. Typically, this is repeated for many different hyperparameters (or even different model types) and the validation set is used to determine the best hyperparameter set (and model type) for this inner training set. After this, a new model is fit on the entire outer training set, using the best set of hyperparameters from the inner cross-validation. The performance of this model is then evaluated using the outer test set.
k-fold cross-validation with validation and test set
This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets. One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets. Finally, for the selected parameter set, the test set is used to evaluate the model with the best parameter set. Here, two variants are possible: either evaluating the model that was trained on the training set or evaluating a new model that was fit on the combination of the training and the validation set.
Measures of fit
The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, for binary classification problems, each case in the validation set is either predicted correctly or incorrectly. In this situation the misclassification error rate can be used to summarize the fit, although other measures derived from information (e.g., counts, frequency) contained within a contingency table or confusion matrix could also be used. When the value being predicted is continuously distributed, the mean squared error, root mean squared error or median absolute deviation could be used to summarize the errors.
Using prior information
When users apply cross-validation to select a good configuration, then they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and include relevant information from previous research. In a forecasting combination exercise, for instance, cross-validation can be applied to estimate the weights that are assigned to each forecast. Since a simple equal-weighted forecast is difficult to beat, a penalty can be added for deviating from equal weights. Or, if cross-validation is applied to assign individual weights to observations, then one can penalize deviations from equal weights to avoid wasting potentially relevant information. Hoornweg (2018) shows how a tuning parameter can be defined so that a user can intuitively balance between the accuracy of cross-validation and the simplicity of sticking to a reference parameter that is defined by the user.
If denotes the candidate configuration that might be selected, then the loss function that is to be minimized can be defined as
Relative accuracy can be quantified as , so that the mean squared error of a candidate is made relative to that of a user-specified . The relative simplicity term measures the amount that deviates from relative to the maximum amount of deviation from . Accordingly, relative simplicity can be specified as , where corresponds to the value with the highest permissible deviation from . With , the user determines how high the influence of the reference parameter is relative to cross-validation.
One can add relative simplicity terms for multiple configurations by specifying the loss function as
Hoornweg (2018) shows that a loss function with such an accuracy-simplicity tradeoff can also be used to intuitively define shrinkage estimators like the (adaptive) lasso and Bayesian / ridge regression. Click on the lasso for an example.
Statistical properties
Suppose we choose a measure of fit F, and use cross-validation to produce an estimate F* of the expected fit EF of a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribution, the resulting values for F* will vary. The statistical properties of F* result from this variation.
The variance of F* can be large. For this reason, if two statistical procedures are compared based on the results of cross-validation, the procedure with the better estimated performance may not actually be the better of the two procedures (i.e. it may not have the better value of EF). Some progress has been made on constructing confidence intervals around cross-validation estimates, but this is considered a difficult problem.
Computational issues
Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the prediction method is expensive to train, cross-validation can be very slow since the training must be carried out repeatedly. In some cases such as least squares and kernel regression, cross-validation can be sped up significantly by pre-computing certain values that are needed repeatedly in the training, or by using fast "updating rules" such as the Sherman–Morrison formula. However one must be careful to preserve the "total blinding" of the validation set from the training procedure, otherwise bias may result. An extreme example of accelerating cross-validation occurs in linear regression, where the results of cross-validation have a closed-form expression known as the prediction residual error sum of squares (PRESS).
Limitations and misuse
Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled.
In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). Both of these can introduce systematic differences between the training and validation sets. For example, if a model for prediction of trend changes in financial quotations is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk for being diagnosed with a particular disease within the next year. If the model is trained using data from a study involving only a specific population group (e.g. young people or males), but is then applied to the general population, the cross-validation results from the training set could differ greatly from the actual predictive performance.
In many applications, models also may be incorrectly specified and vary as a function of modeler biases and/or arbitrary choices. When this occurs, there may be an illusion that the system changes in external samples, whereas the reason is that the model has missed a critical predictor and/or included a confounded predictor. New evidence is that cross-validation by itself is not very predictive of external validity, whereas a form of experimental validation known as swap sampling that does control for human bias can be much more predictive of external validity. As defined by this large MAQC-II study across 30,000 models, swap sampling incorporates cross-validation in the sense that predictions are tested across independent training and validation samples. Yet, models are also developed across these independent samples and by modelers who are blinded to one another. When there is a mismatch in these models developed across these swapped training and validation samples as happens quite frequently, MAQC-II shows that this will be much more predictive of poor external predictive validity than traditional cross-validation.
The reason for the success of the swapped sampling is a built-in control for human biases in model building. In addition to placing too much faith in predictions that may vary across modelers and lead to poor external validity due to these confounding modeler effects, these are some other ways that cross-validation can be misused:
By performing an initial analysis to identify the most informative features using the entire data set – if feature selection or model tuning is required by the modeling procedure, this must be repeated on every training set. Otherwise, predictions will certainly be upwardly biased. If cross-validation is used to decide which features to use, an inner cross-validation to carry out the feature selection on every training set must be performed.
Performing mean-centering, rescaling, dimensionality reduction, outlier removal or any other data-dependent preprocessing using the entire data set. While very common in practice, this has been shown to introduce biases into the cross-validation estimates.
By allowing some of the training data to also be included in the test set – this can happen due to "twinning" in the data set, whereby some exactly identical or nearly identical samples are present in the data set, see pseudoreplication. To some extent twinning always takes place even in perfectly independent training and validation samples. This is because some of the training sample observations will have nearly identical values of predictors as validation sample observations. And some of these will correlate with a target at better than chance levels in the same direction in both training and validation when they are actually driven by confounded predictors with poor external validity. If such a cross-validated model is selected from a k-fold set, human confirmation bias will be at work and determine that such a model has been validated. This is why traditional cross-validation needs to be supplemented with controls for human bias and confounded model specification like swap sampling and prospective studies.
Cross validation for time-series models
Due to correlations, cross-validation with random splits might be problematic for time-series models (if we are more interested in evaluating extrapolation, rather than interpolation). A more appropriate approach might be to use rolling cross-validation.
However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length.
Applications
Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested in optical character recognition, and we are considering using either a Support Vector Machine (SVM) or k-nearest neighbors (KNN) to predict the true character from an image of a handwritten character. Using cross-validation, we can obtain empirical estimates comparing these two methods in terms of their respective fractions of misclassified characters. In contrast, the in-sample estimate will not represent the quantity of interest (i.e. the generalization error).
Cross-validation can also be used in variable selection. Suppose we are using the expression levels of 20 proteins to predict whether a cancer patient will respond to a drug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. For most modeling procedures, if we compare feature subsets using the in-sample error rates, the best performance will occur when all 20 features are used. However under cross-validation, the model with the best fit will generally include only a subset of the features that are deemed truly informative.
A recent development in medical statistics is its use in meta-analysis. It forms the basis of the validation statistic, Vn which is used to test the statistical validity of meta-analysis summary estimates. It has also been used in a more conventional sense in meta-analysis to estimate the likely prediction error of meta-analysis results.
See also
Boosting (machine learning)
Bootstrap aggregating (bagging)
Out-of-bag error
Bootstrapping (statistics)
Leakage (machine learning)
Model selection
Stability (learning theory)
Validity (statistics)
Notes and references
Further reading
Model selection
Regression variable selection
Machine learning | Cross-validation (statistics) | [
"Engineering"
] | 5,319 | [
"Artificial intelligence engineering",
"Machine learning"
] |
416,651 | https://en.wikipedia.org/wiki/Continental%20shelf | A continental shelf is a portion of a continent that is submerged under an area of relatively shallow water, known as a shelf sea. Much of this shelf area was exposed by drops in sea level during glacial periods. The shelf surrounding an island is known as an "insular shelf."
The continental margin, between the continental shelf and the abyssal plain, comprises a steep continental slope, surrounded by the flatter continental rise, in which sediment from the continent above cascades down the slope and accumulates as a pile of sediment at the base of the slope. Extending as far as 500 km (310 mi) from the slope, it consists of thick sediments deposited by turbidity currents from the shelf and slope. The continental rise's gradient is intermediate between the gradients of the slope and the shelf.
Under the United Nations Convention on the Law of the Sea, the name continental shelf was given a legal definition as the stretch of the seabed adjacent to the shores of a particular country to which it belongs.
Topography
The shelf usually ends at a point of increasing slope (called the shelf break). The sea floor below the break is the continental slope. Below the slope is the continental rise, which finally merges into the deep ocean floor, the abyssal plain. The continental shelf and the slope are part of the continental margin.
The shelf area is commonly subdivided into the inner continental shelf, mid continental shelf, and outer continental shelf, each with their specific geomorphology and marine biology.
The character of the shelf changes dramatically at the shelf break, where the continental slope begins. With a few exceptions, the shelf break is located at a remarkably uniform depth of roughly 140 m (460 ft); this is likely a hallmark of past ice ages, when sea level was lower than it is now.
The continental slope is much steeper than the shelf; the average angle is 3°, but it can be as low as 1° or as high as 10°. The slope is often cut with submarine canyons. The physical mechanisms involved in forming these canyons were not well understood until the 1960s.
Geographical distribution
Continental shelves cover an area of about 27 million km2 (10 million sq mi), equal to about 7% of the surface area of the oceans. The width of the continental shelf varies considerably—it is not uncommon for an area to have virtually no shelf at all, particularly where the forward edge of an advancing oceanic plate dives beneath continental crust in an offshore subduction zone such as off the coast of Chile or the west coast of Sumatra. The largest shelf—the Siberian Shelf in the Arctic Ocean—stretches to 1,500 km (930 mi) in width. The South China Sea lies over another extensive area of continental shelf, the Sunda Shelf, which joins Borneo, Sumatra, and Java to the Asian mainland. Other familiar bodies of water that overlie continental shelves are the North Sea and the Persian Gulf. The average width of continental shelves is about 80 km (50 mi). The depth of the shelf also varies, but is generally limited to water shallower than 150 m (490 ft). The slope of the shelf is usually quite low, on the order of 0.5°; vertical relief is also minimal, at less than 20 m (66 ft).
Though the continental shelf is treated as a physiographic province of the ocean, it is not part of the deep ocean basin proper, but the flooded margins of the continent. Passive continental margins such as most of the Atlantic coasts have wide and shallow shelves, made of thick sedimentary wedges derived from long erosion of a neighboring continent. Active continental margins have narrow, relatively steep shelves, due to frequent earthquakes that move sediment to the deep sea.
Sediments
The continental shelves are covered by terrigenous sediments; that is, those derived from erosion of the continents. However, little of the sediment is from current rivers; some 60–70% of the sediment on the world's shelves is relict sediment, deposited during the last ice age, when sea level was 100–120 m lower than it is now.
Sediments usually become increasingly fine with distance from the coast; sand is limited to shallow, wave-agitated waters, while silt and clays are deposited in quieter, deep water far offshore. These shelf sediments accumulate at rates on the order of tens of centimetres per millennium, much faster than deep-sea pelagic sediments.
Shelf seas
"Shelf seas" are the ocean waters on the continental shelf. Their motion is controlled by the combined influences of the tides, wind-forcing and brackish water formed from river inflows (Regions of Freshwater Influence). These regions can often be biologically highly productive due to mixing caused by the shallower waters and the enhanced current speeds. Despite covering only about 8% of Earth's ocean surface area, shelf seas support 15–20% of global primary productivity.
In temperate continental shelf seas, three distinctive oceanographic regimes are found, as a consequence of the interplay between surface heating, lateral buoyancy gradients (due to river inflow), and turbulent mixing by the tides and to a lesser extent the wind.
In shallower water with stronger tides and away from river mouths, tidal turbulence overcomes the stratifying influence of surface heating, and the water column remains well mixed for the entire seasonal cycle.
In contrast, in deeper water, the surface heating wins out in summer, to produce seasonal stratification with a warm surface layer overlying the isolated deep water.
(The well mixed and seasonally stratifying regimes are separated by persistent features called tidal mixing fronts.)
A third regime which links estuaries to shelf seas, Regions of Freshwater Influence (ROFIs), is found where estuaries enter shelf seas, for example in the Liverpool Bay area of the Irish Sea and Rhine Outflow region of the North Sea. Here, stratification can vary on timescales from the semidiurnal tidal cycle through to the springs-neap tidal cycle due to a process known as "tidal straining". While the North Sea and Irish Sea are two of the better studied shelf seas, they are not necessarily representative of all shelf seas as there is a wide variety of behaviours to be found:
Indian Ocean shelf seas are dominated by major river systems, including the Ganges and Indus rivers. The shelf seas around New Zealand are complicated because the submerged continent of Zealandia creates wide plateaus. Shelf seas around Antarctica and the shores of the Arctic Ocean are influenced by sea ice production and polynya.
There is evidence that changing wind, rainfall, and regional ocean currents in a warming ocean are having an effect on some shelf seas. Improved data collection via Integrated Ocean Observing Systems in shelf sea regions is making identification of these changes possible.
Biota
Continental shelves teem with life because of the sunlight available in shallow waters, in contrast to the biotic desert of the oceans' abyssal plain. The pelagic (water column) environment of the continental shelf constitutes the neritic zone, and the benthic (sea floor) province of the shelf is the sublittoral zone. The shelves make up less than 10% of the ocean, and a rough estimate suggests that only about 30% of the continental shelf sea floor receives enough sunlight to allow benthic photosynthesis.
Though the shelves are usually fertile, if anoxic conditions prevail during sedimentation, the deposits may over geologic time become sources for fossil fuels.
Economic significance
The continental shelf is the best understood part of the ocean floor, as it is relatively accessible. Most commercial exploitation of the sea, such as extraction of metallic ore, non-metallic ore, and hydrocarbons, takes place on the continental shelf.
Sovereign rights over their continental shelves, down to a depth of 200 m (660 ft) or to a distance where the depth of the waters admitted of resource exploitation, were claimed by the marine nations that signed the Convention on the Continental Shelf drawn up by the UN's International Law Commission in 1958. This was partly superseded by the 1982 United Nations Convention on the Law of the Sea (UNCLOS). The 1982 convention created the 200-nautical-mile (370 km) exclusive economic zone, plus continental shelf rights for states with physical continental shelves that extend beyond that distance.
The legal definition of a continental shelf differs significantly from the geological definition. UNCLOS states that the shelf extends to the limit of the continental margin, but no less than 200 nautical miles (370 km) and no more than 350 nautical miles (650 km) from the baseline. Thus inhabited volcanic islands such as the Canaries, which have no actual continental shelf, nonetheless have a legal continental shelf, whereas uninhabitable islands have no shelf.
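As a hedged illustration of the distance rule just described (deliberately ignoring the convention's other criteria, such as the 2,500 m isobath provision), a one-line clamp captures the 200–350 nautical mile window:

```python
# A simplified sketch of the UNCLOS distance window described above:
# the legal shelf runs to the continental margin, but never less than
# 200 nor more than 350 nautical miles from the baseline. Purely
# illustrative; the real Article 76 test has additional criteria.
def legal_shelf_limit_nmi(margin_distance_nmi: float) -> float:
    """Clamp the geological margin distance into the legal window."""
    return min(max(margin_distance_nmi, 200.0), 350.0)

print(legal_shelf_limit_nmi(120))  # 200.0 – narrow margin, legal minimum applies
print(legal_shelf_limit_nmi(400))  # 350.0 – capped at the legal maximum
```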
See also
Baseline
Continental island
Continental shelf of Brazil
Continental shelf pump
Exclusive economic zone
International waters
Land bridge
Outer Continental Shelf
Passive margin
Region of freshwater influence
Territorial waters
Notes
References
External links
Office of Naval Research: Ocean Regions: Continental Margin & Rise
UNEP Shelf Programme
GEBCO world map 2014
Anna Cavnar, Accountability and the Commission on the Limits of the Continental Shelf: Deciding Who Owns the Ocean Floor
Aquatic biomes
Aquatic ecology
Bodies of water
Coastal and oceanic landforms
Coastal geography
Law of the sea
Oceanographical terminology
Physical oceanography | Continental shelf | [
"Physics",
"Biology"
] | 1,806 | [
"Aquatic ecology",
"Ecosystems",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
416,666 | https://en.wikipedia.org/wiki/Surface%20energy | In surface science, surface energy (also interfacial free energy or surface free energy) quantifies the disruption of intermolecular bonds that occurs when a surface is created. In solid-state physics, surfaces must be intrinsically less energetically favorable than the bulk of the material (that is, the atoms on the surface must have more energy than the atoms in the bulk), otherwise there would be a driving force for surfaces to be created, removing the bulk of the material by sublimation. The surface energy may therefore be defined as the excess energy at the surface of a material compared to the bulk, or it is the work required to build an area of a particular surface. Another way to view the surface energy is to relate it to the work required to cut a bulk sample, creating two surfaces. There is "excess energy" as a result of the now-incomplete, unrealized bonding between the two created surfaces.
Cutting a solid body into pieces disrupts its bonds and increases the surface area, and therefore increases surface energy. If the cutting is done reversibly, then conservation of energy means that the energy consumed by the cutting process will be equal to the energy inherent in the two new surfaces created. The unit surface energy of a material would therefore be half of its energy of cohesion, all other things being equal; in practice, this is true only for a surface freshly prepared in vacuum. Surfaces often change their form away from the simple "cleaved bond" model just implied above. They are found to be highly dynamic regions, which readily rearrange or react, so that energy is often reduced by such processes as passivation or adsorption.
Assessment
Measurement
Contact angle
The most common way to measure surface energy is through contact angle experiments. In this method, the contact angle of the surface is measured with several liquids, usually water and diiodomethane. Based on the contact angle results and knowing the surface tension of the liquids, the surface energy can be calculated. In practice, this analysis is done automatically by a contact angle meter.
There are several different models for calculating the surface energy based on the contact angle readings. The most commonly used method is OWRK, which requires the use of two probe liquids and gives out as a result the total surface energy as well as divides it into polar and dispersive components.
The contact angle method is the standard surface energy measurement method due to its simplicity, applicability to a wide range of surfaces, and quickness. The measurement can be fully automated and is standardized.
In general, as surface energy increases, the contact angle decreases because more of the liquid is being "grabbed" by the surface. Conversely, as surface energy decreases, the contact angle increases, because the surface doesn't want to interact with the liquid.
Other methods
The surface energy of a liquid may be measured by stretching a liquid membrane (which increases the surface area and hence the surface energy). In that case, in order to increase the surface area of a mass of liquid by an amount $\delta A$, a quantity of work $\delta W = \gamma\,\delta A$ is needed (where $\gamma$ is the surface energy density of the liquid). However, such a method cannot be used to measure the surface energy of a solid because stretching of a solid membrane induces elastic energy in the bulk in addition to increasing the surface energy.
The surface energy of a solid is usually measured at high temperatures. At such temperatures the solid creeps and even though the surface area changes, the volume remains approximately constant. If $\gamma$ is the surface energy density of a cylindrical rod of radius $r$ and length $l$ at high temperature and a constant uniaxial tension $P$, then at equilibrium, the variation of the total Helmholtz free energy vanishes and we have

$$\delta F = -P\,\delta l + \gamma\,\delta A = 0$$

where $F$ is the Helmholtz free energy and $A$ is the surface area of the rod:

$$A = 2\pi r^2 + 2\pi r l$$

Also, since the volume ($V$) of the rod remains constant, the variation ($\delta V$) of the volume is zero, that is,

$$V = \pi r^2 l = \text{constant} \quad\Longrightarrow\quad \delta V = 2\pi r l\,\delta r + \pi r^2\,\delta l = 0 \quad\Longrightarrow\quad \delta r = -\frac{r}{2l}\,\delta l$$

Therefore, the surface energy density can be expressed as

$$\gamma = \frac{P l}{\pi r\,(l - 2r)}$$

The surface energy density of the solid can be computed by measuring $P$, $r$, and $l$ at equilibrium.
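As a quick numerical check of the final expression, here is a minimal sketch with invented equilibrium values for the tension, radius, and length; none of these numbers come from the text.

```python
# gamma = P*l / (pi * r * (l - 2r)) from the rod-creep derivation above;
# P, r and l are invented illustrative inputs, not measured data.
import math

P = 3.0e-3   # equilibrium uniaxial tension, N (assumed)
r = 1.0e-3   # rod radius, m (assumed)
l = 5.0e-2   # rod length, m (assumed)

gamma = P * l / (math.pi * r * (l - 2 * r))
print(f"surface energy density ≈ {gamma:.2f} J/m^2")  # ≈ 0.99 J/m^2
```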
This method is valid only if the solid is isotropic, meaning the surface energy is the same for all crystallographic orientations. While this is only strictly true for amorphous solids (glass) and liquids, isotropy is a good approximation for many other materials. In particular, if the sample is polygranular (most metals) or made by powder sintering (most ceramics) this is a good approximation.
In the case of single-crystal materials, such as natural gemstones, anisotropy in the surface energy leads to faceting. The shape of the crystal (assuming equilibrium growth conditions) is related to the surface energy by the Wulff construction. The surface energy of the facets can thus be found to within a scaling constant by measuring the relative sizes of the facets.
Calculation
Deformed solid
In the deformation of solids, surface energy can be treated as the "energy required to create one unit of surface area", and is a function of the difference between the total energies of the system before and after the deformation:

$$\gamma = \frac{1}{A}\left(E_1 - E_0\right)$$

where $E_0$ and $E_1$ are the total energies of the system before and after the deformation, and $A$ is the surface area created.
Calculation of surface energy from first principles (for example, density functional theory) is an alternative approach to measurement. Surface energy is estimated from the following variables: width of the d-band, the number of valence d-electrons, and the coordination number of atoms at the surface and in the bulk of the solid.
Surface formation energy of a crystalline solid
In density functional theory, surface energy can be calculated from the following expression:

$$\gamma = \frac{E_{\text{slab}} - N E_{\text{bulk}}}{2A}$$

where
$E_{\text{slab}}$ is the total energy of the surface slab obtained using density functional theory,
$N$ is the number of atoms in the surface slab,
$E_{\text{bulk}}$ is the bulk energy per atom, and
$A$ is the surface area.
For a slab, we have two surfaces and they are of the same type, which is reflected by the number 2 in the denominator. To guarantee this, we need to create the slab carefully to make sure that the upper and lower surfaces are of the same type.
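A minimal helper implementing the slab expression above; conventional DFT units (eV for energies, Å² for area) are assumed, and the sample numbers are placeholders rather than real DFT output.

```python
# gamma = (E_slab - N * E_bulk) / (2 * A) for a symmetric two-surface
# slab; the factor of 2 reflects the two equivalent surfaces.
def surface_energy(e_slab_eV: float, n_atoms: int,
                   e_bulk_eV_per_atom: float, area_A2: float) -> float:
    """Surface energy in eV/Å^2 for a symmetric slab."""
    return (e_slab_eV - n_atoms * e_bulk_eV_per_atom) / (2.0 * area_A2)

# e.g. a 12-atom slab (placeholder numbers, not real calculation output)
print(surface_energy(-43.0, 12, -3.74, 25.0))  # ≈ 0.0376 eV/Å^2
```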
Strength of adhesive contacts is determined by the work of adhesion, which is also called the relative surface energy of two contacting bodies. The relative surface energy can be determined by detaching bodies of well-defined shape, made of one material, from a substrate made from the second material. For example, the relative surface energy of the interface "acrylic glass – gelatin" is equal to 0.03 N/m.
Estimation from the heat of sublimation
To estimate the surface energy of a pure, uniform material, an individual region of the material can be modeled as a cube. In order to move a cube from the bulk of a material to the surface, energy is required. This energy cost is incorporated into the surface energy of the material, which is quantified by:

$$\gamma = \frac{(z_\sigma - z_\beta)\,W_{\mathrm{AA}}}{2 A_0}$$

where $z_\sigma$ and $z_\beta$ are coordination numbers corresponding to the surface and the bulk regions of the material, and are equal to 5 and 6, respectively; $A_0$ is the surface area of an individual molecule, and $W_{\mathrm{AA}}$ is the pairwise intermolecular energy.

Surface area can be determined by squaring the cube root of the volume of the molecule:

$$A_0 = \left(\frac{\bar{M}}{\rho N_A}\right)^{2/3}$$

Here, $\bar{M}$ corresponds to the molar mass of the molecule, $\rho$ corresponds to the density, and $N_A$ is the Avogadro constant.

In order to determine the pairwise intermolecular energy, all intermolecular forces in the material must be broken. This allows thorough investigation of the interactions that occur for single molecules. During sublimation of a substance, intermolecular forces between molecules are broken, resulting in a change in the material from solid to gas. For this reason, considering the enthalpy of sublimation can be useful in determining the pairwise intermolecular energy. Enthalpy of sublimation can be calculated by the following equation:

$$\Delta H_{\text{sub}} = -\frac{1}{2}\, z_\beta\, W_{\mathrm{AA}}\, N_A$$

Using empirically tabulated values for enthalpy of sublimation, it is possible to determine the pairwise intermolecular energy. Incorporating this value into the surface energy equation allows for the surface energy to be estimated.

The following equation can be used as a reasonable estimate for surface energy:

$$\gamma \approx \left(1 - \frac{z_\sigma}{z_\beta}\right)\frac{\Delta H_{\text{sub}}}{A_0 N_A}$$
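Putting the pieces together numerically, here is a sketch for water; the molar mass, density, and sublimation enthalpy are rough literature-style values assumed for illustration.

```python
# A0 = (M / (rho * N_A))**(2/3), then
# gamma ≈ (1 - z_sigma/z_beta) * dH_sub / (A0 * N_A) with z_sigma = 5,
# z_beta = 6. Input values are rough assumptions for water.
N_A = 6.022e23       # Avogadro constant, 1/mol

M = 18.0e-3          # molar mass, kg/mol (assumed)
rho = 1000.0         # density, kg/m^3 (assumed)
dH_sub = 51.0e3      # enthalpy of sublimation, J/mol (assumed)

A0 = (M / (rho * N_A)) ** (2.0 / 3.0)          # molecular surface area, m^2
gamma = (1.0 - 5.0 / 6.0) * dH_sub / (A0 * N_A)
print(f"A0 ≈ {A0:.2e} m^2, gamma ≈ {gamma:.3f} J/m^2")
# Crude broken-bond estimates like this typically overestimate;
# the measured surface tension of water is ~0.072 J/m^2.
```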
Interfacial energy
The presence of an interface influences generally all thermodynamic parameters of a system. There are two models that are commonly used to demonstrate interfacial phenomena: the Gibbs ideal interface model and the Guggenheim model. In order to demonstrate the thermodynamics of an interfacial system using the Gibbs model, the system can be divided into three parts: two immiscible liquids with volumes $V^\alpha$ and $V^\beta$, and an infinitesimally thin boundary layer known as the Gibbs dividing plane ($\sigma$) separating these two volumes.

The total volume of the system is:

$$V = V^\alpha + V^\beta$$

All extensive quantities of the system can be written as a sum of three components: bulk phase $\alpha$, bulk phase $\beta$, and the interface $\sigma$. Some examples include the internal energy $U$, the number of molecules of the $i$th substance $n_i$, and the entropy $S$:

$$U = U^\alpha + U^\beta + U^\sigma, \qquad n_i = n_i^\alpha + n_i^\beta + n_i^\sigma, \qquad S = S^\alpha + S^\beta + S^\sigma$$

While these quantities can vary between each component, the sum within the system remains constant. At the interface, these values may deviate from those present within the bulk phases. The concentration of molecules present at the interface can be defined as:

$$n_i^\sigma = n_i - c_i^\alpha V^\alpha - c_i^\beta V^\beta$$

where $c_i^\alpha$ and $c_i^\beta$ represent the concentration of substance $i$ in bulk phase $\alpha$ and $\beta$, respectively.

It is beneficial to define a new term, the interfacial excess $\Gamma_i$, which allows us to describe the number of molecules per unit area:

$$\Gamma_i = \frac{n_i^\sigma}{A}$$
Wetting
Spreading parameter
Surface energy comes into play in wetting phenomena. To examine this, consider a drop of liquid on a solid substrate. If the surface energy of the substrate changes upon the addition of the drop, the substrate is said to be wetting. The spreading parameter can be used to mathematically determine this:

$$S = \gamma_s - \gamma_l - \gamma_{s\text{-}l}$$

where $S$ is the spreading parameter, $\gamma_s$ the surface energy of the substrate, $\gamma_l$ the surface energy of the liquid, and $\gamma_{s\text{-}l}$ the interfacial energy between the substrate and the liquid.

If $S < 0$, the liquid partially wets the substrate. If $S > 0$, the liquid completely wets the substrate.
Contact angle
A way to experimentally determine wetting is to look at the contact angle ($\theta$), which is the angle connecting the solid–liquid interface and the liquid–gas interface.

If $\theta = 0°$, the liquid completely wets the substrate.
If $0° < \theta < 90°$, high wetting occurs.
If $90° < \theta < 180°$, low wetting occurs.
If $\theta = 180°$, the liquid does not wet the substrate at all.
The Young equation relates the contact angle to interfacial energy:

$$\gamma_{sg} = \gamma_{sl} + \gamma_{lg}\cos\theta$$

where $\gamma_{sg}$ is the interfacial energy between the solid and gas phases, $\gamma_{sl}$ the interfacial energy between the substrate and the liquid, $\gamma_{lg}$ the interfacial energy between the liquid and gas phases, and $\theta$ is the contact angle between the solid–liquid and the liquid–gas interface.
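As a worked example, the Young equation can be solved for the contact angle; the three interfacial energies below are invented illustrative values (the liquid–gas value is chosen close to water's).

```python
# cos(theta) = (gamma_sg - gamma_sl) / gamma_lg from the Young equation.
import math

gamma_sg = 0.060   # solid–gas interfacial energy, J/m^2 (assumed)
gamma_sl = 0.025   # solid–liquid interfacial energy, J/m^2 (assumed)
gamma_lg = 0.072   # liquid–gas interfacial energy, J/m^2 (assumed)

theta = math.degrees(math.acos((gamma_sg - gamma_sl) / gamma_lg))
print(f"contact angle ≈ {theta:.1f}°")   # ≈ 60.9°, i.e. high wetting
```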
Wetting of high- and low-energy substrates
The energy of the bulk component of a solid substrate is determined by the types of interactions that hold the substrate together. High-energy substrates are held together by chemical bonds, while low-energy substrates are held together by physical intermolecular forces. Covalent, ionic, and metallic bonds are much stronger than forces such as van der Waals and hydrogen bonding. High-energy substrates are more easily wetted than low-energy substrates. In addition, more complete wetting will occur if the substrate has a much higher surface energy than the liquid.
Modification techniques
The most commonly used surface modification protocols are plasma activation, wet chemical treatment, including grafting, and thin-film coating. Surface energy mimicking is a technique that enables merging the device manufacturing and surface modifications, including patterning, into a single processing step using a single device material.
Many techniques can be used to enhance wetting. Surface treatments, such as corona treatment, plasma treatment and acid etching, can be used to increase the surface energy of the substrate. Additives can also be added to the liquid to decrease its surface tension. This technique is employed often in paint formulations to ensure that they will be evenly spread on a surface.
The Kelvin equation
As a result of the surface tension inherent to liquids, curved surfaces are formed in order to minimize the area. This phenomenon arises from the energetic cost of forming a surface. As such the Gibbs free energy of the system is minimized when the surface is curved.
The Kelvin equation is based on thermodynamic principles and is used to describe changes in vapor pressure caused by liquids with curved surfaces. The cause for this change in vapor pressure is the Laplace pressure. The vapor pressure of a drop is higher than that of a planar surface because the increased Laplace pressure causes the molecules to evaporate more easily. Conversely, in liquids surrounding a bubble, the pressure with respect to the inner part of the bubble is reduced, thus making it more difficult for molecules to evaporate. The Kelvin equation can be stated as:
$$\ln\frac{P_v^{K}}{P_v^{0}} = \frac{\gamma V_m}{RT}\left(\frac{1}{R_1} + \frac{1}{R_2}\right)$$

where $P_v^{K}$ is the vapor pressure of the curved surface, $P_v^{0}$ is the vapor pressure of the flat surface, $\gamma$ is the surface tension, $V_m$ is the molar volume of the liquid, $R$ is the universal gas constant, $T$ is the temperature (in kelvins), and $R_1$ and $R_2$ are the principal radii of curvature of the surface.
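A numerical illustration for a spherical droplet ($R_1 = R_2 = r$), with assumed water-like property values:

```python
# Kelvin equation for a sphere: P_K/P_0 = exp(gamma*V_m/(R*T) * (2/r)).
import math

gamma = 0.072    # surface tension, N/m (assumed, water-like)
V_m = 18.0e-6    # molar volume, m^3/mol (assumed)
R = 8.314        # universal gas constant, J/(mol·K)
T = 298.0        # temperature, K (assumed)
r = 10e-9        # droplet radius, m (assumed)

ratio = math.exp(gamma * V_m / (R * T) * (2.0 / r))
print(f"P_K/P_0 ≈ {ratio:.3f}")   # ≈ 1.110: ~11% vapor-pressure increase
```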
Surface modified pigments for coatings
Pigments offer great potential in modifying the application properties of a coating. Due to their fine particle size and inherently high surface energy, they often require a surface treatment in order to enhance their ease of dispersion in a liquid medium. A wide variety of surface treatments have been previously used, including the adsorption on the surface of a molecule in the presence of polar groups, monolayers of polymers, and layers of inorganic oxides on the surface of organic pigments.
New surfaces are constantly being created as larger pigment particles get broken down into smaller subparticles. These newly-formed surfaces consequently contribute to larger surface energies, whereby the resulting particles often become cemented together into aggregates. Because particles dispersed in liquid media are in constant thermal or Brownian motion, they exhibit a strong affinity for other pigment particles nearby as they move through the medium and collide. This natural attraction is largely attributed to the powerful short-range van der Waals forces, as an effect of their surface energies.
The chief purpose of pigment dispersion is to break down aggregates and form stable dispersions of optimally sized pigment particles. This process generally involves three distinct stages: wetting, deaggregation, and stabilization. A surface that is easy to wet is desirable when formulating a coating that requires good adhesion and appearance. This also minimizes the risks of surface tension related defects, such as crawling, cratering, and orange peel. This is an essential requirement for pigment dispersions; for wetting to be effective, the surface tension of the pigment's vehicle must be lower than the surface free energy of the pigment. This allows the vehicle to penetrate into the interstices of the pigment aggregates, thus ensuring complete wetting. Finally, the particles are subjected to a repulsive force in order to keep them separated from one another, which lowers the likelihood of flocculation.
Dispersions may become stable through two different phenomena: charge repulsion and steric or entropic repulsion. In charge repulsion, particles that possess like electrostatic charges repel each other. Alternatively, steric or entropic repulsion is a phenomenon used to describe the repelling effect when adsorbed layers of material (such as polymer molecules swollen with solvent) are present on the surface of the pigment particles in dispersion. Only certain portions (anchors) of the polymer molecules are adsorbed, with their corresponding loops and tails extending out into the solution. As the particles approach each other their adsorbed layers become crowded; this provides an effective steric barrier that prevents flocculation. This crowding effect is accompanied by a decrease in entropy, whereby the number of conformations possible for the polymer molecules is reduced in the adsorbed layer. As a result, the free energy of the system is increased, which often gives rise to repulsive forces that aid in keeping the particles separated from each other.
Surface energies of common materials
See also
Contact angle
Surface tension
Sessile drop technique
Capillary surface
Wulff Construction
References
External links
What is surface free energy?
Surface Energy and Adhesion
Forms of energy
Condensed matter physics
Surface science
Area-specific quantities | Surface energy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,292 | [
"Physical quantities",
"Area-specific quantities",
"Quantity",
"Phases of matter",
"Materials science",
"Surface science",
"Forms of energy",
"Energy (physics)",
"Condensed matter physics",
"Matter"
] |
416,681 | https://en.wikipedia.org/wiki/Reversible%20process%20%28thermodynamics%29 | In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature.
Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation.
To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and air to settle into a new, matching temperature before the next small change can occur.
While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible.
Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all time, and there must be no dissipative effects, such as friction, for a process to be considered reversible.
Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in corresponding real processes. Other applications exploit that entropy and internal energy are state functions whose change depends only on the initial and final states of the system, not on how the process occurred. Therefore, the entropy and internal-energy change in a real process can be calculated quite easily by analyzing a reversible process connecting the real initial and final system states. In addition, reversibility defines the thermodynamic condition for chemical equilibrium.
Overview
Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. An ideal thermodynamically reversible process is free of dissipative losses and therefore the magnitude of work performed by or on the system would be maximized. The incomplete conversion of heat to work in a cyclic process, however, applies to both reversible and irreversible cycles. The dependence of work on the path of the thermodynamic process is also unrelated to reversibility, since expansion work, which can be visualized on a pressure–volume diagram as the area beneath the equilibrium curve, is different for different reversible expansion processes (e.g. adiabatic, then isothermal; vs. isothermal, then adiabatic) connecting the same initial and final states.
Irreversibility
In an irreversible process, finite changes are made; therefore the system is not at equilibrium throughout the process. In a cyclic process, the difference between the reversible work and the actual work is the work lost to irreversibility, as shown in the following equation:

$$W_{\text{rev}} - W_{\text{actual}} = T_{\text{surr}}\,\Delta S_{\text{total}}$$
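As a hedged numerical illustration of this lost-work relation (the temperature and entropy values below are invented, not taken from the text):

```python
# Lost work per cycle: W_rev - W_actual = T_surr * dS_total.
T_surr = 300.0    # surroundings temperature, K (assumed)
dS_total = 0.5    # total entropy generated per cycle, J/K (assumed)

lost_work = T_surr * dS_total
print(f"work lost to irreversibility: {lost_work:.0f} J")  # 150 J
```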
Boundaries and states
Simple reversible processes change the state of a system in such a way that the net change in the combined entropy of the system and its surroundings is zero. (The entropy of the system alone is conserved only in reversible adiabatic processes.) Nevertheless, the Carnot cycle demonstrates that the state of the surroundings may change in a reversible process as the system returns to its initial state. Reversible processes define the boundaries of how efficient heat engines can be in thermodynamics and engineering: a reversible process is one where the machine has maximum efficiency (see Carnot cycle).
In some cases, it may be important to distinguish between reversible and quasistatic processes. Reversible processes are always quasistatic, but the converse is not always true. For example, an infinitesimal compression of a gas in a cylinder where there is friction between the piston and the cylinder is a quasistatic, but not reversible, process. Although the system has been driven from its equilibrium state by only an infinitesimal amount, energy has been irreversibly lost to waste heat, due to friction, and cannot be recovered by simply moving the piston in the opposite direction by the same infinitesimal amount.
Engineering archaisms
Historically, the term Tesla principle was used to describe (among other things) certain reversible processes invented by Nikola Tesla. However, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner. It was developed during Tesla's research in alternating currents where the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved and machinery fastened to the shaft was operated by the engine. If the turbine's operation was reversed, the disks acted as a pump.
Footnotes
See also
Time reversibility
Carnot cycle
Entropy production
Toffoli gate
Time evolution
Quantum circuit
Reversible computing
Maxwell's demon
Stirling engine
References
Thermodynamic processes | Reversible process (thermodynamics) | [
"Physics",
"Chemistry"
] | 1,136 | [
"Thermodynamic processes",
"Thermodynamics"
] |
416,754 | https://en.wikipedia.org/wiki/Photodynamic%20therapy | Photodynamic therapy (PDT) is a form of phototherapy involving light and a photosensitizing chemical substance used in conjunction with molecular oxygen to elicit cell death (phototoxicity).
PDT is used in treating acne, wet age-related macular degeneration, psoriasis, and herpes. It is used to treat malignant cancers, including head and neck, lung, bladder and skin.
Advantages include a reduced need for delicate surgery and lengthy recuperation, and minimal formation of scar tissue and disfigurement. A side effect is the associated photosensitisation of skin tissue.
Basics
PDT applications involve three components: a photosensitizer, a light source and tissue oxygen. The wavelength of the light source needs to be appropriate for exciting the photosensitizer to produce radicals and/or reactive oxygen species. These are either free radicals (Type I), generated through electron abstraction or transfer from a substrate molecule, or a highly reactive state of oxygen known as singlet oxygen (Type II).
PDT is a multi-stage process. First a photosensitiser, ideally with negligible toxicity other than its phototoxicity, is administered in the absence of light, either systemically or topically. When a sufficient amount of photosensitiser appears in diseased tissue, the photosensitiser is activated by exposure to light for a specified period. The light dose supplies sufficient energy to stimulate the photosensitiser, but not enough to damage neighbouring healthy tissue. The reactive oxygen kills the target cells.
Reactive oxygen species
In air and tissue, molecular oxygen (O2) occurs in a triplet state, whereas almost all other molecules are in a singlet state. Reactions between triplet and singlet molecules are forbidden by quantum mechanics, making oxygen relatively non-reactive at physiological conditions. A photosensitizer is a chemical compound that can be promoted to an excited state upon absorption of light; the excited photosensitizer can then undergo intersystem crossing (ISC) and transfer energy to oxygen to produce singlet oxygen. This species is highly cytotoxic, rapidly attacking any organic compounds it encounters. It is rapidly eliminated from cells, in an average of 3 μs.
Photochemical processes
When a photosensitiser is in its excited state (3Psen*) it can interact with molecular triplet oxygen (3O2) and produce radicals and reactive oxygen species (ROS), crucial to the Type II mechanism. These species include singlet oxygen (1O2), hydroxyl radicals (•OH) and superoxide (O2−) ions. They can interact with cellular components including unsaturated lipids, amino acid residues and nucleic acids. If sufficient oxidative damage ensues, this will result in target-cell death (only within the illuminated area).
Photochemical mechanisms
When a chromophore molecule, such as a cyclic tetrapyrrolic molecule, absorbs a photon, one of its electrons is promoted into a higher-energy orbital, elevating the chromophore from the ground state (S0) into a short-lived, electronically excited state (Sn) composed of vibrational sub-levels (Sn′). The excited chromophore can lose energy by rapidly decaying through these sub-levels via internal conversion (IC) to populate the first excited singlet state (S1), before quickly relaxing back to the ground state.
The decay from the excited singlet state (S1) to the ground state (S0) is via fluorescence (S1 → S0). Singlet state lifetimes of excited fluorophores are very short (τfl. = 10−9–10−6 seconds) since transitions between the same spin states (S → S or T → T) conserve the spin multiplicity of the electron and, according to the Spin Selection Rules, are therefore considered "allowed" transitions. Alternatively, an excited singlet state electron (S1) can undergo spin inversion and populate the lower-energy first excited triplet state (T1) via intersystem crossing (ISC); a spin-forbidden process, since the spin of the electron is no longer conserved. The excited electron can then undergo a second spin-forbidden inversion and depopulate the excited triplet state (T1) by decaying to the ground state (S0) via phosphorescence (T1→ S0). Owing to the spin-forbidden triplet to singlet transition, the lifetime of phosphorescence (τP = 10−3 − 1 second) is considerably longer than that of fluorescence.
Photosensitisers and photochemistry
Tetrapyrrolic photosensitisers in the excited singlet state (1Psen*, S>0) are relatively efficient at intersystem crossing and can consequently have a high triplet-state quantum yield. The longer lifetime of this species is sufficient to allow the excited triplet state photosensitiser to interact with surrounding bio-molecules, including cell membrane constituents.
Photochemical reactions
Excited triplet-state photosensitisers can react via Type-I and Type-II processes. Type-I processes can involve the excited singlet or triplet photosensitiser (1Psen*, S1; 3Psen*, T1), however due to the short lifetime of the excited singlet state, the photosensitiser can only react if it is intimately associated with a substrate. In both cases the interaction is with readily oxidisable or reducible substrates. Type-II processes involve the direct interaction of the excited triplet photosensitiser (3Psen*, T1) with molecular oxygen (3O2, 3Σg).
Type-I processes
Type-I processes can be divided into Type I(i) and Type I(ii). Type I (i) involves the transfer of an electron (oxidation) from a substrate molecule to the excited state photosensitiser (Psen*), generating a photosensitiser radical anion (Psen•−) and a substrate radical cation (Subs•+). The majority of the radicals produced from Type-I(i) reactions react instantaneously with molecular oxygen (O2), generating a mixture of oxygen intermediates. For example, the photosensitiser radical anion can react instantaneously with molecular oxygen (3O2) to generate a superoxide radical anion (O2•−), which can go on to produce the highly reactive hydroxyl radical (OH•), initiating a cascade of cytotoxic free radicals; this process is common in the oxidative damage of fatty acids and other lipids.
The Type-I process (ii) involves the transfer of a hydrogen atom (reduction) to the excited state photosensitiser (Psen*). This generates free radicals capable of rapidly reacting with molecular oxygen and creating a complex mixture of reactive oxygen intermediates, including reactive peroxides.
Type-II processes
Type-II processes involve the direct interaction of the excited triplet state photosensitiser (3Psen*) with ground state molecular oxygen (3O2, 3Σg); a spin allowed transition—the excited state photosensitiser and ground state molecular oxygen are of the same spin state (T).
When the excited photosensitiser collides with molecular oxygen, a process of triplet-triplet annihilation takes place (3Psen* →1Psen and 3O2 →1O2). This inverts the spin of one oxygen molecule's (3O2) outermost antibonding electrons, generating two forms of singlet oxygen (1Δg and 1Σg), while simultaneously depopulating the photosensitiser's excited triplet state (T1 → S0). The higher-energy singlet oxygen state (1Σg, 157kJ mol−1 > 3Σg) is very short-lived (1Σg ≤ 0.33 milliseconds (methanol), undetectable in H2O/D2O) and rapidly relaxes to the lower-energy excited state (1Δg, 94kJ mol−1 > 3Σg). It is, therefore, this lower-energy form of singlet oxygen (1Δg) that is implicated in cell injury and cell death.
The highly reactive singlet oxygen species (1O2) produced via the Type-II process act near their site of generation, within a radius of approximately 20 nm, with a typical lifetime of approximately 40 nanoseconds in biological systems.
It is possible that (over a 6 μs period) singlet oxygen can diffuse up to approximately 300 nm in vivo. Singlet oxygen can theoretically only interact with proximal molecules and structures within this radius. ROS initiate reactions with many biomolecules, including amino acid residues in proteins, such as tryptophan; unsaturated lipids like cholesterol and nucleic acid bases, particularly guanosine and guanine derivatives, with the latter base more susceptible to ROS. These interactions cause damage and potential destruction to cellular membranes and enzyme deactivation, culminating in cell death.
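The quoted ~20 nm action radius is consistent with a simple three-dimensional diffusion estimate, L = √(6Dτ). The sketch below assumes an order-of-magnitude intracellular diffusion coefficient for singlet oxygen, which is not given in the text.

```python
# 3-D diffusion length L = sqrt(6 * D * tau) for singlet oxygen.
import math

D = 2.0e-9     # m^2/s, assumed intracellular diffusion coefficient
tau = 40e-9    # s, singlet-oxygen lifetime quoted in the text

L = math.sqrt(6 * D * tau)
print(f"diffusion length ≈ {L * 1e9:.0f} nm")   # ≈ 22 nm
```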
It is probable that in the presence of molecular oxygen and as a direct result of the photoirradiation of the photosensitiser molecule, both Type-I and II pathways play a pivotal role in disrupting cellular mechanisms and cellular structure. Nevertheless, considerable evidence suggests that the Type-II photo-oxygenation process predominates in the induction of cell damage, a consequence of the interaction between the irradiated photosensitiser and molecular oxygen. Cells in vivo may be partially protected against the effects of photodynamic therapy by the presence of singlet oxygen scavengers (such as histidine). Certain skin cells are somewhat resistant to PDT in the absence of molecular oxygen; further supporting the proposal that the Type-II process is at the heart of photoinitiated cell death.
The efficiency of Type-II processes is dependent upon the triplet state lifetime τT and the triplet quantum yield (ΦT) of the photosensitiser. Both of these parameters have been implicated in phototherapeutic effectiveness; further supporting the distinction between Type-I and Type-II mechanisms. However, the success of a photosensitiser is not exclusively dependent upon a Type-II process. Multiple photosensitisers display excited triplet lifetimes that are too short to permit a Type-II process to occur. For example, the copper metallated octaethylbenzochlorin photosensitiser has a triplet state lifetime of less than 20 nanoseconds and is still deemed to be an efficient photodynamic agent.
Photosensitizers
Many photosensitizers for PDT exist. They divide into porphyrins, chlorins and dyes. Examples include aminolevulinic acid (ALA), Silicon Phthalocyanine Pc 4, m-tetrahydroxyphenylchlorin (mTHPC) and mono-L-aspartyl chlorin e6 (NPe6).
Photosensitizers commercially available for clinical use include Allumera, Photofrin, Visudyne, Levulan, Foscan, Metvix, Hexvix, Cysview and Laserphyrin, with others in development, e.g. Antrin, Photochlor, Photosens, Photrex, Lumacan, Cevira, Visonac, BF-200 ALA, Amphinex and Azadipyrromethenes.
The major difference between photosensitizers is the parts of the cell that they target. Unlike in radiation therapy, where damage is done by targeting cell DNA, most photosensitizers target other cell structures. For example, mTHPC localizes in the nuclear envelope. In contrast, ALA localizes in the mitochondria and methylene blue in the lysosomes.
Cyclic tetrapyrrolic chromophores
Cyclic tetrapyrrolic molecules are fluorophores and photosensitisers. Cyclic tetrapyrrolic derivatives have an inherent similarity to the naturally occurring porphyrins present in living matter.
Porphyrins
Porphyrins are a group of naturally occurring and intensely coloured compounds, whose name is drawn from the Greek word porphura, or purple. These molecules perform biologically important roles, including oxygen transport and photosynthesis and have applications in fields ranging from fluorescent imaging to medicine. Porphyrins are tetrapyrrolic molecules, with the heart of the skeleton a heterocyclic macrocycle, known as a porphine. The fundamental porphine frame consists of four pyrrolic sub-units linked on opposing sides (α-positions, numbered 1, 4, 6, 9, 11, 14, 16 and 19) through four methine (CH) bridges (5, 10, 15 and 20), known as the meso-carbon atoms/positions. The resulting conjugated planar macrocycle may be substituted at the meso- and/or β-positions (2, 3, 7, 8, 12, 13, 17 and 18): if the meso- and β-hydrogens are substituted with non-hydrogen atoms or groups, the resulting compounds are known as porphyrins.
The inner two protons of a free-base porphyrin can be removed by strong bases such as alkoxides, forming a dianionic molecule; conversely, the inner two pyrrolenine nitrogens can be protonated with acids such as trifluoroacetic acid affording a dicationic intermediate. The tetradentate anionic species can readily form complexes with most metals.
Absorption spectroscopy
Porphyrin's highly conjugated skeleton produces a characteristic ultra-violet visible (UV-VIS) spectrum. The spectrum typically consists of an intense, narrow absorption band (ε > 200000 L⋅mol−1 cm−1) at around 400 nm, known as the Soret band or B band, followed by four longer wavelength (450–700 nm), weaker absorptions (ε > 20000 L⋅mol−1⋅cm−1 (free-base porphyrins)) referred to as the Q bands.
The Soret band arises from a strong electronic transition from the ground state to the second excited singlet state (S0 → S2); whereas the Q band is a result of a weak transition to the first excited singlet state (S0 → S1). The dissipation of energy via internal conversion (IC) is so rapid that fluorescence is only observed from depopulation of the first excited singlet state to the lower-energy ground state (S1 → S0).
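As a small worked example, the Beer–Lambert law, A = εcl, can be evaluated with the Soret-band extinction coefficient quoted above; the concentration and path length are assumed for illustration.

```python
# Beer–Lambert absorbance A = epsilon * c * l at the Soret band.
epsilon = 2.0e5    # L/(mol·cm), Soret-band magnitude from the text
c = 5.0e-6         # mol/L, porphyrin concentration (assumed)
l = 1.0            # cm, cuvette path length (assumed)

A = epsilon * c * l
print(f"absorbance at ~400 nm: {A:.2f}")   # 1.00
```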
Ideal photosensitisers
The key characteristic of a photosensitiser is the ability to preferentially accumulate in diseased tissue and induce a desired biological effect via the generation of cytotoxic species. Specific criteria:
Strong absorption with a high extinction coefficient in the red/near infrared region of the electromagnetic spectrum (600–850 nm)—allows deeper tissue penetration. (Tissue is much more transparent at longer wavelengths (~700–850 nm). Longer wavelengths allow the light to penetrate deeper and treat larger structures.)
Suitable photophysical characteristics: a high-quantum yield of triplet formation (ΦT ≥ 0.5); a high singlet oxygen quantum yield (ΦΔ ≥ 0.5); a relatively long triplet state lifetime (τT, μs range); and a high triplet-state energy (≥ 94 kJ mol−1). Values of ΦT= 0.83 and ΦΔ = 0.65 (haematoporphyrin); ΦT = 0.83 and ΦΔ = 0.72 (etiopurpurin); and ΦT = 0.96 and ΦΔ = 0.82 (tin etiopurpurin) have been achieved
Low dark toxicity and negligible cytotoxicity in the absence of light. (The photosensitizer should not be harmful to the target tissue until the treatment beam is applied.)
Preferential accumulation in diseased/target tissue over healthy tissue
Rapid clearance from the body post-procedure
High chemical stability: single, well-characterised compounds, with a known and constant composition
Short and high-yielding synthetic route (with easy translation into multi-gram scales/reactions)
Simple and stable formulation
Soluble in biological media, allowing intravenous administration. Otherwise, a hydrophilic delivery system must enable efficient and effective transportation of the photosensitiser to the target site via the bloodstream.
Low photobleaching to prevent degradation of the photosensitizer so it can continue producing singlet oxygen
Natural fluorescence (Many optical dosimetry techniques, such as fluorescence spectroscopy, depend on fluorescence.)
First generation
Porfimer sodium
Porfimer sodium is a drug used to treat some types of cancer. When absorbed by cancer cells and exposed to light, porfimer sodium becomes active and kills the cancer cells. It is a type of photodynamic therapy (PDT) agent and also called Photofrin.
PDT was first discovered more than a century ago in Germany, but it was not until Thomas Dougherty's work that PDT became more mainstream. Prior to Dr. Dougherty, researchers had explored ways of using light-sensitive compounds to treat disease. Dougherty successfully treated cancer with PDT in preclinical models in 1975. Three years later, he conducted the first controlled clinical study in humans. In 1994, the FDA approved PDT with the photosensitizer porfimer sodium for palliative treatment of advanced esophageal cancer, specifically the palliation of patients with completely obstructing esophageal cancer, or for patients with partially obstructing esophageal cancer. Porfimer sodium is also FDA-approved for the treatment of some types of lung cancer, more specifically for the treatment of microinvasive endobronchial non-small-cell lung cancer (NSCLC) in patients for whom surgery and radiotherapy are not indicated, and is also FDA approved in the US for high-grade dysplasia in Barrett's esophagus.
Disadvantages associated with first generation photosensitisers include skin sensitivity and weak absorption at 630 nm. The latter permitted some therapeutic use, but markedly limited application to the wider field of disease. Second generation photosensitisers were key to the development of photodynamic therapy.
Second generation
5-Aminolaevulinic acid
5-Aminolaevulinic acid (ALA) is a prodrug used to treat and image multiple superficial cancers and tumours. ALA is a key precursor in the biosynthesis of the naturally occurring porphyrin, haem.
Haem is synthesised in every energy-producing cell in the body and is a key structural component of haemoglobin, myoglobin and other haemproteins. The immediate precursor to haem is protoporphyrin IX (PPIX), an effective photosensitiser. Haem itself is not a photosensitiser, due to the coordination of a paramagnetic ion in the centre of the macrocycle, causing significant reduction in excited state lifetimes.
The haem molecule is synthesised from glycine and succinyl coenzyme A (succinyl CoA). The rate-limiting step in the biosynthesis pathway is controlled by a tight (negative) feedback mechanism in which the concentration of haem regulates the production of ALA. However, this controlled feedback can be by-passed by artificially adding excess exogenous ALA to cells. The cells respond by producing PPIX (photosensitiser) at a faster rate than the ferrochelatase enzyme can convert it to haem.
ALA, marketed as Levulan, has shown promise in photodynamic therapy (tumours) via both intravenous and oral administration, as well as through topical administration in the treatment of malignant and non-malignant dermatological conditions, including psoriasis, Bowen's disease and Hirsutism (Phase II/III clinical trials).
ALA accumulates more rapidly in comparison to other intravenously administered sensitisers. Typical peak tumour accumulation levels post-administration for PPIX are usually achieved within several hours; other (intravenous) photosensitisers may take up to 96 hours to reach peak levels. ALA is also excreted more rapidly from the body (~24 hours) than other photosensitisers, minimising photosensitivity side effects.
Esterified ALA derivatives with improved bioavailability have been examined. A methyl ALA ester (Metvix) is now available for basal cell carcinoma and other skin lesions. Benzyl (Benvix) and hexyl ester (Hexvix) derivatives are used for gastrointestinal cancers and for the diagnosis of bladder cancer.
Verteporfin
Benzoporphyrin derivative monoacid ring A (BPD-MA), marketed as Visudyne (Verteporfin, for injection), has been approved by health authorities in multiple jurisdictions, including the US FDA, for the treatment of wet AMD beginning in 1999. It has also undergone Phase III clinical trials (USA) for the treatment of cutaneous non-melanoma skin cancer.
The chromophore of BPD-MA has a red-shifted and intensified long-wavelength absorption maxima at approximately 690 nm. Tissue penetration by light at this wavelength is 50% greater than that achieved for Photofrin (λmax. = 630 nm).
Verteporfin has further advantages over the first generation sensitiser Photofrin. It is rapidly absorbed by the tumour (optimal tumour-normal tissue ratio 30–150 minutes post-intravenous injection) and is rapidly cleared from the body, minimising patient photosensitivity (1–2 days).
Purlytin
Chlorin photosensitiser tin etiopurpurin is marketed as Purlytin. Purlytin has undergone Phase II clinical trials for cutaneous metastatic breast cancer and Kaposi's sarcoma in patients with AIDS (acquired immunodeficiency syndrome). Purlytin has been used successfully to treat the non-malignant conditions psoriasis and restenosis.
Chlorins are distinguished from the parent porphyrins by a reduced exocyclic double bond, decreasing the symmetry of the conjugated macrocycle. This leads to increased absorption in the long-wavelength portion of the visible region of the electromagnetic spectrum (650–680 nm). Purlytin is a purpurin; a degradation product of chlorophyll.
Purlytin has a tin atom chelated in its central cavity that causes a red-shift of approximately 20–30 nm (with respect to Photofrin and non-metallated etiopurpurin, λmax.SnEt2 = 650 nm). Purlytin has been reported to localise in skin and produce a photoreaction 7–14 days post-administration.
Foscan
Tetra(m-hydroxyphenyl)chlorin (mTHPC) is in clinical trials for head and neck cancers under the trade name Foscan. It has also been investigated in clinical trials for gastric and pancreatic cancers, hyperplasia, field sterilisation after cancer surgery and for the control of antibiotic-resistant bacteria.
Foscan has a singlet oxygen quantum yield comparable to other chlorin photosensitisers but lower drug and light doses (approximately 100 times more photoactive than Photofrin).
Foscan can render patients photosensitive for up to 20 days after initial illumination.
Lutex
Lutetium texaphyrin, marketed under the trade name Lutex and Lutrin, is a large porphyrin-like molecule. Texaphyrins are expanded porphyrins that have a penta-aza core. It offers strong absorption in the 730–770 nm region. Tissue transparency is optimal in this range. As a result, Lutex-based PDT can (potentially) be carried out more effectively at greater depths and on larger tumours.
Lutex has entered Phase II clinical trials for evaluation against breast cancer and malignant melanomas.
A Lutex derivative, Antrin, has undergone Phase I clinical trials for the prevention of restenosis of vessels after cardiac angioplasty by photoinactivating foam cells that accumulate within arteriolar plaques. A second Lutex derivative, Optrin, is in Phase I trials for AMD.
Texaphyrins also have potential as radiosensitisers (Xcytrin) and chemosensitisers. Xcytrin, a gadolinium texaphyrin (motexafin gadolinium), has been evaluated in Phase III clinical trials against brain metastases and Phase I clinical trials for primary brain tumours.
ATMPn
9-Acetoxy-2,7,12,17-tetrakis-(β-methoxyethyl)-porphycene has been evaluated as an agent for dermatological applications against psoriasis vulgaris and superficial non-melanoma skin cancer.
Zinc phthalocyanine
A liposomal formulation of zinc phthalocyanine (CGP55847) has undergone clinical trials (Phase I/II, Switzerland) against squamous cell carcinomas of the upper aerodigestive tract. Phthalocyanines (PCs) are related to tetra-aza porphyrins. Instead of four bridging carbon atoms at the meso-positions, as for the porphyrins, PCs have four nitrogen atoms linking the pyrrolic sub-units. PCs also have an extended conjugate pathway: a benzene ring is fused to the β-positions of each of the four-pyrrolic sub-units. These rings strengthen the absorption of the chromophore at longer wavelengths (with respect to porphyrins). The absorption band of PCs is almost two orders of magnitude stronger than the highest Q band of haematoporphyrin. These favourable characteristics, along with the ability to selectively functionalise their peripheral structure, make PCs favourable photosensitiser candidates.
A sulphonated aluminium PC derivative (Photosense) has entered clinical trials (Russia) against skin, breast and lung malignancies and cancer of the gastrointestinal tract. Sulphonation significantly increases PC solubility in polar solvents including water, circumventing the need for alternative delivery vehicles.
PC4 is a silicon complex under investigation for the sterilisation of blood components against human colon, breast and ovarian cancers and against glioma.
A shortcoming of many of the metallo-PCs is their tendency to aggregate in aqueous buffer (pH 7.4), resulting in a decrease, or total loss, of their photochemical activity. This behaviour can be minimised in the presence of detergents.
Metallated cationic porphyrazines (PZ), including PdPZ+, CuPZ+, CdPZ+, MgPZ+, AlPZ+ and GaPZ+, have been tested in vitro on V-79 (Chinese hamster lung fibroblast) cells. These photosensitisers display substantial dark toxicity.
Naphthalocyanines
Naphthalocyanines (NCs) are an extended PC derivative. They have an additional benzene ring attached to each isoindole sub-unit on the periphery of the PC structure. Subsequently, NCs absorb strongly at even longer wavelengths (approximately 740–780 nm) than PCs (670–780 nm). This absorption in the near infrared region makes NCs candidates for highly pigmented tumours, including melanomas, which present significant absorption problems for visible light.
However, problems associated with NC photosensitisers include lower stability, as they decompose in the presence of light and oxygen. Metallo-NCs, which lack axial ligands, have a tendency to form H-aggregates in solution. These aggregates are photoinactive, thus compromising the photodynamic efficacy of NCs.
Silicon naphthalocyanine attached to copolymer PEG-PCL (poly(ethylene glycol)-block-poly(ε-caprolactone)) accumulates selectively in cancer cells and reaches a maximum concentration after about one day. The compound provides real time near-infrared (NIR) fluorescence imaging with an extinction coefficient of 2.8 × 105 M−1 cm−1 and combinatorial phototherapy with dual photothermal and photodynamic therapeutic mechanisms that may be appropriate for adriamycin-resistant tumors. The particles had a hydrodynamic size of 37.66 ± 0.26 nm (polydispersity index = 0.06) and surface charge of −2.76 ± 1.83 mV.
Functional groups
Altering the peripheral functionality of porphyrin-type chromophores can affect photodynamic activity.
Diamino platinum porphyrins show high anti-tumour activity, demonstrating the combined effect of the cytotoxicity of the platinum complex and the photodynamic activity of the porphyrin species.
Positively charged PC derivatives have been investigated. Cationic species are believed to selectively localise in the mitochondria.
Zinc and copper cationic derivatives have been investigated. The positively charged zinc complexed PC is less photodynamically active than its neutral counterpart in vitro against V-79 cells.
Water-soluble cationic porphyrins bearing nitrophenyl, aminophenyl, hydroxyphenyl and/or pyridiniumyl functional groups exhibit varying cytotoxicity to cancer cells in vitro, depending on the nature of the metal ion (Mn, Fe, Zn, Ni) and on the number and type of functional groups. The manganese pyridiniumyl derivative has shown the highest photodynamic activity, while the nickel analogue is photoinactive.
Another metallo-porphyrin complex, the iron chelate, is more photoactive (towards HIV and simian immunodeficiency virus in MT-4 cells) than the manganese complexes; the zinc derivative is photoinactive.
The hydrophilic sulphonated porphyrin and PC compounds (AlPorphyrin and AlPC) were tested for photodynamic activity. The disulphonated analogues (with adjacent substituted sulphonated groups) exhibited greater photodynamic activity than their di-(symmetrical), mono-, tri- and tetra-sulphonated counterparts; tumour activity increased with increasing degree of sulphonation.
Third generation
Many photosensitisers are poorly soluble in aqueous media, particularly at physiological pH, limiting their use.
Alternate delivery strategies range from the use of oil-in-water (o/w) emulsions to carrier vehicles such as liposomes and nanoparticles. Although these systems may increase therapeutic effects, the carrier system may inadvertently decrease the "observed" singlet oxygen quantum yield (ΦΔ): the singlet oxygen generated by the photosensitiser must diffuse out of the carrier system; and since singlet oxygen is believed to have a narrow radius of action, it may not reach the target cells. The carrier may limit light absorption, reducing singlet oxygen yield.
Another alternative that does not display the scattering problem is the use of targeting moieties. Strategies include directly attaching photosensitisers to biologically active molecules such as antibodies.
Metallation
Various metals form complexes with photosensitiser macrocycles. Multiple second generation photosensitisers contain a chelated central metal ion. The main candidates are transition metals, although photosensitisers co-ordinated to group 13 (Al, AlPcS4) and group 14 (Si, SiNC and Sn, SnEt2) metals have been synthesised.
The metal ion does not necessarily confer photoactivity on the complex. Copper (II), cobalt (II), iron (II) and zinc (II) complexes of Hp are all photoinactive, in contrast to metal-free porphyrins. For texaphyrin and PC photosensitisers, by contrast, only the metallo-complexes have demonstrated efficient photosensitisation.
The central metal ion, bound by a number of photosensitisers, strongly influences the photophysical properties of the photosensitiser. Chelation of paramagnetic metals to a PC chromophore appears to shorten triplet lifetimes (down to the nanosecond range), generating variations in the quantum yield and lifetime of the photoexcited triplet state.
Certain heavy metals are known to enhance inter-system crossing (ISC). Generally, diamagnetic metals promote ISC and have a long triplet lifetime. In contrast, paramagnetic species deactivate excited states, reducing the excited-state lifetime and preventing photochemical reactions. However, exceptions to this generalisation include copper octaethylbenzochlorin.
Many metallated paramagnetic texaphyrin species exhibit triplet-state lifetimes in the nanosecond range. These results are mirrored by metallated PCs. PCs metallated with diamagnetic ions, such as Zn2+, Al3+ and Ga3+, generally yield photosensitisers with desirable quantum yields and lifetimes (ΦT 0.56, 0.50 and 0.34 and τT 187, 126 and 35 μs, respectively). Photosensitiser ZnPcS4 has a singlet oxygen quantum yield of 0.70; nearly twice that of most other mPCs (ΦΔ at least 0.40).
Expanded metallo-porphyrins
Expanded porphyrins have a larger central binding cavity, increasing the range of potential metals.
Diamagnetic metallo-texaphyrins have shown favourable photophysical properties: high triplet quantum yields and efficient generation of singlet oxygen. In particular, the zinc and cadmium derivatives display triplet quantum yields close to unity. In contrast, the paramagnetic metallo-texaphyrins, Mn-Tex, Sm-Tex and Eu-Tex, have undetectable triplet quantum yields. This behaviour parallels that observed for the corresponding metallo-porphyrins.
The cadmium-texaphyrin derivative has shown in vitro photodynamic activity against human leukemia cells and Gram-positive (Staphylococcus) and Gram-negative (Escherichia coli) bacteria. However, follow-up studies with this photosensitiser have been limited due to the toxicity of the complexed cadmium ion.
A zinc-metallated seco-porphyrazine has a high quantum singlet oxygen yield (ΦΔ 0.74). This expanded porphyrin-like photosensitiser has shown the best singlet oxygen photosensitising ability of any of the reported seco-porphyrazines. Platinum and palladium derivatives have been synthesised with singlet oxygen quantum yields of 0.59 and 0.54, respectively.
Metallochlorins/bacteriochlorins
The tin (IV) purpurins are more active against human cancers than the analogous zinc (II) purpurins.
Sulphonated benzochlorin derivatives demonstrated a reduced phototherapeutic response against murine leukemia L1210 cells in vitro and transplanted urothelial cell carcinoma in rats, whereas the tin (IV) metallated benzochlorins exhibited an increased photodynamic effect in the same tumour model.
Copper octaethylbenzochlorin demonstrated greater photoactivity towards leukemia cells in vitro and in a rat bladder tumour model. Its activity may derive from interactions between the cationic iminium group and biomolecules. Such interactions may allow electron-transfer reactions to take place via the short-lived excited singlet state and lead to the formation of radicals and radical ions. The copper-free derivative exhibited a tumour response with short intervals between drug administration and photodynamic activity. Increased in vivo activity was observed with the zinc benzochlorin analogue.
Metallo-phthalocyanines
PC properties are strongly influenced by the central metal ion. Co-ordination of transition metal ions gives metallo-complexes with short triplet lifetimes (nanosecond range), resulting in different triplet quantum yields and lifetimes (with respect to the non-metallated analogues). Diamagnetic metals such as zinc, aluminium and gallium generate metallo-phthalocyanines (MPC) with high triplet quantum yields (ΦT ≥ 0.4) and short lifetimes (ZnPCS4 τT = 490 μs and AlPcS4 τT = 400 μs) and high singlet oxygen quantum yields (ΦΔ ≥ 0.7). As a result, ZnPc and AlPc have been evaluated as second generation photosensitisers active against certain tumours.
Metallo-naphthocyaninesulfobenzo-porphyrazines (M-NSBP)
Aluminium (Al3+) has been successfully coordinated to M-NSBP. The resulting complex showed photodynamic activity against EMT-6 tumour-bearing Balb/c mice (disulphonated analogue demonstrated greater photoactivity than the mono-derivative).
Metallo-naphthalocyanines
Work with zinc NCs bearing various amido substituents revealed that the best phototherapeutic response (against Lewis lung carcinoma in mice) was obtained with a tetrabenzamido analogue. Complexes of silicon (IV) NCs carry two axial ligands in the expectation that the ligands minimise aggregation. Among disubstituted analogues investigated as potential photodynamic agents, a siloxane NC substituted with two methoxyethyleneglycol ligands is an efficient photosensitiser against Lewis lung carcinoma in mice, and SiNC[OSi(i-Bu)2-n-C18H37]2 is effective against Balb/c mice MS-2 fibrosarcoma cells. Siloxane NCs may be efficacious photosensitisers against EMT-6 tumours in Balb/c mice. The ability of metallo-NC derivatives (AlNc) to generate singlet oxygen is weaker than that of the analogous (sulphonated) metallo-PCs (AlPC); reportedly 1.6–3 orders of magnitude less.
In porphyrin systems, the zinc ion (Zn2+) appears to hinder the photodynamic activity of the compound. By contrast, in the higher/expanded π-systems, zinc-chelated dyes form complexes with good to high photodynamic activity.
An extensive study of metallated texaphyrins focused on the lanthanide (III) and related trivalent metal ions Y, In, Lu, Cd, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb. It found that when diamagnetic Lu (III) was complexed to texaphyrin, an effective photosensitiser (Lutex) was generated, whereas substituting the paramagnetic Gd (III) ion for Lu gave a complex exhibiting no photodynamic activity. The study found a correlation between the excited-singlet and triplet state lifetimes and the rate of ISC of the diamagnetic texaphyrin complexes (Y (III), In (III) and Lu (III)) and the atomic number of the cation.
Paramagnetic metallo-texaphyrins displayed rapid ISC. Triplet lifetimes were strongly affected by the choice of metal ion. The diamagnetic ions Y, In and Lu displayed triplet lifetimes of 187, 126 and 35 μs, respectively. The corresponding lifetimes for the paramagnetic species were much shorter (Eu-Tex 6.98 μs, Gd-Tex 1.11 μs, Tb-Tex < 0.2 μs, Dy-Tex 0.44 × 10⁻³ μs, Ho-Tex 0.85 × 10⁻³ μs, Er-Tex 0.76 × 10⁻³ μs, Tm-Tex 0.12 × 10⁻³ μs and Yb-Tex 0.46 μs).
Three of the measured paramagnetic complexes showed significantly lower values than the diamagnetic metallo-texaphyrins.
In general, singlet oxygen quantum yields closely followed the triplet quantum yields.
The photophysical behaviour of the various texaphyrins investigated divided along the complexes' magnetism: the diamagnetic complexes were characterised by relatively high fluorescence quantum yields, excited-singlet and triplet lifetimes and singlet oxygen quantum yields, in distinct contrast to the paramagnetic species.
The +2 charged diamagnetic species appeared to exhibit a direct relationship between their fluorescence quantum yields, excited state lifetimes, rate of ISC and the atomic number of the metal ion. The greatest diamagnetic ISC rate was observed for Lu-Tex; a result ascribed to the heavy atom effect. The heavy atom effect also held for the Y-Tex, In-Tex and Lu-Tex triplet quantum yields and lifetimes. The triplet quantum yields and lifetimes both decreased with increasing atomic number. The singlet oxygen quantum yield correlated with this observation.
Photophysical properties displayed by the paramagnetic species were more complex. The observed behaviour did not correlate simply with the number of unpaired electrons located on the metal ion. For example:
ISC rates and the fluorescence lifetimes gradually decreased with increasing atomic number.
Gd-Tex and Tb-Tex chromophores showed (despite more unpaired electrons) slower rates of ISC and longer lifetimes than Ho-Tex or Dy-Tex.
To achieve selective target cell destruction, while protecting normal tissues, either the photosensitizer can be applied locally to the target area, or targets can be locally illuminated. Skin conditions, including acne, psoriasis and also skin cancers, can be treated topically and locally illuminated. For internal tissues and cancers, intravenously administered photosensitizers can be illuminated using endoscopes and fiber optic catheters.
Photosensitizers can target viral and microbial species, including HIV and MRSA. Using PDT, pathogens present in samples of blood and bone marrow can be decontaminated before the samples are used further for transfusions or transplants. PDT can also eradicate a wide variety of pathogens of the skin and of the oral cavity. Given the growing seriousness of drug-resistant pathogens, PDT is increasingly being researched as a new antimicrobial therapy.
Applications
Acne
PDT is currently in clinical trials as a treatment for severe acne. Initial results have shown it to be effective only as a treatment for severe acne. A systematic review conducted in 2016 found that PDT is a "safe and effective method of treatment" for acne. The treatment may cause severe redness and moderate to severe pain and a burning sensation in some people. (see also: Levulan) One phase II trial, while it showed improvement, found PDT not superior to blue/violet light alone.
Cancer
The FDA has approved photodynamic therapy to treat actinic keratosis, advanced cutaneous T-cell lymphoma, Barrett esophagus, basal cell skin cancer, esophageal (throat) cancer, non-small cell lung cancer, and squamous cell skin cancer (Stage 0). Photodynamic therapy is also used to relieve symptoms of some cancers, including esophageal cancer when it blocks the throat and non-small cell lung cancer when it blocks the airways.
When cells that have absorbed photosensitizers are exposed to a specific wavelength of light, the photosensitizer produces a form of oxygen, called an oxygen radical, that kills them. Photodynamic therapy (PDT) may also damage blood vessels in the tumor, which prevents it from receiving the blood it needs to keep growing. PDT may trigger the immune system to attack tumor cells, even in other areas of the body.
PDT is a minimally invasive treatment that is used to treat many conditions including acne, psoriasis, age related macular degeneration, and several cancers such as skin, lung, brain, mesothelioma, bladder, bile-duct, esophageal, and head and neck cancers.
In February 2019, medical scientists announced that iridium attached to albumin, creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light, destroy the cancer cells.
Ophthalmology
As cited above, verteporfin was widely approved for the treatment of wet AMD beginning in 1999. The drug targets the neovasculature that is caused by the condition.
Photoimmunotherapy
Photoimmunotherapy is an oncological treatment for various cancers that combines photodynamic therapy of tumor with immunotherapy treatment. Combining photodynamic therapy with immunotherapy enhances the immunostimulating response and has synergistic effects for metastatic cancer treatment.
Vascular targeting
Some photosensitisers naturally accumulate in the endothelial cells of vascular tissue allowing 'vascular targeted' PDT.
Verteporfin was shown to target the neovasculature resulting from macular degeneration in the macula within the first thirty minutes after intravenous administration of the drug.
Compared to normal tissues, most types of cancers are especially active in both the uptake and accumulation of photosensitizer agents, which makes cancers especially vulnerable to PDT. Photosensitizers can also have a high affinity for vascular endothelial cells.
Antimicrobial effects
Photodynamic skin disinfection is effective at killing topical microbes, including drug-resistant bacteria, viruses, and fungi. Photodynamic disinfection remains effective after repeat treatments, with no evidence of resistance formation. The method can effectively treat polymicrobial antibiotic resistant Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus biofilms in a maxillary sinus cavity model.
History
Modern era
In the late nineteenth century, Finsen successfully demonstrated phototherapy by employing heat-filtered light from a carbon-arc lamp (the "Finsen lamp") in the treatment of a tubercular condition of the skin known as lupus vulgaris, for which he won the 1903 Nobel Prize in Physiology or Medicine.
In 1913 the German scientist Meyer-Betz described the major stumbling block of photodynamic therapy. After injecting himself with haematoporphyrin (Hp, a photosensitiser), he swiftly experienced a general skin sensitivity upon exposure to sunlight, a recurrent problem with many photosensitisers.
The first evidence that photosensitive synthetic dyes, in combination with a light source and oxygen, could have a therapeutic effect emerged at the turn of the 20th century in the laboratory of Hermann von Tappeiner in Munich, Germany. At the time, Germany was leading the world in industrial dye synthesis.
While studying the effects of acridine on paramecia cultures, Oscar Raab, a student of von Tappeiner, observed a toxic effect. Fortuitously, Raab also observed that light was required to kill the paramecia. Subsequent work in von Tappeiner's laboratory showed that oxygen was essential for the 'photodynamic action', a term coined by von Tappeiner.
Von Tappeiner and colleagues performed the first PDT trial in patients with skin carcinoma using the photosensitizer, eosin. Of six patients with a facial basal cell carcinoma, treated with a 1% eosin solution and long-term exposure either to sunlight or arc-lamp light, four patients showed total tumour resolution and a relapse-free period of 12 months.
In 1924 Policard revealed the diagnostic capabilities of hematoporphyrin fluorescence when he observed that ultraviolet radiation excited red fluorescence in the sarcomas of laboratory rats. Policard hypothesized that the fluorescence was associated with endogenous hematoporphyrin accumulation.
In 1948 Figge and co-workers showed on laboratory animals that porphyrins exhibit a preferential affinity to rapidly dividing cells, including malignant, embryonic and regenerative cells. They proposed that porphyrins could be used to treat cancer.
The photosensitizer haematoporphyrin derivative (HpD) was first characterised in 1960 by Lipson, who sought a diagnostic agent suitable for tumor detection. HpD allowed Lipson to pioneer the use of endoscopes and HpD fluorescence. HpD is a porphyrin species derived from haematoporphyrin. Porphyrins have long been considered suitable agents for tumour photodiagnosis and tumour PDT because cancerous cells exhibit significantly greater uptake and affinity for porphyrins compared to normal tissues, an effect that had been observed by other researchers prior to Lipson.
Thomas Dougherty and co-workers at Roswell Park Comprehensive Cancer Center in Buffalo, New York, clinically tested PDT in 1978. They treated 113 cutaneous or subcutaneous malignant tumors with HpD and observed total or partial resolution of 111 tumors. Dougherty helped expand clinical trials and formed the International Photodynamic Association, in 1986.
John Toth, product manager for Cooper Medical Devices Corp/Cooper Lasersonics, noticed the "photodynamic chemical effect" of the therapy and wrote the first white paper naming the therapy "Photodynamic Therapy" (PDT) with early clinical argon dye lasers circa 1981. The company set up 10 clinical sites in Japan where the term "radiation" had negative connotations.
HpD, under the brand name Photofrin, was the first PDT agent approved for clinical use, in 1993, to treat a form of bladder cancer in Canada. Over the next decade, both PDT and the use of HpD received international attention and greater clinical acceptance, leading to the first PDT treatments approved by the U.S. Food and Drug Administration, Japan and parts of Europe for use against certain cancers of the oesophagus and non-small cell lung cancer.
Photofrin had the disadvantages of prolonged patient photosensitivity and a weak long-wavelength absorption (630 nm). This led to the development of second generation photosensitisers, including Verteporfin (a benzoporphyrin derivative, also known as Visudyne) and more recently, third generation targetable photosensitisers, such as antibody-directed photosensitisers.
In the 1980s, David Dolphin, Julia Levy and colleagues developed a novel photosensitizer, verteporfin. Verteporfin, a porphyrin derivative, is activated at 690 nm, a much longer wavelength than Photofrin. It has the property of preferential uptake by neovasculature. It has been widely tested for its use in treating skin cancers and received FDA approval in 2000 for the treatment of wet age related macular degeneration. As such it was the first medical treatment ever approved for this condition, which is a major cause of vision loss.
In 1990, Mironov and coworkers in Russia pioneered a photosensitizer called Photogem which, like HpD, was derived from haematoporphyrin. Photogem was approved by the Ministry of Health of Russia and tested clinically from February 1992 to 1996. A pronounced therapeutic effect was observed in 91 percent of the 1500 patients: 62 percent had total tumor resolution, and a further 29 percent had greater than 50% tumor shrinkage. Among patients diagnosed early, 92 percent experienced complete resolution.
Russian scientists collaborated with NASA scientists who were looking at the use of LEDs as more suitable light sources, compared to lasers, for PDT applications.
Since 1990, the Chinese have been developing clinical expertise with PDT, using domestically produced photosensitizers, derived from Haematoporphyrin. China is notable for its expertise in resolving difficult-to-reach tumours.
Miscellany
PUVA therapy uses psoralen as photosensitiser and UVA ultraviolet as light source, but this form of therapy is usually classified as a separate form of therapy from photodynamic therapy.
To allow treatment of deeper tumours some researchers are using internal chemiluminescence to activate the photosensitiser.
See also
Antimicrobial photodynamic therapy
Blood irradiation therapy
Laser medicine
Light Harvesting Materials
Photoimmunotherapy
Photomedicine
Photopharmacology
Photostatin
Sonodynamic therapy
Photosensitizer
Nanodumbbells, being studied for possible use in photodynamic therapy
Neurotherapy
References
External links
International Photodynamic Association
Photodynamic Therapy for Cancer from the NCI
Cancer treatments
Medical physics
Laser medicine
Light therapy | Photodynamic therapy | [
"Physics"
] | 11,194 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
417,014 | https://en.wikipedia.org/wiki/Passive%20transport | Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, as active transport does, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and osmosis.
Passive transport follows Fick's first law.
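For reference, Fick's first law in one dimension can be written as below; this is the standard textbook form, and the notation is chosen here rather than taken from the article:

```latex
% Fick's first law in one dimension:
%   J  - diffusive flux (amount per unit area per unit time)
%   D  - diffusion coefficient
%   C  - concentration; dC/dx is the concentration gradient
% The minus sign directs the flux from high to low concentration.
\[
  J = -D \,\frac{\partial C}{\partial x}
\]
```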
Diffusion
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed as the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from area of low concentration to area of higher concentration, and therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport can not be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. It is also true for supersaturated solutions which are more and more important owing to the spreading of the application of amorphous solid dispersions for drug bioavailability enhancement.
Simple diffusion and osmosis are in some ways similar. Simple diffusion is the passive movement of solute from a high concentration to a lower concentration until the concentration of the solute is uniform throughout and reaches equilibrium. Osmosis is much like simple diffusion but it specifically describes the movement of water (not the solute) across a selectively permeable membrane until there is an equal concentration of water and solute on both sides of the membrane. Simple diffusion and osmosis are both forms of passive transport and require none of the cell's ATP energy.
Speed of diffusion
For passive diffusion, the law of diffusion states that the mean squared displacement is ⟨x²⟩ = 2dDt (with d being the number of dimensions and D the diffusion coefficient). So to diffuse a distance of about x takes a time t ≈ x²/(2dD), and the "average speed" is x/t = 2dD/x. This means that in the same physical environment, diffusion is fast when the distance is small, but slow when the distance is large.
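To make the scaling concrete, here is a small numerical sketch; the diffusion coefficient is an assumed, typical order-of-magnitude value for a small molecule in water, not a figure from the article:

```python
# Illustrative sketch: how diffusion time scales with distance,
# using t = x^2 / (2 * d * D) from the text.
# D is an assumed, typical small-molecule diffusion coefficient in water.

D = 1e-9   # diffusion coefficient, m^2/s (assumed value)
d = 3      # number of spatial dimensions

def diffusion_time(x: float) -> float:
    """Time (s) to diffuse a root-mean-square distance x (m)."""
    return x**2 / (2 * d * D)

for label, x in [("bacterium, ~1 um", 1e-6),
                 ("animal cell, ~10 um", 1e-5),
                 ("1 mm of tissue", 1e-3)]:
    print(f"{label}: {diffusion_time(x):.3g} s")
# Times grow with the square of the distance, which is why diffusion
# suffices for prokaryotes but not for long-range transport in larger
# cells (see the following paragraph).
```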
This can be seen in material transport within the cell. Prokaryotes typically have small bodies, allowing diffusion to suffice for material transport within the cell. Larger cells such as eukaryotes must either have a very low metabolic rate to accommodate the slowness of diffusion, or invest in complex cellular machinery for active transport within the cell, such as kinesin walking along microtubules.
Example of diffusion: gas exchange
A biological example of diffusion is the gas exchange that occurs during respiration within the human body. Upon inhalation, oxygen is brought into the lungs and quickly diffuses across the membrane of alveoli and enters the circulatory system by diffusing across the membrane of the pulmonary capillaries. Simultaneously, carbon dioxide moves in the opposite direction, diffusing across the membrane of the capillaries and entering into the alveoli, where it can be exhaled. The process of moving oxygen into the cells, and carbon dioxide out, occurs because of the concentration gradient of these substances, each moving away from their respective areas of higher concentration toward areas of lower concentration. Cellular respiration is the cause of the low concentration of oxygen and high concentration of carbon dioxide within the blood, which creates the concentration gradient. Because the gases are small and uncharged, they are able to pass directly through the cell membrane without any special membrane proteins. No energy is required because the movement of the gases follows Fick's first law and the second law of thermodynamics.
Facilitated diffusion
Facilitated diffusion, also called carrier-mediated osmosis, is the movement of molecules across the cell membrane via special transport proteins that are embedded in the plasma membrane. Through facilitated diffusion, energy is not required for molecules to pass through the cell membrane. Active transport of protons by H+ ATPases alters the membrane potential, allowing facilitated passive transport of particular ions, such as potassium, down their charge gradient through high-affinity transporters and channels.
Example of facilitated diffusion: GLUT2
An example of facilitated diffusion is when glucose is absorbed into cells through Glucose transporter 2 (GLUT2) in the human body. There are many other types of glucose transport proteins, some that do require energy, and are therefore not examples of passive transport. Since glucose is a large molecule, it requires a specific channel to facilitate its entry across plasma membranes and into cells. When diffusing into a cell through GLUT2, the driving force that moves glucose into the cell is the concentration gradient. The main difference between simple diffusion and facilitated diffusion is that facilitated diffusion requires a transport protein to 'facilitate' or assist the substance through the membrane. After a meal, the cell is signaled to move GLUT2 into membranes of the cells lining the intestines called enterocytes. With GLUT2 in place after a meal and the relative high concentration of glucose outside of these cells as compared to within them, the concentration gradient drives glucose across the cell membrane through GLUT2.
Filtration
Filtration is the movement of water and solute molecules across the cell membrane due to hydrostatic pressure generated by the cardiovascular system. Depending on the size of the membrane pores, only solutes of a certain size may pass through. For example, the membrane pores of the Bowman's capsule in the kidneys are very small, and only albumins, the smallest of the proteins, have any chance of being filtered through. On the other hand, the membrane pores of liver cells are extremely large, allowing a variety of solutes to pass through and be metabolized.
Osmosis
Osmosis is the net movement of water molecules across a selectively permeable membrane from an area of high water potential to an area of low water potential. A cell with a less negative water potential will draw in water, but this also depends on other factors such as solute potential (pressure in the cell, e.g. solute molecules) and pressure potential (external pressure, e.g. cell wall). There are three types of osmotic solutions: isotonic, hypotonic, and hypertonic. An isotonic solution is one in which the extracellular solute concentration is balanced with the concentration inside the cell. In an isotonic solution, water molecules still move between the solutions, but the rates are the same in both directions, so the net water movement is balanced between the inside and the outside of the cell. A hypotonic solution is one in which the solute concentration outside the cell is lower than the concentration inside the cell. In hypotonic solutions, water moves into the cell, down its concentration gradient (from higher to lower water concentration). That can cause the cell to swell; cells that lack a cell wall, such as animal cells, can burst in this solution. A hypertonic solution is one in which the solute concentration outside the cell is higher than the concentration inside the cell. In a hypertonic solution, water moves out of the cell, causing it to shrink.
See also
Active transport
Transport phenomena
References
Transport phenomena
Cellular processes
Membrane biology
Physiology
Cell biology | Passive transport | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 1,646 | [
"Transport phenomena",
"Physical phenomena",
"Cell biology",
"Physiology",
"Chemical engineering",
"Membrane biology",
"Cellular processes",
"Molecular biology"
] |
417,036 | https://en.wikipedia.org/wiki/Agency%20for%20Nuclear%20Projects | The Agency for Nuclear Projects (Nuclear Waste Project Office) is a part of the Nevada state government, under the administration of the Governor of Nevada. The organization is based in Carson City.
The Nevada Legislature created the Commission in response to the federal Nuclear Waste Policy Act of 1982, to assure that the health, safety, and welfare of Nevada's citizens and the State's unique environment and economy are adequately protected from any federal high-level nuclear waste repository and related activities in the state. The seven-member Commission advises the Governor and Legislature on nuclear waste matters and oversees activities of the Agency for Nuclear Projects (Agency). The Agency oversees the U.S. Department of Energy's (DOE) proposed Yucca Mountain nuclear waste repository project, the Federal high-level radioactive waste management program, and related Federal programs. The Agency remains prepared to act to support Nevada's interests as they relate to high-level radioactive waste management.
See also
Yucca Mountain
References
State Agencies and Departments
External links
Nevada Agency for Nuclear Projects
Nuclear Projects
Government agencies established in 1985
1985 establishments in Nevada
Nuclear energy | Agency for Nuclear Projects | [
"Physics",
"Chemistry"
] | 215 | [
"Nuclear energy",
"Radioactivity",
"Nuclear physics"
] |
1,751,047 | https://en.wikipedia.org/wiki/Momentum%20theory | In fluid dynamics, momentum theory or disk actuator theory is a theory describing a mathematical model of an ideal actuator disk, such as a propeller or helicopter rotor, by W.J.M. Rankine (1865), Alfred George Greenhill (1888) and Robert Edmund Froude (1889).
The rotor is modeled as an infinitely thin disc, inducing a constant velocity along the axis of rotation. The basic state of a helicopter is hovering. The disc creates a flow around the rotor. Under certain mathematical premises about the fluid, a mathematical connection can be derived between power, radius of the rotor, torque and induced velocity. Friction is not included.
For a stationary open rotor with no outer duct, such as a helicopter in hover, the power required to produce a given thrust (a numeric sketch follows the list of symbols below) is:

P = √(T³ / (2ρA))
where:
T is the thrust
ρ is the density of air (or other medium)
A is the area of the rotor disc
P is power
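As a plausibility check, the formula can be evaluated numerically; the mass, rotor radius and air density below are assumed, illustrative values, not data from the article:

```python
import math

# Ideal induced hover power from momentum theory: P = sqrt(T^3 / (2 * rho * A)).
# All numbers below are assumed for illustration (roughly a light helicopter).

rho = 1.225            # air density at sea level, kg/m^3
mass = 1100.0          # helicopter mass, kg (assumed)
g = 9.81               # gravitational acceleration, m/s^2
T = mass * g           # in hover, thrust balances weight, N
R = 5.0                # rotor radius, m (assumed)
A = math.pi * R**2     # rotor disc area, m^2

P = math.sqrt(T**3 / (2 * rho * A))
print(f"ideal hover power: {P/1000:.0f} kW")   # about 81 kW
# Real rotors need more power than this ideal figure, since momentum
# theory neglects friction and other losses.
```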
A device which converts the translational energy of the fluid into rotational energy of the axis, or vice versa, is called a Rankine disk actuator. Real-life implementations of such devices include marine and aviation propellers, windmills, helicopter rotors, centrifugal pumps, wind turbines, turbochargers and chemical agitators.
See also
Blade element theory
Circulation (fluid dynamics)
Disk loading
Kutta–Joukowski theorem
References
Fluid dynamics
Propellers
Momentum
Aircraft aerodynamics | Momentum theory | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 289 | [
"Physical quantities",
"Chemical engineering",
"Quantity",
"Piping",
"Moment (physics)",
"Momentum",
"Fluid dynamics"
] |
1,752,414 | https://en.wikipedia.org/wiki/Contractible%20space | In mathematics, a topological space X is contractible if the identity map on X is null-homotopic, i.e. if it is homotopic to some constant map. Intuitively, a contractible space is one that can be continuously shrunk to a point within that space.
Properties
A contractible space is precisely one with the homotopy type of a point. It follows that all the homotopy groups of a contractible space are trivial. Therefore any space with a nontrivial homotopy group cannot be contractible. Similarly, since singular homology is a homotopy invariant, the reduced homology groups of a contractible space are all trivial.
For a nonempty topological space X the following are all equivalent:
X is contractible (i.e. the identity map is null-homotopic).
X is homotopy equivalent to a one-point space.
X deformation retracts onto a point. (However, there exist contractible spaces which do not strongly deformation retract to a point.)
For any path-connected space Y, any two maps f,g: X → Y are homotopic.
For any nonempty space Y, any map f: Y → X is null-homotopic.
The cone on a space X is always contractible. Therefore any space can be embedded in a contractible one (which also illustrates that subspaces of contractible spaces need not be contractible).
Furthermore, X is contractible if and only if there exists a retraction from the cone of X to X.
Every contractible space is path connected and simply connected. Moreover, since all the higher homotopy groups vanish, every contractible space is n-connected for all n ≥ 0.
Locally contractible spaces
A topological space X is locally contractible at a point x if for every neighborhood U of x there is a neighborhood V of x contained in U such that the inclusion of V is null-homotopic in U. A space is locally contractible if it is locally contractible at every point. This definition is occasionally referred to as the "geometric topologist's locally contractible," though it is the most common usage of the term. In Hatcher's standard Algebraic Topology text, this definition is referred to as "weakly locally contractible," though that term has other uses.
If every point has a local base of contractible neighborhoods, then we say that X is strongly locally contractible. Contractible spaces are not necessarily locally contractible nor vice versa. For example, the comb space is contractible but not locally contractible (if it were, it would be locally connected, which it is not). Locally contractible spaces are locally n-connected for all n ≥ 0. In particular, they are locally simply connected, locally path connected, and locally connected. The circle is (strongly) locally contractible but not contractible.
Strong local contractibility is a strictly stronger property than local contractibility; the counterexamples are sophisticated, the first being given by Borsuk and Mazurkiewicz in their paper Sur les rétractes absolus indécomposables, C.R. Acad. Sci. Paris 199 (1934), 110–112.
There is some disagreement about which definition is the "standard" definition of local contractibility; the first definition is more commonly used in geometric topology, especially historically, whereas the second definition fits better with the typical usage of the term "local" with respect to topological properties. Care should always be taken regarding the definitions when interpreting results about these properties.
Examples and counterexamples
Any Euclidean space is contractible, as is any star domain on a Euclidean space (an explicit contraction is sketched after this list).
The Whitehead manifold is contractible.
Spheres of any finite dimension are not contractible, although they are simply connected in dimension at least 2.
The unit sphere in an infinite-dimensional Hilbert space is contractible.
The house with two rooms is a standard example of a space which is contractible, but not intuitively so.
The Dunce hat is contractible, but not collapsible.
The cone on a Hawaiian earring is contractible (since it is a cone), but not locally contractible or even locally simply connected.
All manifolds and CW complexes are locally contractible, but in general not contractible.
The Warsaw circle is obtained by "closing up" the topologist's sine curve by an arc connecting (0,−1) and (1,sin(1)). It is a one-dimensional continuum whose homotopy groups are all trivial, but it is not contractible.
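To make the first example explicit, here is the standard straight-line contraction; the argument is textbook material rather than anything specific to this article:

```latex
% Straight-line contraction of a star domain X with star centre x_0:
% H : X \times [0,1] \to X is continuous, H(.,0) = id, H(.,1) = const.
\[
  H(x,t) = (1-t)\,x + t\,x_0
\]
% Each segment from x to x_0 lies in X by the star-domain property,
% so H is a null-homotopy of the identity map and X is contractible.
% Taking X = \mathbb{R}^n with x_0 = 0 recovers the Euclidean case.
```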
See also
References
Topology
Homotopy theory
Properties of topological spaces | Contractible space | [
"Physics",
"Mathematics"
] | 967 | [
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
1,754,980 | https://en.wikipedia.org/wiki/Multi-component%20reaction | A multi-component reaction (or MCR), sometimes referred to as a "Multi-component Assembly Process" (or MCAP), is a chemical reaction where three or more compounds react to form a single product. By definition, multicomponent reactions are those reactions whereby more than two reactants combine in a sequential manner to give highly selective products that retain the majority of the atoms of the starting materials.
History and types of multicomponent reactions
Multicomponent reactions have been known for over 150 years. The first documented multicomponent reaction was the Strecker synthesis of α-amino cyanides in 1850 from which α-amino acids could be derived. A multitude of MCRs exist today, of which the isocyanide based MCRs are the most documented. Other MCRs include free-radical mediated MCRs, MCRs based on organoboron compounds and metal-catalyzed MCRs.
Isocyanide based MCRs are most frequently exploited because the isocyanide is an extraordinary functional group. It is believed to exhibit resonance between its tetravalent and divalent carbon forms. This induces the isocyanide group to undergo both electrophilic and nucleophilic reactions at the CII atom, which then converts to the CIV form in an exothermic reaction. The occurrence of isocyanides in natural products has also made it a useful functional group. The two most important isocyanide-based multicomponent reactions are the Passerini 3-component reaction to produce α-acyloxy carboxamides and the Ugi 4-component reaction, which yields the α-amino carboxamides.
Examples of three component reactions:
Alkyne trimerisation
Biginelli reaction
Bucherer–Bergs reaction
Gewald reaction
Grieco three-component coupling
Hantzsch pyridine synthesis
Kabachnik–Fields reaction
Mannich reaction
Passerini reaction
Pauson–Khand reaction
Petasis reaction
Strecker amino acid synthesis
Ugi reaction
Asinger reaction
A3 coupling reaction
The exact nature of this type of reaction is often difficult to assess: in collision theory, a simultaneous interaction of three or more different molecules is unlikely and would result in a low reaction rate. These reactions are therefore more likely to involve a series of bimolecular reactions.
New MCRs are found by building a chemical library from combinatorial chemistry or by combining existing MCRs. For example, a 7-component MCR results from combining the Ugi reaction with the Asinger reaction. MCRs are an important tool in new drug discovery. They can often be extended into combinatorial, solid-phase or flow syntheses for developing new lead structures of active agents.
See also
A tandem reaction is a consecutive series of intramolecular organic reactions.
References
External links
Multicomponent reactions
Presentation on Multicomponent Reactions
Chemical kinetics
Chemical synthesis | Multi-component reaction | [
"Chemistry"
] | 609 | [
"Chemical kinetics",
"Chemical reaction engineering",
"nan",
"Chemical synthesis"
] |
1,755,301 | https://en.wikipedia.org/wiki/Ugi%20reaction | In organic chemistry, the Ugi reaction is a multi-component reaction involving a ketone or aldehyde, an amine, an isocyanide and a carboxylic acid to form a bis-amide.
The reaction is named after Ivar Karl Ugi, who first reported this reaction in 1959.
The Ugi reaction is exothermic and usually complete within minutes of adding the isocyanide. High concentrations (0.5–2.0 M) of reactants give the highest yields. Polar, aprotic solvents, like DMF, work well; however, methanol and ethanol have also been used successfully. This uncatalyzed reaction has an inherently high atom economy, as only a molecule of water is lost, and the chemical yield is in general high. Several reviews have been published.
Because the reaction products are potential protein mimetics, there have been many attempts to develop an enantioselective Ugi reaction, the first successful report of which came in 2018.
Reaction mechanism
One plausible reaction mechanism is as follows:
Amine 1 and ketone 2 form the imine 3 with loss of one equivalent of water. Proton exchange with carboxylic acid 4 activates the iminium ion 5 for nucleophilic addition of the isocyanide 6 with its terminal carbon atom to nitrilium ion 7. A second nucleophilic addition takes place at this intermediate with the carboxylic acid anion to 8. The final step is a Mumm rearrangement with transfer of the R4 acyl group from oxygen to nitrogen. All reaction steps are reversible except for the Mumm rearrangement, which drives the whole reaction sequence.
In the related Passerini reaction (lacking the amine), the isocyanide reacts directly with the carbonyl group, but other aspects of the reaction are the same. This reaction can take place concurrently with the Ugi reaction, acting as a source of impurities.
Variations
Combination of reaction components
The usage of bifunctional reaction components greatly increases the diversity of possible reaction products. Likewise, several combinations lead to structurally interesting products. The Ugi reaction has been applied in combination with an intramolecular Diels-Alder reaction in an extended multistep reaction.
A reaction in its own right is the Ugi–Smiles reaction with the carboxylic acid component replaced by a phenol. In this reaction the Mumm rearrangement in the final step is replaced by the Smiles rearrangement.
Another combination (with separate workup of the Ugi intermediate) is one with the Buchwald–Hartwig reaction. In the Ugi–Heck reaction a Heck aryl-aryl coupling takes place in a second step.
Combination of amine and carboxylic acid
Several groups have used β-amino acids in the Ugi reaction to prepare β-lactams.
This approach relies on acyl transfer in the Mumm rearrangement to form the four-membered ring. The reaction proceeds in moderate yield at room temperature in methanol with formaldehyde or a variety of aryl aldehydes. For example, p-nitrobenzaldehyde reacts to form the corresponding β-lactam in 71% yield as a 4:1 diastereomeric mixture.
Combination of carbonyl compound and carboxylic acid
Zhang et al. have combined aldehydes with carboxylic acids and used the Ugi reaction to create lactams of various sizes. Short et al. have prepared γ-lactams from keto-acids on solid-support.
Applications
Chemical libraries
The Ugi reaction is one of the first reactions to be exploited explicitly to develop chemical libraries. These chemical libraries are sets of compounds that can be tested repeatedly. Using the principles of combinatorial chemistry, the Ugi reaction offers the possibility to synthesize a great number of compounds in one reaction, by the reaction of various ketones (or aldehydes), amines, isocyanides and carboxylic acids. These libraries can then be tested with enzymes or living organisms to find new active pharmaceutical substances. One drawback is the lack of chemical diversity of the products. Using the Ugi reaction in combination with other reactions enlarges the chemical diversity of possible products.
Examples of Ugi reaction combinations:
Isoquinolines from Ugi and Heck reactions.
Pharmaceutical industry
Crixivan can be prepared using the Ugi reaction.
Additionally, many of the caine-type anesthetics are synthesized using this reaction. Examples include lidocaine and bupivacaine.
See also
Passerini reaction
References
Carbon-carbon bond forming reactions
Multiple component reactions
Name reactions
Amide synthesis reactions | Ugi reaction | [
"Chemistry"
] | 981 | [
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions",
"Amide synthesis reactions"
] |
19,163,321 | https://en.wikipedia.org/wiki/Leak%20detection | Pipeline leak detection is used to determine if (and in some cases where) a leak has occurred in systems which contain liquids and gases. Methods of detection include hydrostatic testing, tracer-gas leak testing, infrared, laser technology, and acoustic or sonar technologies. Some technologies are used only during initial pipeline installation and commissioning, while other technologies can be used for continuous monitoring during service.
Pipeline networks are a mode of transportation for oil, gases, and other fluid products. As a means of long-distance transport, pipelines have to fulfill high demands of safety, reliability and efficiency. If properly maintained, pipelines can last indefinitely without leaks. Some significant leaks that do occur are caused by damage from nearby excavation, but most leaks are caused by corrosion, equipment failure and incorrect operation. If a pipeline is not properly maintained, it can corrode, particularly at construction joints, low points where moisture collects, or locations with imperfections in the pipe. Other reasons for leaks include exterior force damage (such as damage by car collisions or drilling rigs) and natural forces (such as earth movement, heavy rain and flooding, lightning, and temperature).
Overview
The most common leak detection method for pipeline operators is called the Supervisory Control And Data Acquisition (SCADA) system. This system uses a series of sensors to track data such as pressure, flow rates, temperature, and whether valves are open or closed. The sensors relay the information to a control room where operators determine the legitimacy of the leak alarms. Some systems have added the Computational Pipeline Monitoring System (CPM), whose main task is to detect leaks. These systems have been reported by pipeline operators to the US Department of Transportation's Pipeline and Hazardous Materials Safety Administration to be inefficient in leak detection. Even with these in place, the SCADA system is reported to have detected only 19% of leaks, and the CPM system only detecting 10% of leaks.
The primary purpose of leak detection systems (LDS) is to help pipeline controllers to detect and localize leaks. LDS provide alarms and display other related data to the pipeline controllers to assist decision-making. Pipeline leak detection systems can also enhance productivity and system reliability thanks to reduced downtime and inspection time.
According to the API document "RP 1130", LDS are divided into internally based LDS and externally based LDS. Internally based systems use field instrumentation (for example flow, pressure or fluid temperature sensors) to monitor internal pipeline parameters. Externally based systems use a different, independent set of field instrumentation (for example infrared radiometers or thermal cameras, vapor sensors, acoustic microphones or fiber-optic cables) to monitor external pipeline parameters.
Rules and regulations
Some countries formally regulate pipeline operation.
API RP 1130 "Computational Pipeline Monitoring for Liquids" (US)
This recommended practice (RP) focuses on the design, implementation, testing and operation of LDS that use an algorithmic approach. The purpose of this recommended practice is to assist the Pipeline Operator in identifying issues relevant to the selection, implementation, testing, and operation of an LDS.
TRFL (Germany)
TRFL is the abbreviation for "Technische Regel für Fernleitungsanlagen" (Technical Rule for Pipeline Systems). The TRFL summarizes requirements for pipelines being subject of official regulations. It covers pipelines transporting flammable liquids, pipelines transporting liquids that are dangerous for water, and most of the pipelines transporting gas. Five different kinds of LDS or LDS functions are required:
Two independent LDS for continuous leak detection during steady-state operation. One of these systems or an additional one must also be able to detect leaks during transient operation, e.g. during start-up of the pipeline
One LDS for leak detection during shut-in operation
One LDS for creeping leaks
One LDS for fast leak location
Requirements
API 1155 (replaced by API RP 1130) defines the following important requirements for an LDS:
Sensitivity: An LDS must ensure that the loss of fluid as a result of a leak is as small as possible. This places two requirements on the system: it must detect small leaks, and it must detect them quickly.
Reliability: The user must be able to trust the LDS. This means that it must correctly report any real alarms, but it is equally important that it does not generate false alarms.
Accuracy: Some LDS are able to calculate leak flow and leak location. This must be done accurately.
Robustness: The LDS should continue to operate in non-ideal circumstances. For example, in case of a transducer failure, the system should detect the failure and continue to operate (possibly with necessary compromises such as reduced sensitivity).
Steady-state and transient conditions
During steady-state conditions, the flow, pressures, etc. in the pipeline are (more or less) constant over time. During transient conditions, these variables may change rapidly. The changes propagate like waves through the pipeline at the speed of sound of the fluid. Transient conditions occur in a pipeline, for example, at start-up, if the pressure at inlet or outlet changes (even if the change is small), when a batch changes, or when multiple products are in the pipeline. Gas pipelines are almost always in transient conditions, because gases are very compressible. Even in liquid pipelines, transient effects cannot be disregarded most of the time. LDS should allow for detection of leaks under both conditions to provide leak detection during the entire operating time of the pipeline.
Internally based LDS
Internally based systems use field instrumentation (e.g. for flow, pressure and fluid temperature) to monitor internal pipeline parameters which are used to detect possible leaks. System cost and complexity of internally based LDS are moderate because they use existing field instrumentation. This kind of LDS is used for standard safety requirements.
Pressure/flow monitoring
A leak changes the hydraulics of the pipeline, and therefore changes the pressure or flow readings after some time. Local monitoring of pressure or flow at only one point can therefore provide simple leak detection. As it is done locally it requires in principle no telemetry. It is only useful in steady-state conditions, however, and its ability to deal with gas pipelines is limited.
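A minimal sketch of such a single-point monitor is below; the baseline, threshold and persistence count are assumed tuning values, not figures from any standard:

```python
# Minimal local pressure-monitoring sketch: alarm on a sustained
# pressure drop at a single measurement point (assumed thresholds).

def pressure_alarm(readings, baseline, drop=2.0, persistence=5):
    """Alarm if pressure stays more than `drop` bar below `baseline`
    for `persistence` consecutive samples."""
    count = 0
    for p in readings:
        count = count + 1 if baseline - p > drop else 0
        if count >= persistence:
            return True
    return False

print(pressure_alarm([62.1, 61.9, 59.4, 59.2, 59.1, 59.0, 58.9],
                     baseline=62.0))   # True: sustained ~3 bar drop
# Requiring persistence suppresses one-off sensor glitches, at the
# cost of a slightly slower alarm.
```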
Acoustic pressure waves
The acoustic pressure wave method analyses the rarefaction waves produced when a leak occurs. When a pipeline wall breakdown occurs, fluid or gas escapes in the form of a high-velocity jet. This produces negative pressure waves which propagate in both directions within the pipeline and can be detected and analyzed. The operating principle of the method is based on the important property of pressure waves that they travel over long distances at the speed of sound, guided by the pipeline walls. The amplitude of a pressure wave increases with the leak size. A complex mathematical algorithm analyzes data from pressure sensors and is able, in a matter of seconds, to point to the location of the leak to within 50 m (164 ft). Experimental data have shown the method's ability to detect leaks less than 3 mm (0.1 inch) in diameter and to operate with the lowest false alarm rate in the industry, less than one false alarm per year.
However, the method is unable to detect an ongoing leak after the initial event: after the pipeline wall breakdown (or rupture), the initial pressure waves subside and no subsequent pressure waves are generated. Therefore, if the system fails to detect the leak (for instance, because the pressure waves were masked by transient pressure waves caused by an operational event such as a change in pumping pressure or valve switching), the system will not detect the ongoing leak.
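As a sketch of the localization idea (illustrative only; the formula is elementary wave kinematics and the numbers are assumptions, not the proprietary algorithm described above): the rarefaction wave travels at the speed of sound in both directions, so the difference in arrival times at the two ends pins down the leak position.

```python
# Illustrative sketch of negative-pressure-wave leak localization.
# A leak at distance x from the inlet sends a wave that arrives at the
# inlet after x/c and at the outlet after (L - x)/c, so
#   dt = t_inlet - t_outlet = (2x - L)/c   =>   x = (L + c*dt)/2.
# All numbers are assumed for illustration.

def locate_leak(L: float, c: float, t_inlet: float, t_outlet: float) -> float:
    """Distance of the leak from the inlet, in the same units as L."""
    dt = t_inlet - t_outlet
    return (L + c * dt) / 2.0

L = 20_000.0   # pipeline length, m (assumed)
c = 1_000.0    # speed of sound in the fluid, m/s (assumed, liquid-like)
# Wave detected at the inlet 4 s and at the outlet 12 s after the event:
print(f"leak at {locate_leak(L, c, 4.0, 12.0):.0f} m from the inlet")
# -> 6000 m; only the arrival-time *difference* is needed, not the
#    (unknown) absolute time of the rupture itself.
```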
Balancing methods
These methods are based on the principle of conservation of mass. In the steady state, the mass flow ṁin entering a leak-free pipeline will balance the mass flow ṁout leaving it; any drop in the mass leaving the pipeline (a mass imbalance ṁin − ṁout > 0) indicates a leak. Balancing methods measure ṁin and ṁout using flowmeters and compute the imbalance ṁin − ṁout, which is an estimate of the unknown, true leak flow. Comparing this imbalance (typically monitored over a number of periods) against a leak alarm threshold generates an alarm if the monitored imbalance exceeds the threshold. Enhanced balancing methods additionally take into account the change rate of the mass inventory of the pipeline. Names that are used for enhanced line balancing techniques are volume balance, modified volume balance, and compensated mass balance.
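A minimal sketch of the idea (assumed, simplified interface; real CPM systems are far more elaborate): average the measured imbalance over a window and alarm when it exceeds a threshold.

```python
# Minimal mass-balance leak detection sketch (illustrative assumptions:
# perfect flowmeters, steady state, no inventory/packing correction).

from collections import deque

class MassBalanceLDS:
    def __init__(self, threshold: float, window: int):
        self.threshold = threshold          # alarm threshold, kg/s
        self.imbalances = deque(maxlen=window)

    def update(self, m_in: float, m_out: float) -> bool:
        """Feed one pair of flow readings (kg/s); True means leak alarm."""
        self.imbalances.append(m_in - m_out)
        avg = sum(self.imbalances) / len(self.imbalances)
        return avg > self.threshold         # averaging suppresses meter noise

lds = MassBalanceLDS(threshold=0.5, window=10)
for m_in, m_out in [(100.2, 100.1)] * 10 + [(100.2, 99.0)] * 10:
    if lds.update(m_in, m_out):
        print(f"leak alarm: sustained imbalance of {m_in - m_out:.1f} kg/s")
        break
```

The window length trades sensitivity against false alarms: a longer window detects smaller leaks but reacts more slowly, which mirrors the sensitivity requirement discussed above.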
State-observer-based methods
These methods are based on state observers which are designed from fluid mathematical models expressed in state-space representation.
These methods can be classified into two types: infinite-dimensional observers and finite-dimensional observers. The first type is based on a pair of quasi-linear hyperbolic partial differential equations: a momentum equation and a continuity equation that represent the fluid dynamics in a pipeline. The finite-dimensional observers are constructed from a lumped version of the momentum and continuity equations.
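For orientation, a commonly used form of those two equations (the standard water-hammer model; the notation here is assumed, not taken from the article) is:

```latex
% One-dimensional pipeline flow model (rigid pipe, slightly
% compressible fluid) often underlying observers and RTTMs:
\begin{align*}
  \frac{\partial p}{\partial t}
    + \frac{\rho a^2}{A}\frac{\partial Q}{\partial x} &= 0 ,\\
  \frac{\partial Q}{\partial t}
    + \frac{A}{\rho}\frac{\partial p}{\partial x}
    + \frac{f\,Q\,\lvert Q\rvert}{2\,D\,A} &= 0 ,
\end{align*}
% with p the pressure, Q the volumetric flow, a the speed of sound,
% A the cross-sectional area, D the inner diameter, \rho the density
% and f the Darcy friction factor.
```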
Several types of observers have been used for leak detection, for instance Kalman filters, high gain observers, sliding mode observers and Luenberger-type observers.
Statistical methods
Statistical LDS use statistical methods (e.g. from the field of decision theory) to analyse pressure/flow at only one point, or the imbalance, in order to detect a leak. This creates the opportunity to optimise the leak decision if some statistical assumptions hold. A common approach is the use of a hypothesis test procedure, with the null hypothesis H0: "no leak" tested against the alternative hypothesis H1: "leak".
This is a classical detection problem, and there are various solutions known from statistics.
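One classical solution is a CUSUM-type sequential test; the sketch below is a generic textbook version with assumed tuning parameters, not a specific commercial implementation:

```python
# Generic CUSUM change-detection sketch for the imbalance signal.
# H0: imbalance has mean 0 (no leak); H1: the mean has shifted upward.
# mu_shift and h are tuning assumptions trading sensitivity against
# false alarms.

def cusum_alarm(samples, mu_shift=0.5, h=4.0):
    """Return the index at which an alarm is raised, or None."""
    g = 0.0
    for i, z in enumerate(samples):
        # accumulate evidence for an upward mean shift of ~mu_shift
        g = max(0.0, g + z - mu_shift / 2.0)
        if g > h:
            return i
    return None

import random
random.seed(1)
no_leak = [random.gauss(0.0, 0.3) for _ in range(50)]
leak = [random.gauss(1.0, 0.3) for _ in range(50)]   # mean shift = leak flow
print("alarm at sample:", cusum_alarm(no_leak + leak))
# The alarm fires a few samples after the change point at index 50.
```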
RTTM methods
RTTM means "Real-Time Transient Model". RTTM LDS use mathematical models of the flow within a pipeline based on basic physical laws such as conservation of mass, conservation of momentum, and conservation of energy. RTTM methods can be seen as an enhancement of balancing methods, as they additionally use the conservation principles of momentum and energy. An RTTM makes it possible to calculate mass flow, pressure, density and temperature at every point along the pipeline in real time with the help of mathematical algorithms. RTTM LDS can readily model steady-state and transient flow in a pipeline, so leaks can be detected during both steady-state and transient conditions. With properly functioning instrumentation, leak rates may also be estimated using available formulas.
E-RTTM methods
E-RTTM stands for "Extended Real-Time Transient Model"; it combines RTTM technology with statistical methods. Leak detection is thus possible during steady-state and transient conditions with high sensitivity, while false alarms are avoided by means of the statistical methods.
For the residual method, an RTTM module calculates estimates of the mass flow at inlet and outlet, respectively. This can be done using measurements of pressure and temperature at the inlet and at the outlet. These estimated mass flows are compared with the measured mass flows, yielding an inlet residual x (measured minus estimated inlet flow) and an outlet residual y (measured minus estimated outlet flow). These residuals are close to zero if there is no leak; otherwise the residuals show a characteristic signature. In a next step, the residuals are subject to a leak signature analysis. This module analyses their temporal behaviour by extracting the leak signature and comparing it with leak signatures in a database ("fingerprint"). A leak alarm is declared if the extracted leak signature matches the fingerprint.
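A schematic of the residual-plus-signature flow is sketched below; the model call, coefficient and printed values are placeholders, since a real E-RTTM uses a full transient hydraulic model:

```python
# Schematic E-RTTM-style residual check. `model_estimate` is a
# placeholder for a real-time transient hydraulic model; in this toy
# version the temperature readings are accepted but not used.

def model_estimate(p_in, T_in, p_out, T_out):
    """Placeholder model: estimated (inlet, outlet) mass flow, kg/s."""
    k = 0.04                       # assumed linear flow coefficient
    m = k * (p_in - p_out)         # leak-free steady-state estimate
    return m, m

def residuals(meas_in, meas_out, p_in, T_in, p_out, T_out):
    est_in, est_out = model_estimate(p_in, T_in, p_out, T_out)
    return meas_in - est_in, meas_out - est_out

# Leak-free case: both residuals are near zero.
print(residuals(4.0, 4.0, 200.0, 15.0, 100.0, 14.0))   # ~(0.0, 0.0)
# Leak case: the residual pair deviates with a characteristic pattern,
# which the signature-analysis module would compare against its
# fingerprint database before declaring an alarm.
print(residuals(4.3, 3.7, 200.0, 15.0, 100.0, 14.0))   # ~(0.3, -0.3)
```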
Externally based LDS
Externally based systems use local, dedicated sensors. Such LDS are highly sensitive and accurate, but system cost and complexity of installation are usually very high; applications are therefore limited to special high-risk areas, e.g. near rivers or nature-protection areas.
Analytic thermal leak detector for above ground pipelines
Video analytics driven thermal imaging using uncooled microbolometer infrared sensors is emerging as a new and effective method of visualizing, detecting and generating alerts of unplanned surface emissions of liquids and hydrocarbon gas liquids. Detection to alarm generation takes less than 30 seconds. This technology is suitable for above-ground piping facilities, such as pump stations, refineries, storage sites, mines, chemical plants, water crossings, and water treatment plants. The need for new solutions in this area is driven by the fact that more than half of pipeline leaks occur at facilities.
High-quality thermographic technology accurately measures the infrared radiation (thermal heat) emitted by objects and visualizes it as grey-scale imagery without the need for ambient lighting. The monitored petroleum product (e.g. oil) is distinguished from background objects by this heat difference. The addition of an analytic software component, typically tunable to a specific application or environment, enables automated onsite leak analysis, validation and reporting, thereby reducing reliance on manpower. A leak appearing within an analytic region (a rule added to the camera) is immediately analysed for its attributes, including thermal temperature, size, and behaviour (e.g. spraying, pooling, spilling). When a leak is determined to be valid based on set parameters, an alarm notification with leak video is generated and sent to a monitoring station.
Optimal detection distance varies and is influenced by camera lens size, resolution, field of view, thermal detection range and sensitivity, leak size, and other factors. The system's layers of filters and immunity to environmental elements such as snow, ice, rain, fog and glare contribute to reducing false alarms. The video monitoring architecture can be integrated with existing leak detection and repair (LDAR) systems, including SCADA networks, as well as other surveillance systems.
Digital oil leak detection cable
Digital sense cables consist of a braid of semi-permeable internal conductors protected by a permeable insulating moulded braid. An electrical signal is passed through the internal conductors and is monitored by an inbuilt microprocessor inside the cable connector. Escaping fluids pass through the external permeable braid and make contact with the internal semi-permeable conductors. This causes a change in the electrical properties of the cable that is detected by the microprocessor. The microprocessor can locate the fluid to within a 1-metre resolution along its length and provide an appropriate signal to monitoring systems or operators. The sense cables can be wrapped around pipelines, buried sub-surface with pipelines or installed as a pipe-in-pipe configuration.
Infrared radiometric pipeline testing
Infrared thermographic pipeline testing has shown itself to be both accurate and efficient in detecting and locating subsurface pipeline leaks, voids caused by erosion, deteriorated pipeline insulation, and poor backfill. When a pipeline leak has allowed a fluid, such as water, to form a plume near a pipeline, the fluid has a thermal conductance different from the dry soil or backfill. This will be reflected in different surface temperature patterns above the leak location. A high-resolution infrared radiometer allows entire areas to be scanned and the resulting data to be displayed as pictures with areas of differing temperatures designated by differing grey tones on a black & white image or by various colours on a colour image. This system measures surface energy patterns only, but the patterns that are measured on the surface of the ground above a buried pipeline can help show where pipeline leaks and resulting erosion voids are forming; it detects problems as deep as 30 meters below the ground surface.
Acoustic emission detectors
Escaping liquids create an acoustic signal as they pass through a hole in the pipe. Acoustic sensors affixed to the outside of the pipeline can create a baseline acoustic "fingerprint" of the line from the internal noise of the pipeline in its undamaged state. When a leak occurs, a resulting low-frequency acoustic signal is detected and analysed. Deviations from the baseline "fingerprint" signal an alarm. New sensors have improved frequency band selection, time delay range selection, etc. This makes the graphs more distinct and easy to analyse.
There are other ways to detect leakage while reducing the cost of exploratory excavation. Ground geophones with signal filtering are very useful for pinpointing leak locations. A pressurized water jet escaping underground creates a faint noise which is muffled on its way to the surface; the maximum signal can be picked up directly above the leak position. Some types of gases escaping from a pipeline will also create a range of sounds.
Apart from passive detection, active sonar-based methods for leak detection have been proposed, based on characteristic responses from multiphase bubbles and antibubbles.
Vapour-sensing tubes
The vapour-sensing tube leak detection method involves the installation of a tube along the entire length of the pipeline. This tube – in cable form – is highly permeable to the substances to be detected in the particular application. If a leak occurs, the substances to be measured come into contact with the tube in the form of vapour, gas or dissolved in water. In the event of a leak, some of the leaking substance diffuses into the tube. After a certain period of time, the inside of the tube produces an accurate image of the substances surrounding the tube. In order to analyse the concentration distribution present in the sensor tube, a pump pushes the column of air in the tube past a detection unit at a constant speed. The detector unit at the end of the sensor tube is equipped with gas sensors. Every increase in gas concentration results in a pronounced "leak peak".
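Because the air column is pumped past the detector at a constant speed, the position of the "leak peak" follows from simple arithmetic; a minimal Python sketch (the pump speed and timing values are illustrative assumptions):

```python
# Locating the leak from the "leak peak" arrival time (sketch; the
# pump speed and sample timing below are assumed values).

PUMP_SPEED_M_PER_S = 2.5  # speed of the air column in the tube (assumed)

def leak_position(peak_time_s, pump_speed=PUMP_SPEED_M_PER_S):
    """Distance of the leak from the detector end of the sensor tube:
    gas originally at distance d arrives at the detector after t = d / v."""
    return pump_speed * peak_time_s

print(leak_position(480.0))  # peak after 8 minutes -> leak ~1200 m away
```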
Fibre-optic leak detection
At least two fibre-optic leak detection methods are being commercialized: Distributed Temperature Sensing (DTS) and Distributed Acoustic Sensing (DAS). The DTS method involves the installation of a fibre-optic cable along the length of pipeline being monitored. The substances to be measured come into contact with the cable when a leak occurs, changing the temperature of the cable and thereby the reflection of the laser beam pulse, signalling a leak. The location is determined by measuring the time delay between when the laser pulse was emitted and when the reflection is detected. This only works if the substance is at a temperature different from the ambient environment. In addition, distributed fibre-optic temperature sensing makes it possible to measure the temperature along the entire pipeline: by scanning the full length of the fibre, a temperature profile is obtained from which leaks can be detected.
The DAS method involves a similar installation of fibre-optic cable along the length of pipeline being monitored. Vibrations caused by a substance leaving the pipeline via a leak change the reflection of the laser beam pulse, signalling a leak. The location is determined by measuring the time delay between when the laser pulse was emitted and when the reflection is detected, as sketched below. This technique can also be combined with the Distributed Temperature Sensing method to provide a temperature profile of the pipeline.
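A minimal sketch of the time-delay localisation common to both DTS and DAS, assuming a typical group refractive index for silica fibre (the index value is an assumption):

```python
# Locating a disturbance along the fibre from the pulse round-trip time
# (standard OTDR-style calculation; the refractive index is a typical
# value for silica fibre, assumed here).

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBRE = 1.468            # group refractive index of the fibre (assumed)

def event_distance(round_trip_time_s):
    """Distance to the reflection event: the light travels out and back,
    hence the division by two."""
    return C_VACUUM / N_FIBRE * round_trip_time_s / 2.0

print(event_distance(100e-6))  # ~10.2 km for a 100 microsecond delay
```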
Pipeline flyovers
Flyovers of the pipeline are frequently carried out either to confirm the location of a suspected leak or to detect and locate small releases that cannot be identified by other methods. Typically the flyover of the right of way is recorded by video, which may apply some image filtering, such as thermal imaging. Larger spills are typically identified by a "sheen" in wetlands or an area of dead vegetation around the release location.
Flyovers are typically scheduled and not recommended as a primary leak-detection method. They may be used to rapidly confirm the presence and location of a leak.
Biological leak detection
Biological methods of leak detection include the use of dogs, which are more likely to be used once a release has been identified but not located because of its small size, and observation by the landscapers who keep the pipeline right of way clear.
Several companies provide dogs trained to identify the scent of a release. Typically a technician injects into the pipeline a fluid that the scent dogs are trained to track; the dogs then direct handlers towards the leak. Because they are trained to indicate at the strongest concentration, they can typically pinpoint a leak to within a metre. It typically takes 24 to 48 hours to mobilise a team, and locating a release may take several days depending on the remoteness of the area.
Pipeline rights of way are kept clear by landscapers who are also trained to look for signs of pipeline releases. This is typically a scheduled process and should not be considered a primary form of leak detection.
See also
Pipeline pre-commissioning
References
Natural gas safety
Pipeline transport | Leak detection | [
"Chemistry"
] | 4,037 | [
"Natural gas safety",
"Natural gas technology"
] |
19,172,363 | https://en.wikipedia.org/wiki/Total%20curvature | In the mathematical study of the differential geometry of curves, the total curvature of an immersed plane curve is the integral of curvature along the curve taken with respect to arc length: $\int_a^b k(s)\,ds$.
The total curvature of a closed curve is always an integer multiple $2\pi N$ of $2\pi$, where N is called the index of the curve or turning number – it is the winding number of the unit tangent vector about the origin, or equivalently the degree of the map to the unit circle assigning to each point of the curve the unit velocity vector at that point. This map is similar to the Gauss map for surfaces.
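For a finely sampled closed curve, the turning number can be approximated numerically by summing the exterior angles between successive edges; a minimal Python sketch (the test curve and sampling density are arbitrary choices):

```python
import math

def turning_number(points):
    """Sum of exterior angles (in turns) of a closed polygonal curve;
    for a fine sampling of a smooth curve this approximates
    (1 / 2*pi) times the integral of k ds."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        x2, y2 = points[(i + 2) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)   # direction of edge i
        a2 = math.atan2(y2 - y1, x2 - x1)   # direction of edge i+1
        d = a2 - a1
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        total += d
    return total / (2 * math.pi)

# Unit circle sampled at 100 points: turning number 1.
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(100)]
print(round(turning_number(circle)))  # 1
```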
Comparison to surfaces
This relationship between a local geometric invariant, the curvature, and a global topological invariant, the index, is characteristic of results in higher-dimensional Riemannian geometry such as the Gauss–Bonnet theorem.
Invariance
According to the Whitney–Graustein theorem, the total curvature is invariant under a regular homotopy of a curve: it is the degree of the Gauss map. However, it is not invariant under homotopy: passing through a kink (cusp) changes the turning number by 1.
By contrast, winding number about a point is invariant under homotopies that do not pass through the point, and changes by 1 if one passes through the point.
Generalizations
A finite generalization is that the exterior angles of a triangle, or more generally any simple polygon, add up to 360° = 2π radians, corresponding to a turning number of 1. More generally, polygonal chains that do not go back on themselves (no 180° angles) have well-defined total curvature, interpreting the curvature as point masses at the angles.
The total absolute curvature of a curve is defined in almost the same way as the total curvature, but using the absolute value of the curvature instead of the signed curvature.
It is 2π for convex curves in the plane, and larger for non-convex curves. It can also be generalized to curves in higher dimensional spaces by flattening out the tangent developable to the curve into a plane, and computing the total curvature of the resulting curve. That is, the total curvature of a curve in n-dimensional space is
$\int \kappa(s)\,\operatorname{sgn}(\kappa_{n-1}(s))\,ds$
where $\kappa_{n-1}$ is the last Frenet curvature (the torsion of the curve) and $\operatorname{sgn}$ is the signum function.
The minimum total absolute curvature of any three-dimensional curve representing a given knot is an invariant of the knot. This invariant has the value 2π for the unknot, but by the Fáry–Milnor theorem it is at least 4π for any other knot.
References
Curves
Curvature (mathematics) | Total curvature | [
"Physics"
] | 527 | [
"Geometric measurement",
"Physical quantities",
"Curvature (mathematics)"
] |
19,174,354 | https://en.wikipedia.org/wiki/Polyaddition | Polyaddition (or addition polymerisation) is a polymerization reaction that forms polymers via individual independent addition reactions. Polyaddition occurs as a reaction between functional groups on molecules with low degrees of polymerization, such as dimers, trimers and oligomers, to form species of higher molar mass. Only at nearly complete conversion does high-molar-mass polymer form, as in polycondensation and in contrast to chain polymerization.
A typical polyaddition is the formation of a polyurethane.
References
Polymer chemistry | Polyaddition | [
"Chemistry",
"Materials_science",
"Engineering"
] | 107 | [
"Polymer stubs",
"Organic chemistry stubs",
"Materials science",
"Polymer chemistry"
] |
19,174,720 | https://en.wikipedia.org/wiki/Electric%20battery | An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons. When a battery is connected to an external electric load, those negatively charged electrons flow through the circuit and reach the positive terminal, causing a redox reaction by attracting positively charged ions (cations). The battery thus converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell.
Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones.
Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines.
History
Invention
Benjamin Franklin first used the term "battery" in 1749 when he was doing experiments with electricity using a set of linked Leyden jar capacitors. Franklin grouped a number of the jars into what he described as a "battery", using the military term for weapons functioning together. By multiplying the number of holding vessels, a stronger charge could be stored, and more power would be available on discharge.
Italian physicist Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800. This was a stack of copper and zinc plates, separated by brine-soaked paper disks, that could produce a steady current for a considerable length of time. Volta did not understand that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834.
Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. The Daniell cell, invented in 1836 by British chemist John Frederic Daniell, was the first practical source of electricity, becoming an industry standard and seeing widespread adoption as a power source for electrical telegraph networks. It consisted of a copper pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode.
These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile and potentially dangerous. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.
Batteries in vacuum tube devices historically used a wet cell for the "A" battery (to provide power to the filament) and a dry cell for the "B" battery (to provide the plate voltage).
Ongoing developments
Between 2010 and 2018, annual battery demand grew by 30%, reaching a total of 180 GWh in 2018. Conservatively, the growth rate is expected to be maintained at an estimated 25%, culminating in demand reaching 2600 GWh in 2030. In addition, cost reductions are expected to further increase the demand to as much as 3562 GWh.
Important reasons for this high rate of growth of the electric battery industry include the electrification of transport, and large-scale deployment in electricity grids, supported by decarbonization initiatives.
Distributed electric batteries, such as those used in battery electric vehicles (vehicle-to-grid) and in home energy storage, equipped with smart metering and connected to smart grids for demand response, are active participants in smart power supply grids.
New methods of reuse, such as echelon use of partly-used batteries, add to the overall utility of electric batteries, reduce energy storage costs, and also reduce pollution/emission impacts due to longer lives. In echelon use of batteries, vehicle electric batteries that have their battery capacity reduced to less than 80%, usually after service of 5–8 years, are repurposed for use as backup supply or for renewable energy storage systems.
Grid scale energy storage envisages the large-scale use of batteries to collect and store energy from the grid or a power plant and then discharge that energy at a later time to provide electricity or other grid services when needed. Grid scale energy storage (either turnkey or distributed) are important components of smart power supply grids.
Chemistry and principles
Batteries convert chemical energy directly to electrical energy. In many cases, the electrical energy released is the difference in the cohesive or bond energies of the metals, oxides, or molecules undergoing the electrochemical reaction. For instance, energy can be stored in Zn or Li, which are high-energy metals because they are not stabilized by d-electron bonding, unlike transition metals. Batteries are designed so that the energetically favorable redox reaction can occur only when electrons move through the external part of the circuit.
A battery consists of some number of voltaic cells. Each cell consists of two half-cells connected in series by a conductive electrolyte containing metal cations. One half-cell includes electrolyte and the negative electrode, the electrode to which anions (negatively charged ions) migrate; the other half-cell includes electrolyte and the positive electrode, to which cations (positively charged ions) migrate. Cations are reduced (electrons are added) at the cathode, while metal atoms are oxidized (electrons are removed) at the anode. Some cells use different electrolytes for each half-cell; then a separator is used to prevent mixing of the electrolytes while allowing ions to flow between half-cells to complete the electrical circuit.
Each half-cell has an electromotive force (emf, measured in volts) relative to a standard. The net emf of the cell is the difference between the emfs of its half-cells. Thus, if the electrodes have emfs E1 and E2, then the net emf is E2 − E1; in other words, the net emf is the difference between the reduction potentials of the half-reactions. For example, in the Daniell cell the copper half-cell (+0.34 V) and the zinc half-cell (−0.76 V) give a net emf of about 1.10 V.
The electrical driving force across the terminals of a cell is known as the terminal voltage (difference) and is measured in volts. The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance, the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage. An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage equal to its emf until exhausted, then dropping to zero. If such a cell maintained 1.5 volts and produced a charge of one coulomb then on complete discharge it would have performed 1.5 joules of work. In actual cells, the internal resistance increases under discharge and the open-circuit voltage also decreases under discharge. If the voltage and resistance are plotted against time, the resulting graphs typically are a curve; the shape of the curve varies according to the chemistry and internal arrangement employed.
The voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries, but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. The high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.
Almost any liquid or moist object that has enough ions to be electrically conductive can serve as the electrolyte for a cell. As a novelty or science demonstration, it is possible to insert two electrodes made of different metals into a lemon, potato, etc. and generate small amounts of electricity.
A voltaic pile can be made from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile generates a very low voltage but, when many are stacked in series, they can replace normal batteries for a short time.
Types
Primary and secondary batteries
Batteries are classified into primary and secondary forms:
Primary batteries are designed to be used until exhausted of energy then discarded. Their chemical reactions are generally not reversible, so they cannot be recharged. When the supply of reactants in the battery is exhausted, the battery stops producing current and is useless.
Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by applying electric current to the cell. This regenerates the original chemical reactants, so they can be used, recharged, and used again multiple times.
Some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the electrodes. Secondary batteries are not indefinitely rechargeable due to dissipation of the active materials, loss of electrolyte and internal corrosion.
Primary batteries, or primary cells, can produce current immediately on assembly. These are most commonly used in portable devices that have low current drain, are used only intermittently, or are used well away from an alternative power source, such as in alarm and communication circuits where other electric power is only intermittently available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and active materials may not return to their original forms. Battery manufacturers recommend against attempting to recharge primary cells. In general, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 ohms (75 Ω). Common types of disposable batteries include zinc–carbon batteries and alkaline batteries.
Secondary batteries, also known as secondary cells, or rechargeable batteries, must be charged before first use; they are usually assembled with active materials in the discharged state. Rechargeable batteries are (re)charged by applying electric current, which reverses the chemical reactions that occur during discharge/use. Devices to supply the appropriate current are called chargers. The oldest form of rechargeable battery is the lead–acid battery, which are widely used in automotive and boating applications. This technology contains liquid electrolyte in an unsealed container, requiring that the battery be kept upright and the area be well ventilated to ensure safe dispersal of the hydrogen gas it produces during overcharging. The lead–acid battery is relatively heavy for the amount of electrical energy it can supply. Its low manufacturing cost and its high surge current levels make it common where its capacity (over approximately 10 Ah) is more important than weight and handling issues. A common application is the modern car battery, which can, in general, deliver a peak current of 450 amperes.
Composition
Many types of electrochemical cells have been produced, with varying chemical processes and designs, including galvanic cells, electrolytic cells, fuel cells, flow cells and voltaic piles.
A wet cell battery has a liquid electrolyte. Other names are flooded cell, since the liquid covers all internal parts or vented cell, since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanche cell, Grove cell, Bunsen cell, Chromic acid cell, Clark cell, and Weston cell. The Leclanche cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunication or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells. Molten salt batteries are primary or secondary batteries that use a molten salt as electrolyte. They operate at high temperatures and must be well insulated to retain heat.
A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.
A reserve battery can be stored unassembled (unactivated and supplying no power) for a long period (perhaps years). When the battery is needed, then it is assembled (e.g., by adding electrolyte); once assembled, the battery is charged and ready to work. For example, a battery for an electronic artillery fuze might be activated by the impact of firing a gun. The acceleration breaks a capsule of electrolyte that activates the battery and powers the fuze's circuits. Reserve batteries are usually designed for a short service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water.
On 28 February 2017, the University of Texas at Austin issued a press release about a new type of solid-state battery, developed by a team led by lithium-ion battery inventor John Goodenough, "that could lead to safer, faster-charging, longer-lasting rechargeable batteries for handheld mobile devices, electric cars and stationary energy storage". The solid-state battery is also said to have "three times the energy density", increasing its useful life in electric vehicles, for example. It should also be more ecologically sound since the technology uses less expensive, earth-friendly materials such as sodium extracted from seawater. They also have much longer life.
Sony has developed a biological battery that generates electricity from sugar in a way that is similar to the processes observed in living organisms. The battery generates electricity through the use of enzymes that break down carbohydrates.
The sealed valve regulated lead–acid battery (VRLA battery) is popular in the automotive industry as a replacement for the lead–acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life. VRLA batteries immobilize the electrolyte. The two types are:
Gel batteries (or "gel cell") use a semi-solid electrolyte.
Absorbed Glass Mat (AGM) batteries absorb the electrolyte in a special fiberglass matting.
Other portable rechargeable batteries include several sealed "dry cell" types, that are useful in applications such as mobile phones and laptop computers. Cells of this type (in order of increasing power density and cost) include nickel–cadmium (NiCd), nickel–zinc (NiZn), nickel–metal hydride (NiMH), and lithium-ion (Li-ion) cells. Li-ion has by far the highest share of the dry cell rechargeable market. NiMH has replaced NiCd in most applications due to its higher capacity, but NiCd remains in use in power tools, two-way radios, and medical equipment.
In the 2000s, developments include batteries with embedded electronics such as USBCELL, which allows charging an AA battery through a USB connector, nanoball batteries that allow for a discharge rate about 100x greater than current batteries, and smart battery packs with state-of-charge monitors and battery protection circuits that prevent damage on over-discharge. Low self-discharge (LSD) allows secondary cells to be charged prior to shipping.
Lithium–sulfur batteries were used on the longest and highest solar-powered flight.
Consumer and industrial grades
Batteries of all types are manufactured in consumer and industrial grades. Costlier industrial-grade batteries may use chemistries that provide higher power-to-size ratio, have lower self-discharge and hence longer life when not in use, more resistance to leakage and, for example, ability to handle the high temperature and humidity associated with medical autoclave sterilization.
Combination and management
Standard-format batteries are inserted into a battery holder in the device that uses them. When a device does not use standard-format batteries, the cells are typically combined into a custom battery pack which holds multiple batteries in addition to features such as a battery management system and battery isolator, which ensure that the batteries within are charged and discharged evenly.
Sizes
Primary batteries readily available to consumers range from tiny button cells used for electric watches, to the No. 6 cell used for signal circuits or other long duration applications. Secondary cells are made in very large sizes; very large batteries can power a submarine or stabilize an electrical grid and help level out peak loads.
As of 2017, the world's largest battery was built in South Australia by Tesla. It can store 129 MWh. A battery in Hebei Province, China, which can store 36 MWh of electricity was built in 2013 at a cost of $500 million. Another large battery, composed of Ni–Cd cells, was in Fairbanks, Alaska. It covered an area bigger than a football pitch and weighed 1,300 tonnes. It was manufactured by ABB to provide backup power in the event of a blackout. The battery can provide 40 MW of power for up to seven minutes. Sodium–sulfur batteries have been used to store wind power. A 4.4 MWh battery system that can deliver 11 MW for 25 minutes stabilizes the output of the Auwahi wind farm in Hawaii.
Comparison
Many important cell properties, such as voltage, energy density, flammability, available cell constructions, operating temperature range and shelf life, are dictated by battery chemistry.
Performance, capacity and discharge
A battery's characteristics may vary over load cycle, over charge cycle, and over lifetime due to many factors including internal chemistry, current drain, and temperature. At low temperatures, a battery cannot deliver as much power. As such, in cold climates, some car owners install battery warmers, which are small electric heating pads that keep the car battery warm.
A battery's capacity is the amount of electric charge it can deliver at a voltage that does not drop below the specified terminal voltage. The more electrode material contained in the cell, the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, although they develop the same open-circuit voltage. Capacity is usually stated in ampere-hours (A·h) (mAh for small batteries). The rated capacity of a battery is usually expressed as the product of 20 hours multiplied by the current that a new battery can consistently supply for 20 hours at room temperature, while remaining above a specified terminal voltage per cell. For example, a battery rated at 100 A·h can deliver 5 A over a 20-hour period at room temperature. The fraction of the stored charge that a battery can deliver depends on multiple factors, including battery chemistry, the rate at which the charge is delivered (current), the required terminal voltage, the storage period, ambient temperature and other factors.
The higher the discharge rate, the lower the capacity. The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law:
$C_p = I^k t$
where
$C_p$ is the capacity when discharged at a rate of 1 amp.
$I$ is the current drawn from battery (A).
$t$ is the amount of time (in hours) that a battery can sustain.
$k$ is a constant around 1.3.
A worked sketch of this relationship follows.
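A minimal Python sketch of Peukert's law rearranged for the sustainable discharge time (the capacity and exponent values are illustrative assumptions):

```python
# Peukert's law sketch: how long a lead-acid battery sustains a given
# current. The capacity and exponent below are illustrative assumptions.

def discharge_time_hours(c_p, current_a, k=1.3):
    """t = C_p / I**k  (Peukert's law rearranged for time)."""
    return c_p / (current_a ** k)

# A battery with C_p = 100 (its capacity at a 1 A discharge rate):
print(discharge_time_hours(100, 1))    # 100.0 hours at 1 A
print(discharge_time_hours(100, 10))   # ~5.0 hours at 10 A, not 10 hours
```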
Charged batteries (rechargeable or disposable) lose charge by internal self-discharge over time, even when not in use, due to the presence of generally irreversible side reactions that consume charge carriers without producing current. The rate of self-discharge depends upon battery chemistry and construction, typically taking from months to years for significant loss. When batteries are recharged, additional side reactions reduce capacity for subsequent discharges. After enough recharges, in essence all capacity is lost and the battery stops producing power. Internal energy losses and limitations on the rate that ions pass through the electrolyte cause battery efficiency to vary. Above a minimum threshold, discharging at a low rate delivers more of the battery's capacity than at a higher rate. Installing batteries with varying A·h ratings changes operating time, but not device operation unless load limits are exceeded. High-drain loads such as digital cameras can reduce the total capacity of rechargeable or disposable batteries. For example, a battery rated at 2 A·h for a 10- or 20-hour discharge would not sustain a current of 1 A for a full two hours as its stated capacity suggests.
The C-rate is a measure of the rate at which a battery is being charged or discharged. It is defined as the current through the battery divided by the theoretical current draw under which the battery would deliver its nominal rated capacity in one hour. It has the units h−1. Because of internal resistance loss and the chemical processes inside the cells, a battery rarely delivers nameplate rated capacity in only one hour. Typically, maximum capacity is found at a low C-rate, and charging or discharging at a higher C-rate reduces the usable life and capacity of a battery. Manufacturers often publish datasheets with graphs showing capacity versus C-rate curves. C-rate is also used as a rating on batteries to indicate the maximum current that a battery can safely deliver in a circuit. Standards for rechargeable batteries generally rate the capacity and charge cycles over a 4-hour (0.25C), 8 hour (0.125C) or longer discharge time. Types intended for special purposes, such as in a computer uninterruptible power supply, may be rated by manufacturers for discharge periods much less than one hour (1C) but may suffer from limited cycle life.
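A minimal Python sketch of the C-rate definition, with an assumed 2 A·h example capacity:

```python
# C-rate sketch: current relative to the one-hour-discharge current.
# The 2 Ah capacity below is an assumed example value.

def c_rate(current_a, capacity_ah):
    """C-rate = I / (capacity / 1 h); carries units of 1/h."""
    return current_a / capacity_ah

print(c_rate(1.0, 2.0))   # 0.5 -> a 0.5C discharge (nominally 2 h)
print(c_rate(4.0, 2.0))   # 2.0 -> a 2C discharge (nominally 30 min)
```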
In 2009 experimental lithium iron phosphate () battery technology provided the fastest charging and energy delivery, discharging all its energy into a load in 10 to 20 seconds. In 2024 a prototype battery for electric cars that could charge from 10% to 80% in five minutes was demonstrated, and a Chinese company claimed that car batteries it had introduced charged 10% to 80% in 10.5 minutes—the fastest batteries available—compared to Tesla's 15 minutes to half-charge.
Lifespan and endurance
Battery life (or lifetime) has two meanings for rechargeable batteries but only one for non-chargeables. It can be used to describe the length of time a device can run on a fully charged battery—this is also unambiguously termed "endurance". For a rechargeable battery it may also be used for the number of charge/discharge cycles possible before the cells fail to operate satisfactorily—this is also termed "lifespan". The term shelf life is used to describe how long a battery will retain its performance between manufacture and use. Available capacity of all batteries drops with decreasing temperature. In contrast to most of today's batteries, the Zamboni pile, invented in 1812, offers a very long service life without refurbishment or recharge, although it can supply very little current (nanoamps). The Oxford Electric Bell has been ringing almost continuously since 1840 on its original pair of batteries, thought to be Zamboni piles.
Disposable batteries typically lose 8–20% of their original charge per year when stored at room temperature (20–30 °C). This is known as the "self-discharge" rate, and is due to non-current-producing "side" chemical reactions that occur within the cell even when no load is applied. The rate of side reactions is reduced for batteries stored at lower temperatures, although some can be damaged by freezing; storing batteries in a refrigerator does not meaningfully prolong shelf life and risks condensation damage. Old rechargeable batteries self-discharge more rapidly than disposable alkaline batteries, especially nickel-based batteries; a freshly charged nickel cadmium (NiCd) battery loses 10% of its charge in the first 24 hours, and thereafter discharges at a rate of about 10% a month. However, newer low self-discharge nickel–metal hydride (NiMH) batteries and modern lithium designs display a lower self-discharge rate (but still higher than for primary batteries).
The active material on the battery plates changes chemical composition on each charge and discharge cycle; active material may be lost due to physical changes of volume, further limiting the number of times the battery can be recharged. Most nickel-based batteries are partially discharged when purchased, and must be charged before first use. Newer NiMH batteries are ready to be used when purchased, and have only 15% discharge in a year.
Some deterioration occurs on each charge–discharge cycle. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material detaches from the electrodes. Low-capacity NiMH batteries (1,700–2,000 mA·h) can be charged some 1,000 times, whereas high-capacity NiMH batteries (above 2,500 mA·h) last about 500 cycles. NiCd batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values. Fast charging increases component changes, shortening battery lifespan. If a charger cannot detect when the battery is fully charged then overcharging is likely, damaging it.
NiCd cells, if used in a particular repetitive manner, may show a decrease in capacity called "memory effect". The effect can be avoided with simple practices. NiMH cells, although similar in chemistry, suffer less from memory effect.
Automotive lead–acid rechargeable batteries must endure stress due to vibration, shock, and temperature range. Because of these stresses and sulfation of their lead plates, few automotive batteries last beyond six years of regular use. Automotive starting (SLI: Starting, Lighting, Ignition) batteries have many thin plates to maximize current. In general, the thicker the plates the longer the life. They are typically discharged only slightly before recharge. "Deep-cycle" lead–acid batteries such as those used in electric golf carts have much thicker plates to extend longevity. The main benefit of the lead–acid battery is its low cost; its main drawbacks are large size and weight for a given capacity and voltage. Lead–acid batteries should never be discharged to below 20% of their capacity, because internal resistance will cause heat and damage when they are recharged. Deep-cycle lead–acid systems often use a low-charge warning light or a low-charge power cut-off switch to prevent the type of damage that will shorten the battery's life.
Battery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer, which slows the side reactions. Such storage can extend the life of alkaline batteries by about 5%; rechargeable batteries can hold their charge much longer, depending upon type. To reach their maximum voltage, batteries must be returned to room temperature; discharging an alkaline battery at 250 mA at 0 °C is only half as efficient as at 20 °C. Alkaline battery manufacturers such as Duracell do not recommend refrigerating batteries.
Hazards
A battery explosion is generally caused by misuse or malfunction, such as attempting to recharge a primary (non-rechargeable) battery, or a short circuit.
When a battery is recharged at an excessive rate, an explosive gas mixture of hydrogen and oxygen may be produced faster than it can escape from within the battery (e.g. through a built-in vent), leading to pressure build-up and eventual bursting of the battery case. In extreme cases, battery chemicals may spray violently from the casing and cause injury. An expert summary of the problem indicates that this type uses "liquid electrolytes to transport lithium ions between the anode and the cathode. If a battery cell is charged too quickly, it can cause a short circuit, leading to explosions and fires". Car batteries are most likely to explode when a short circuit generates very large currents. Such batteries produce hydrogen, which is very explosive, when they are overcharged (because of electrolysis of the water in the electrolyte). During normal use, the amount of overcharging is usually very small and generates little hydrogen, which dissipates quickly. However, when "jump starting" a car, the high current can cause the rapid release of large volumes of hydrogen, which can be ignited explosively by a nearby spark, e.g. when disconnecting a jumper cable.
Overcharging (attempting to charge a battery beyond its electrical capacity) can also lead to a battery explosion, in addition to leakage or irreversible damage. It may also cause damage to the charger or device in which the overcharged battery is later used.
Disposing of a battery via incineration may cause an explosion as steam builds up within the sealed case.
Many battery chemicals are corrosive, poisonous or both. If leakage occurs, either spontaneously or through accident, the chemicals released may be dangerous. For example, disposable batteries often use a zinc "can" both as a reactant and as the container to hold the other reagents. If this kind of battery is over-discharged, the reagents can emerge through the cardboard and plastic that form the remainder of the container. The active chemical leakage can then damage or disable the equipment that the batteries power. For this reason, many electronic device manufacturers recommend removing the batteries from devices that will not be used for extended periods of time.
Many types of batteries employ toxic materials such as lead, mercury, and cadmium as an electrode or electrolyte. When each battery reaches end of life it must be disposed of to prevent environmental damage. Batteries are one form of electronic waste (e-waste). E-waste recycling services recover toxic substances, which can then be used for new batteries. Of the nearly three billion batteries purchased annually in the United States, about 179,000 tons end up in landfills across the country.
Batteries may be harmful or fatal if swallowed. Small button cells can be swallowed, in particular by young children. While in the digestive tract, the battery's electrical discharge may lead to tissue damage; such damage is occasionally serious and can lead to death. Ingested disk batteries do not usually cause problems unless they become lodged in the gastrointestinal tract. The most common place for disk batteries to become lodged is the esophagus, resulting in clinical sequelae. Batteries that successfully traverse the esophagus are unlikely to lodge elsewhere. The likelihood that a disk battery will lodge in the esophagus is a function of the patient's age and battery size. Older children do not have problems with batteries smaller than 21–23 mm. Liquefaction necrosis may occur because sodium hydroxide is generated by the current produced by the battery (usually at the anode). Perforation has occurred as rapidly as 6 hours after ingestion.
Some battery manufacturers have added a bad taste to batteries to discourage swallowing.
Legislation and regulation
Legislation around electric batteries includes such topics as safe disposal and recycling.
In the United States, the Mercury-Containing and Rechargeable Battery Management Act of 1996 banned the sale of mercury-containing batteries, enacted uniform labeling requirements for rechargeable batteries and required that rechargeable batteries be easily removable. California and New York City prohibit the disposal of rechargeable batteries in solid waste. The rechargeable battery industry operates nationwide recycling programs in the United States and Canada, with dropoff points at local retailers.
The Battery Directive of the European Union has similar requirements, in addition to requiring increased recycling of batteries and promoting research on improved battery recycling methods. In accordance with this directive all batteries to be sold within the EU must be marked with the "collection symbol" (a crossed-out wheeled bin). This must cover at least 3% of the surface of prismatic batteries and 1.5% of the surface of cylindrical batteries. All packaging must be marked likewise.
In response to reported accidents and failures, occasionally ignition or explosion, recalls of devices using lithium-ion batteries have become more common in recent years.
On 9 December 2022, the European Parliament reached an agreement to force, from 2026, manufacturers to design all electrical appliances sold in the EU (and not used predominantly in wet conditions) so that consumers can easily remove and replace batteries themselves.
See also
Battery simulator
Nanowire battery
Search for the Super Battery
References
Bibliography
Ch. 21 (pp. 662–695) is on electrochemistry.
Chs. 28–31 (pp. 879–995) contain information on electric potential.
Chs. 8–9 (pp. 336–418) have more information on batteries.
Turner, James Morton. Charged: A History of Batteries and Lessons for a Clean Energy Future (University of Washington Press, 2022). online review
External links
Non-rechargeable batteries (archived 22 October 2013)
HowStuffWorks: How batteries work
Other Battery Cell Types
DoITPoMS Teaching and Learning Package- "Batteries"
Italian inventions
Electric power
Consumer electronics
18th-century inventions | Electric battery | [
"Physics",
"Engineering"
] | 7,370 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
19,174,753 | https://en.wikipedia.org/wiki/Mueller%E2%80%93Hinton%20agar | Mueller Hinton agar is a type of growth medium used in microbiology to culture bacterial isolates and test their susceptibility to antibiotics. The medium was first developed in 1941 by John Howard Mueller and Jane Hinton, microbiologists working at Harvard University. Mueller Hinton agar is made up of several components, including beef extract, acid hydrolysate of casein, and starch, as well as agar to solidify the mixture. The composition of Mueller Hinton agar can vary depending on the manufacturer and the intended use, but the medium is generally nutrient-rich and free of inhibitors that could interfere with bacterial growth.
Mueller Hinton agar is commonly used in the disk diffusion method, a simple and widely used method for testing the susceptibility of bacterial isolates to antibiotics. In this method, small disks impregnated with different antibiotics are placed on the surface of the agar, and the zone of inhibition around each disk is measured to determine the susceptibility of the bacterial isolate to that antibiotic. Mueller Hinton agar is particularly useful for testing a wide range of antibiotics, as it has a low content of calcium and magnesium ions, which can interfere with the activity of certain antibiotics. For example, MH agar may be used in the laboratory for the rapid presumptive identification of C. albicans, as an alternative to the germ tube test (Mattie. As, 2014). The medium is also free of inhibitors that could interfere with bacterial growth, making it a reliable and consistent substrate for bacterial cultures.
The composition of Mueller Hinton agar can affect the growth characteristics of bacterial isolates, as well as their response to antibiotics. For example, variations in the pH of the medium can affect the activity of certain antibiotics, and the presence of certain nutrients can promote the growth of specific bacterial species. Moreover, careful selection and preparation of Mueller Hinton agar is important for accurate microbiological assays. The use of Mueller Hinton agar has been critical in the development of antibiotics and in the study of antibiotic resistance.
Mueller–Hinton agar is a microbiological growth medium that is commonly used for antibiotic susceptibility testing, specifically disk diffusion tests. It is also used to isolate and maintain Neisseria and Moraxella species.
It typically contains:
2.0 g beef extract
17.5 g casein hydrolysate
1.5 g starch
17.0 g agar
1 liter of distilled water.
pH adjusted to neutral at 25 °C.
Five percent sheep's blood and nicotinamide adenine dinucleotide may also be added when susceptibility testing is done on Streptococcus and Campylobacter species.
It has a few properties that make it excellent for antibiotic susceptibility testing. First, it is a nonselective, nondifferential medium, meaning that almost all organisms plated on it will grow. Second, it contains starch, which absorbs toxins released by bacteria so that they cannot interfere with the antibiotics. Third, it is a loose agar, which allows better diffusion of the antibiotics than most other plates; better diffusion leads to a truer zone of inhibition.
Mueller–Hinton agar was co-developed by a microbiologist John Howard Mueller and a veterinary scientist Jane Hinton at Harvard University as a culture for gonococcus and meningococcus. They co-published the method in 1941.
References
Microbiological media
Cell culture media | Mueller–Hinton agar | [
"Biology"
] | 744 | [
"Microbiological media",
"Microbiology equipment"
] |
7,817,455 | https://en.wikipedia.org/wiki/Pre-charge | Pre-charge of the powerline voltages in a high voltage DC application is a preliminary mode which limits the inrush current during the power up procedure.
A high-voltage system with a large capacitive load can be exposed to high electric current during initial turn-on. This current, if not limited, can cause considerable stress or damage to the system components. In some applications, the occasion to activate the system is a rare occurrence, such as in commercial utility power distribution. In other systems such as vehicle applications, pre-charge will occur with each use of the system, multiple times per day. Precharging is implemented to increase the lifespan of electronic components and increase reliability of the high voltage system.
Background: inrush currents into capacitors
Inrush currents into capacitive components are a key concern in power-up stress to components. When DC input power is applied to a capacitive load, the step response of the voltage input will cause the input capacitor to charge. The capacitor charging starts with an inrush current and ends with an exponential decay down to the steady state condition. When the magnitude of the inrush peak is very large compared to the maximum rating of the components, then component stress is to be expected.
The current into a capacitor is known to be $I = C\,(dV/dt)$: the peak inrush current depends upon the capacitance C and the rate of change of the voltage (dV/dt). The inrush current will increase as the capacitance value increases, and the inrush current will increase as the voltage of the power source increases. This second parameter is of primary concern in high voltage power distribution systems. By their nature, high voltage power sources will deliver high voltage into the distribution system. Capacitive loads will then be subject to high inrush currents upon power-up. The stress to the components must be understood and minimized.
The objective of a pre-charge function is to limit the magnitude of the inrush current into capacitive loads during power-up. This may take several seconds depending on the system. In general, higher voltage systems benefit from longer pre-charge times during power-up.
Consider an example where a high voltage source powers up a typical electronics control unit which has an internal power supply with 11000 μF input capacitance. When powered from a 28 V source, the inrush current into the electronics unit would approach 31 amperes in 10 milliseconds. If that same circuit is activated by a 610 V source, then the inrush current would approach 670 A in 10 milliseconds. It is wise not to allow unlimited inrush currents from high voltage power distribution system activation into capacitive loads: instead the inrush current should be controlled to avoid power-up stress to components.
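These figures follow directly from I = C·dV/dt; a quick check in Python, using the capacitance and rise time from the example above:

```python
# Checking the inrush figures from the example above (I = C * dV/dt).

C_FARADS = 11_000e-6   # 11,000 uF input capacitance
DT_S = 0.010           # 10 ms voltage rise time

for v_source in (28.0, 610.0):
    i_peak = C_FARADS * v_source / DT_S
    print(f"{v_source:>5} V source -> ~{i_peak:.0f} A inrush")
# 28 V -> ~31 A; 610 V -> ~671 A, matching the figures in the text.
```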
Definition of a pre-charge function
The functional requirement of the high voltage pre-charge circuit is to minimize the peak current out from the power source by slowing down the dV/dt of the input power voltage such that a new "pre-charge mode" is created. The inductive loads on the distribution system must be switched off during the pre-charge mode, due to the dI/dt dependency. While pre-charging, the system voltage will rise slowly and controllably with power-up current never exceeding the maximum allowed value. As the circuit voltage approaches near steady state, the pre-charge function is complete. Normal operation of a pre-charge circuit is to terminate pre-charge mode when the circuit voltage is 90% or 95% of the operating voltage. Upon completion of pre-charging, the pre-charge resistance is switched out of the power supply circuit and returns to a low impedance power source for normal mode. The high voltage loads are then powered up sequentially.
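A minimal sizing sketch, assuming a simple series pre-charge resistor and treating the load as a plain RC circuit (the component values are illustrative assumptions):

```python
# Sizing sketch for a resistive pre-charge circuit (RC charging).
# The resistor and capacitor values are assumptions for illustration.

import math

def precharge_time(r_ohms, c_farads, target_fraction=0.95):
    """Time for an RC circuit to reach a fraction of the source voltage:
    V(t) = Vs * (1 - exp(-t / RC))  =>  t = -RC * ln(1 - fraction)."""
    return -r_ohms * c_farads * math.log(1.0 - target_fraction)

# 100 ohm pre-charge resistor, 11,000 uF load capacitance:
print(precharge_time(100.0, 11_000e-6))  # ~3.3 s to reach 95%
```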
The simplest inrush-current limiting system, used in many consumer electronics devices, is an NTC thermistor. When cold, its high resistance allows a small current to pre-charge the reservoir capacitor. After it warms up, its low resistance passes the working current more efficiently.
Many active power factor correction systems also include soft start.
If the example circuit from before is used with a pre-charge circuit which limits the dV/dT to less than 600 volts per second, then the inrush current will be reduced from 670 amperes to 7 amperes. This is a "kinder and gentler" way to activate a high voltage DC power distribution system.
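One common way to obtain such a limit is a series pre-charge resistor, which gives the classic RC charging curve. The sketch below sizes such a resistor for the 610 V example; the resistor value and the 95% completion criterion are illustrative assumptions, not figures from a specific design.

```python
import math

# Sketch of sizing a series pre-charge resistor for the same example.
# Only the 610 V / 11000 uF / ~7 A figures come from the text; the
# 95% cut-off is an illustrative assumption.

V = 610.0       # bus voltage, volts
C = 11_000e-6   # load capacitance, farads
I_max = 7.0     # allowed peak current, amperes

R = V / I_max                      # peak current flows at t = 0
tau = R * C                        # RC time constant
t_95 = -tau * math.log(1 - 0.95)   # time to reach 95% of bus voltage

print(f"R ~ {R:.0f} ohm, tau ~ {tau:.2f} s, 95% in ~ {t_95:.1f} s")
# -> roughly 87 ohm, tau ~ 0.96 s, about 2.9 s to 95%, consistent
#    with the "several seconds" pre-charge duration mentioned above.
```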
Benefits of pre-charging
The primary benefit of avoiding component stress during power-up is to realize a long system operating life due to reliable and long lasting components.
There are additional benefits: pre-charging reduces the electrical hazards which may occur when system integrity is compromised by hardware damage or failure. Activating the high voltage DC system into a short circuit, a ground fault, or unsuspecting personnel and their equipment can have undesired effects. Arc flash will be minimized if a pre-charge function slows down the activation time of a high voltage power-up. A slow pre-charge also limits the voltage build-up across a faulty circuit while the system diagnostics come on-line, allowing a diagnostic shutdown before the fault develops to worst-case proportions.
In cases where unlimited inrush current is large enough to trip the source circuit breaker, a slow precharge may even be required to avoid the nuisance trip.
Pre-charging is commonly used in battery electric vehicle applications. The current to the motor is regulated by a controller that employs large capacitors in its input circuit. Such systems typically have contactors (a high-current relay) to disable the system during inactive periods and to act as an emergency disconnect should the motor current regulator fail in an active state. Without pre-charge the high voltage across the contactors and inrush current can cause a brief arc which will cause pitting of the contacts. Pre-charging the controller input capacitors (typically to 90 to 95 percent of applied battery voltage) eliminates the pitting problem. The current to maintain the charge is so low that some systems apply the pre-charge at all times other than when charging batteries, while more complex systems apply pre-charge as part of the starting sequence and will defer main contactor closure until the pre-charge voltage level is detected as sufficiently high.
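A minimal sketch of such a starting sequence is shown below. All function names are hypothetical placeholders standing in for contactor drivers and voltage sensing, and the thresholds are typical values rather than a particular vehicle's specification.

```python
import time

# Hypothetical sketch of the EV starting sequence described above.
# close_precharge(), read_bus_voltage(), etc. are placeholder
# callables, not a real vehicle API.

PRECHARGE_TARGET = 0.95   # fraction of battery voltage (90-95% typical)
TIMEOUT_S = 5.0           # assumed fault cut-off

def start_high_voltage_system(battery_voltage, read_bus_voltage,
                              close_precharge, close_main, open_precharge):
    close_precharge()                        # inrush limited by resistor
    deadline = time.monotonic() + TIMEOUT_S
    while read_bus_voltage() < PRECHARGE_TARGET * battery_voltage:
        if time.monotonic() > deadline:      # bus not rising: probable
            open_precharge()                 # short or ground fault
            raise RuntimeError("pre-charge failed, aborting start-up")
        time.sleep(0.01)
    close_main()      # contacts close with almost no voltage across them
    open_precharge()  # resistor out of circuit; normal low-impedance mode
```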
Applications in high voltage power systems
High-voltage direct current
Battery Electric Vehicles
Hybrid Vehicle
Future Combat System
Motorized bicycle
Electric power-assist system
References
Electrical engineering
Electronic engineering
Electric power transmission systems
Electrical power control | Pre-charge | [
"Technology",
"Engineering"
] | 1,357 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
7,819,800 | https://en.wikipedia.org/wiki/Kfar%20Monash%20Hoard | The Kfar Monash Hoard is a hoard of metal objects dated to the Early Bronze Age (the third millennium BCE) found in the spring of 1962 by the agriculturalist Zvi Yizhar in Kfar Monash, Israel. Kfar Monash is located 3.3 km south-east of Tel Hefer (Tell Ishbar) in the Plain of Sharon or in modern terms 9 km/6 mi northeast of Netanya, which is roughly located along the Israeli coast between Netanya and Haifa.
The Monash Hoard consists of:
The crescentic axehead was found about five years later, roughly 200 m away.
As of June 2006, the Kfar Monash Hoard was on display in the Israel Museum.
Identification of the 800 Copper Plates
There have been conflicting ideas as to the purpose of the 800 copper plates. Although they have been assumed to be scales of armor from an Egyptian army unit, as proposed by archaeologist Shmuel Yeivin, recent reevaluations have disputed this claim. Archaeologist William A. Ward proposed that the scales were a means of barter or a reserve supply of metal from the Syro-Palestinian area. Ward arrived at this conclusion through several pieces of evidence: the scales were not attached to any jacket, body armor was generally not used by the Egyptians until the New Kingdom, copper was still very rare, and the plates were too thin for body armor.
2023 analysis
Several metal objects similar to those in the Kfar Monash hoard were found in this general area of the Levant. They were subject to metallurgical analysis, and generally dated to the Early Bronze Age. For example, objects from Ashkelon-Afridar, and from Tell esh-Shuna (the Jordan Valley) were seen as similar. Also the axes from early EB I Yiftah’el are seen as relevant.
Kfar Monash objects were also dated, based on typological considerations, to EB IB, similarly to the axes from Tel Beth Shean.
The study of the Kfar Monash hoard indicated that some of its objects were made of unalloyed copper. The source of this unalloyed copper was found likely to be in Wadi Feynan, in southern Jordan. Such unalloyed copper was apparently mainly used for the production of tools.
Other objects were made using a CuAsNi alloy. This is the copper-arsenic-nickel alloy that is especially characteristic of Chalcolithic period Arslantepe in Eastern Anatolia (the upper Euphrates region). Nevertheless, the adzes that were made of this alloy were determined to be of "an Egyptian type".
Objects from Arslantepe using such polymetallic ores are mainly ascribed to Level VIA (3400–3000 BCE), dating to the Uruk period.
References
Archaeological sites in Israel
Treasure troves of Asia
Archaeometallurgy | Kfar Monash Hoard | [
"Chemistry",
"Materials_science"
] | 618 | [
"Archaeometallurgy",
"Metallurgy"
] |
7,822,233 | https://en.wikipedia.org/wiki/Interface%20conditions%20for%20electromagnetic%20fields | Interface conditions describe the behaviour of electromagnetic fields; electric field, electric displacement field, and the magnetic field at the interface of two materials. The differential forms of these equations require that there is always an open neighbourhood around the point to which they are applied, otherwise the vector fields and H are not differentiable. In other words, the medium must be continuous[no need to be continuous][This paragraph need to be revised, the wrong concept of "continuous" need to be corrected]. On the interface of two different media with different values for electrical permittivity and magnetic permeability, that condition does not apply.
However, the interface conditions for the electromagnetic field vectors can be derived from the integral forms of Maxwell's equations.
Interface conditions for electric field vectors
Electric field strength

$$\mathbf{n}_{12} \times (\mathbf{E}_2 - \mathbf{E}_1) = \mathbf{0}$$

where:
$\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2.
Therefore, the tangential component of E is continuous across the interface.
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Outline of proof from Faraday's law
|-
|We begin with the integral form of Faraday's law:
Choose as a small square across the interface. Then, have the sides perpendicular to the interface shrink to infinitesimal length. The area of integration now looks like a line, which has zero area. In other words:
Since remains finite in this limit, the whole right hand side goes to zero. All that is left is:
Two of our sides are infinitesimally small, leaving only
Assuming we made our square small enough that E is roughly constant, its magnitude can be pulled out of the integral. As the remaining sides to our original loop, the in each region run in opposite directions, so we define one of them as the tangent unit vector and the other as
After dividing by l, and rearranging,
This argument works for any tangential direction. The difference in electric field dotted into any tangential vector is zero, meaning only the components of parallel to the normal vector can change between mediums. Thus, the difference in electric field vector is parallel to the normal vector. Two parallel vectors always have a cross product of zero.
|}
Electric displacement field

$$\mathbf{n}_{12} \cdot (\mathbf{D}_2 - \mathbf{D}_1) = \sigma_s$$

where:
$\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2.
$\sigma_s$ is the surface charge density between the media (unbounded charges only, not coming from polarization of the materials).
This can be deduced by using Gauss's law and similar reasoning as above.
Therefore, the normal component of D changes stepwise across the interface by an amount equal to the surface charge density. If there is no surface charge on the interface, the normal component of D is continuous.
Interface conditions for magnetic field vectors
For magnetic flux density

$$\mathbf{n}_{12} \cdot (\mathbf{B}_2 - \mathbf{B}_1) = 0$$

where:
$\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2.
Therefore, the normal component of B is continuous across the interface (the same in both media). (The tangential components are in the ratio of the permeabilities.)
For magnetic field strength

$$\mathbf{n}_{12} \times (\mathbf{H}_2 - \mathbf{H}_1) = \mathbf{j}_s$$

where:
$\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2.
$\mathbf{j}_s$ is the surface current density between the two media (unbounded current only, not coming from polarisation of the materials).
Therefore, the tangential component of H is discontinuous across the interface by an amount equal to the magnitude of the surface current density. The normal components of H in the two media are in the inverse ratio of the permeabilities, since the normal component of B is continuous.
Discussion according to the media on either side of the interface
If medium 1 & 2 are perfect dielectrics
There are neither charges nor surface currents at the interface, and so the tangential component of H and the normal component of D are both continuous.
If medium 1 is a perfect dielectric and medium 2 is a perfect metal
There are charges and surface currents at the interface, and so the tangential component of H and the normal component of D are not continuous.
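The two continuity statements for the charge-free dielectric case lend themselves to a quick numerical check. The sketch below, with arbitrary assumed permittivities and field values, builds the field on side 2 of such an interface from the field on side 1 and verifies both conditions; it is an illustration, not a general field solver.

```python
import numpy as np

# Toy check of the charge-free interface conditions stated above:
# tangential E and normal D are continuous. Field and permittivity
# values are arbitrary illustrative choices.

eps0 = 8.854e-12
eps1, eps2 = 2.0 * eps0, 5.0 * eps0   # assumed permittivities
n = np.array([0.0, 0.0, 1.0])         # unit normal, medium 1 -> 2

E1 = np.array([3.0, 0.0, 4.0])        # field in medium 1 (V/m)

# Decompose E1, then build E2 from the two continuity conditions.
E1_n = np.dot(E1, n) * n              # normal part
E1_t = E1 - E1_n                      # tangential part
E2 = E1_t + (eps1 / eps2) * E1_n      # E2_t = E1_t, D2_n = D1_n

assert np.allclose(np.cross(n, E2 - E1), 0)                    # tangential E
assert np.isclose(eps2 * np.dot(E2, n), eps1 * np.dot(E1, n))  # normal D
print("E2 =", E2)   # -> [3.  0.  1.6]
```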
Boundary conditions
The boundary conditions must not be confused with the interface conditions. For numerical calculations, the space in which the electromagnetic field is computed must be restricted to some boundaries. This is done by assuming conditions at the boundaries which are physically correct and numerically solvable in finite time. In some cases, the boundary conditions reduce to a simple interface condition. The most usual and simple example is a fully reflecting (electric wall) boundary, where the outer medium is considered as a perfect conductor. In some cases, it is more complicated: for example, reflection-less (i.e. open) boundaries are simulated as a perfectly matched layer or a magnetic wall, which do not reduce to a single interface.
See also
Maxwell's equations
References
Sources
Electromagnetism concepts
Boundary conditions | Interface conditions for electromagnetic fields | [
"Physics"
] | 948 | [
"Electromagnetism concepts"
] |
7,824,361 | https://en.wikipedia.org/wiki/Hardware-in-the-loop%20simulation | Hardware-in-the-loop (HIL) simulation, also known by various acronyms such as HiL, HITL, and HWIL, is a technique that is used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective testing platform by adding the complexity of the process-actuator system, known as a plant, to the test platform. The complexity of the plant under control is included in testing and development by adding a mathematical representation of all related dynamic systems. These mathematical representations are referred to as the "plant simulation". The embedded system to be tested interacts with this plant simulation.
How HIL works
HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation.
For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation:
Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw;
Dynamics of the brake system's hydraulic components;
Road characteristics.
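A toy, in-process version of this closed loop is sketched below for the anti-lock braking example: a crude one-wheel model stands in for the plant simulation and a simple slip rule stands in for the controller under test. All dynamics constants are invented for illustration; a real HIL rig runs the plant on real-time hardware and exchanges these signals electrically with an actual ECU.

```python
# Minimal illustration of the HIL loop described above: the plant
# model produces an emulated sensor value (wheel speed), the
# controller under test produces an actuator command (brake
# pressure), and the two are stepped together.

def plant_step(wheel, vehicle, brake_pressure, dt):
    """Crude one-wheel model: brake torque slows the wheel, tyre
    friction drags it back toward vehicle speed."""
    accel = 8.0 * (vehicle - wheel) - 60.0 * brake_pressure
    return max(wheel + accel * dt, 0.0)

def controller_under_test(wheel, vehicle):
    """Stand-in ABS logic: back off the brake when slip is large."""
    slip = (vehicle - wheel) / max(vehicle, 1e-6)
    return 0.2 if slip > 0.2 else 1.0   # commanded brake pressure

wheel = vehicle = 30.0   # m/s
dt = 0.01                # s
for _ in range(300):
    pressure = controller_under_test(wheel, vehicle)  # ECU output
    wheel = plant_step(wheel, vehicle, pressure, dt)  # emulated sensor
    vehicle = max(vehicle - 7.0 * dt, 0.0)            # crude vehicle decel
print(f"after 3 s: wheel {wheel:.1f} m/s, vehicle {vehicle:.1f} m/s")
```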
Uses
In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and testing efficiency is typically a formula that includes the following factors:
1. Cost
2. Duration
3. Safety
4. Feasibility
The cost of the approach should be a measure of the cost of all tools and effort. The duration of development and testing affects the time-to-market for a planned product. Safety factor and development duration are typically equated to a cost measure. Specific conditions that warrant the use of HIL simulation include the following:
Enhancing the quality of testing
Tight development schedules
High-burden-rate plant
Early process human factor development
Enhancing the quality of testing
Usage of HILs enhances the quality of the testing by increasing the scope of the testing.
Ideally, an embedded system would be tested against the real plant, but most of the time the real plant itself imposes limitations in terms of the scope of the testing. For example, testing an engine control unit as a real plant can create the following dangerous conditions for the test engineer:
Testing at or beyond the range of certain ECU parameters (e.g., engine parameters)
Testing and verification of the system at failure conditions
In the above-mentioned test scenarios, HIL provides the efficient control and safe environment where test or application engineer can focus on the functionality of the controller.
Tight development schedules
The tight development schedules associated with most new automotive, aerospace and defense programs do not allow embedded system testing to wait for a prototype to be available. In fact, most new development schedules assume that HIL simulation will be used in parallel with the development of the plant. For example, by the time a new automobile engine prototype is made available for control system testing, 95% of the engine controller testing will have been completed using HIL simulation.
The aerospace and defense industries are even more likely to impose a tight development schedule. Aircraft and land vehicle development programs are using desktop and HIL simulation to perform design, test, and integration in parallel.
High-burden-rate plant
In many cases, the plant is more expensive than a high fidelity, real-time simulator and therefore has a higher-burden rate. Therefore, it is more economical to develop and test while connected to a HIL simulator than the real plant. For jet engine manufacturers, HIL simulation is a fundamental part of engine development. The development of Full Authority Digital Engine Controllers (FADEC) for aircraft jet engines is an extreme example of a high-burden-rate plant. Each jet engine can cost millions of dollars. In contrast, a HIL simulator designed to test a jet engine manufacturer's complete line of engines may demand merely a tenth of the cost of a single engine.
Early process human factors development
HIL simulation is a key step in the process of developing human factors, a method of ensuring usability and system consistency using software ergonomics, human-factors research and design. For real-time technology, human-factors development is the task of collecting usability data from man-in-the-loop testing for components that will have a human interface.
An example of usability testing is the development of fly-by-wire flight controls. Fly-by-wire flight controls eliminate the mechanical linkages between the flight controls and the aircraft control surfaces. Sensors communicate the demanded flight response and then apply realistic force feedback to the fly-by-wire controls using motors. The behavior of fly-by-wire flight controls is defined by control algorithms. Changes in algorithm parameters can translate into more or less flight response from a given flight control input. Likewise, changes in the algorithm parameters can also translate into more or less force feedback for a given flight control input. The “correct” parameter values are a subjective measure. Therefore, it is important to get input from numerous man-in-the-loop tests to obtain optimal parameter values.
In the case of fly-by-wire flight controls development, HIL simulation is used to simulate human factors. The flight simulator includes plant simulations of aerodynamics, engine thrust, environmental conditions, flight control dynamics and more. Prototype fly-by-wire flight controls are connected to the simulator and test pilots evaluate flight performance given various algorithm parameters.
The alternative to HIL simulation for human factors and usability development is to place prototype flight controls in early aircraft prototypes and test for usability during flight test. This approach fails when measuring the four conditions listed above.
Cost: A flight test is extremely costly and therefore the goal is to minimize any development occurring with flight test.
Duration: Developing flight controls with flight test will extend the duration of an aircraft development program. Using HIL simulation, the flight controls may be developed well before a real aircraft is available.
Safety: Using flight test for the development of critical components such as flight controls has a major safety implication. Should errors be present in the design of the prototype flight controls, the result could be a crash landing.
Feasibility: It may not be possible to explore certain critical timings (e.g. sequences of user actions with millisecond precision) with real users operating a plant. Likewise for problematical points in parameter space that may not be easily reachable with a real plant but must be tested against the hardware in question.
Use in various disciplines
Automotive systems
In the context of automotive applications, "Hardware-in-the-loop simulation systems provide such a virtual vehicle for systems validation and verification." Since in-vehicle driving tests for evaluating performance and diagnostic functionalities of Engine Management Systems are often time-consuming, expensive and not reproducible, HIL simulators allow developers to validate new hardware and software automotive solutions, respecting quality requirements and time-to-market restrictions. In a typical HIL simulator, a dedicated real-time processor executes mathematical models which emulate engine dynamics. In addition, an I/O unit allows the connection of vehicle sensors and actuators (which usually present a high degree of non-linearity). Finally, the Electronic Control Unit (ECU) under test is connected to the system and stimulated by a set of vehicle maneuvers executed by the simulator. HIL simulation also offers a high degree of repeatability during the testing phase.
In the literature, several HIL-specific applications are reported, and simplified HIL simulators have been built for specific purposes. When testing a new ECU software release, for example, experiments can be performed in open loop, and therefore several engine dynamic models are no longer required. The strategy is restricted to the analysis of ECU outputs when excited by controlled inputs. In this case, a Micro HIL (MHIL) system offers a simpler and more economic solution. Since the burden of complex model processing is removed, a full-size HIL system is reduced to a portable device composed of a signal generator, an I/O board, and a console containing the actuators (external loads) to be connected to the ECU.
Radar
HIL simulation for radar systems have evolved from radar-jamming. Digital Radio Frequency Memory (DRFM) systems are typically used to create false targets to confuse the radar in the battlefield, but these same systems can simulate a target in the laboratory. This configuration allows for the testing and evaluation of the radar system, reducing the need for flight trials (for airborne radar systems) and field tests (for search or tracking radars), and can give an early indication to the susceptibility of the radar to electronic warfare (EW) techniques.
Robotics
Techniques for HIL simulation have been recently applied to the automatic generation of complex controllers for robots. A robot uses its own real hardware to extract sensation and actuation data, then uses this data to infer a physical simulation (self-model) containing aspects such as its own morphology as well as characteristics of the environment. Algorithms such as Back-to-Reality (BTR) and Estimation Exploration (EEA) have been proposed in this context.
Power systems
In recent years, HIL for power systems has been used for verifying the stability, operation, and fault tolerance of large-scale electrical grids. Current-generation real-time processing platforms have the capability to model large-scale power systems in real-time. This includes systems with more than 10,000 buses with associated generators, loads, power-factor correction devices, and network interconnections. These types of simulation platforms enable the evaluation and testing of large-scale power systems in a realistic emulated environment. Moreover, HIL for power systems has been used for investigating the integration of distributed resources, next-generation SCADA systems and power management units, and static synchronous compensator devices.
Offshore systems
In offshore and marine engineering, control systems and mechanical structures are generally designed in parallel. Testing the control systems is only possible after integration. As a result, many errors are found that have to be solved during the commissioning, with the risks of personal injuries, damaging equipment and delays. To reduce these errors, HIL simulation is gaining widespread attention. This is reflected by the adoption of HIL simulation in the Det Norske Veritas rules.
References
External links
Introduction to Hardware-in-the-Loop Simulation.
Embedded systems | Hardware-in-the-loop simulation | [
"Technology",
"Engineering"
] | 2,174 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
15,217,532 | https://en.wikipedia.org/wiki/CKLF%20%28gene%29 | Chemokine-like factor (CKLF) is a member of the CKLF-like MARVEL transmembrane domain-containing family of proteins that in humans is encoded by the CKLF gene. This gene is located on band 22.1 in the long (i.e. "q") arm of chromosome 16.
Isoforms
Through the process of alternative splicing, the CKLF gene encodes 4 CKLF protein isoforms, i.e. proteins made from different areas of the same gene. These isoforms are 1) CKLF1 and CKLF3 proteins that consist of 99 and 67 amino acids, respectively, and are secreted from their parent cells and 2) CKLF2 (which is the full-length product of the CKLF gene) and CKLF4 proteins which consist of 152 and 120 amino acids, respectively, and are located in the membranes of their parent cells.
CKLF1
CKLF1 is the first member of the CKLF-like MARVEL transmembrane domain-containing family of proteins to be defined and the most investigated of its four isoforms. Studies conducted in freshly isolated cells, cultured cells, animals, and tissue samples indicate that CKLF1 is a chemokine-like chemotactic factor that acts through the CCR4 receptors on human CD4+ Th2 lymphocytes, neutrophils, monocytes, macrophages, dendritic cells, and perhaps other CCR4-receptor bearing cells. Preliminary findings suggest that the actions of CKLF1 on these CCR4-bearing cells may contribute to the maturation of various tissues such as blood cells and skeletal muscle from their precursor cells and the regulation of allergic (e.g. asthma), autoimmune (e.g. rheumatoid arthritis and the antiphospholipid syndrome), and inflammatory (e.g. acute respiratory distress syndrome) disorders. Other studies have found that: 1) the benign fibrous skin tumor, keloids, had higher levels of CKLF1 and CKLF1 mRNA than nearby normal skin tissues; 2) CKLF1 levels were higher in ovarian carcinoma tissues than nearby normal ovary tissues and patients with higher levels of CKLF1 in their ovarian cancer tissues had a more aggressive cancer than patients with lower levels of the protein in their ovarian cancer tissues; and 3) the levels of CKLF1 protein were higher in cancerous than nearby normal liver tissues in patients with hepatocellular carcinoma (HCC) and patients with higher HCC tissue levels of CKLF1 had poorer overall survival times than patients with lower levels of this protein in their HCC tissues. These results suggest that high levels of CKLF1 promote the development and/or progression of these three neoplasms although further studies are required to further define these relationships and to determine if CKLF1 can be used as a marker for their severity and/or a therapeutic target for treating them.
CKLF2–4
Relatively little is known about the normal functions and pathological actions of the CKLF2, CKLF3, and CKLF4 isoforms.
References
Further reading
External links
Human proteins
Gene expression | CKLF (gene) | [
"Chemistry",
"Biology"
] | 666 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
15,217,608 | https://en.wikipedia.org/wiki/PHF20 | PHD finger protein 20 is a protein that in humans is encoded by the PHF20 gene.
References
Further reading
External links
Transcription factors | PHF20 | [
"Chemistry",
"Biology"
] | 28 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
15,218,518 | https://en.wikipedia.org/wiki/Institute%20of%20Environmental%20Sciences%20and%20Technology | The Institute of Environmental Sciences and Technology (IEST) is a non-profit, technical society where professionals who impact controlled environments connect, gain knowledge, receive advice, and work together to create industry best practices. The organization uniquely serves environmental test engineers, qualification engineers, cleanroom professionals, those who work in product testing and evaluation, and others who work across a variety of industries, including: acoustics, aerospace, automotive, biotechnology/bioscience, climatics, cleanroom operations/design/equipment/certification, dynamics, filtration, food processing, HVAC design, medical devices, nanotechnology, pharmaceutical, semiconductors/microelectronics, and shock/vibration. Information on ISO 14644 and ISO 14698 standards can be found through this organization.
Founded in 1953, the organization is headquartered in Schaumburg, Illinois. Its members are internationally recognized in the fields of environmental tests; contamination control; product reliability; and aerospace.
International standards
The organization is the Secretariat of ISO/TC 209: cleanroom and associated controlled environments. This committee writes the ISO 14644 standards. IEST is also a founding member of the ANSI-accredited US TAG to ISO/TC 229 - Nanotechnologies.
IEST has also revised such Federal Standards as FED-STD-209, MIL-STD-781, MIL-STD-810, and MIL-STD-1246 (now IEST-STD-1246E).
The IEST also distributes to the public all ISO 14644 and ISO 14698 standards.
Recommended practices
IEST publishes and disseminates up-to-date, reliable, technical information within each of its divisions known as IEST Recommended Practices. These Recommended Practices provide procedures based on peer-approved applications. These documents are then formulated by IEST Working Groups.
Contamination control recommended practices
IEST-RP-CC001.6: HEPA and ULPA Filters
IEST-RP-CC002.4: Unidirectional Flow Clean-Air Devices
IEST-RP-CC003.4: Garment System Considerations in Cleanrooms and Other Controlled Environments
IEST-RP-CC004.3: Evaluating Wiping Materials Used in Cleanrooms and Other Controlled Environments
IEST-RP-CC005.4: Gloves and Finger Cots Used in Cleanrooms and Other Controlled Environments
IEST-RP-CC006.3: Testing Cleanrooms
IEST-RP-CC007.3: Testing ULPA Filters
IEST-RP-CC008.2: High-Efficiency Gas-phase Adsorber Cells
IEST-RP-CC011.2: A Glossary of Terms and Definitions Relating to Contamination Control
IEST-RP-CC012.3: Considerations in Cleanroom Design
IEST-RP-CC013.3: Calibration Procedures and Guidelines for Selecting Equipment Used in Testing Cleanrooms and Other Controlled Environments
IEST-RP-CC014.2: Calibration and Characterization of Optical Airborne Particle Counters
IEST-RP-CC016.2: The Rate of Deposition of Nonvolatile Residue in Cleanrooms
IEST-RP-CC018.4: Cleanroom Housekeeping: Operating and Monitoring Procedures
IEST-RP-CC019.1: Qualifications for Organizations Engaged in the Testing and Certification of Cleanrooms and Clean-Air Devices
IEST-RP-CC020.2: Substrates and Forms of Documentation in Cleanrooms
IEST-RP-CC021.4: Testing HEPA and ULPA Filter Media
IEST-RP-CC022.2: Electrostatic Charge in Cleanrooms and Other Controlled Environments
IEST-RP-CC023.2: Microorganisms in Cleanrooms
IEST-RP-CC024.1: Measuring and Reporting Vibration in Microelectronics Facilities
IEST-RP-CC026.2: Cleanroom Operations
IEST-RP-CC027.2: Personnel Practices and Procedures in Cleanrooms and Controlled Environments
IEST-RP-CC028.1: Minienvironments
IEST-RP-CC029.1: Automotive Paint-Spray Applications
IEST-RP-CC031.3: Method of Characterizing Outgassed Organic Compounds from Cleanroom Materials and Components
IEST-RP-CC032.1: Flexible Packaging Materials for Use in Cleanrooms and Other Controlled Environments
IEST-RP-CC034.4: HEPA and ULPA Filter Leak Testing
IEST-G-CC035.1: Design Considerations for AMC Filtration Systems in Cleanrooms
IEST-CC036.1: Testing Fan Filter Units
IEST-RP-CC042.1: Sizing and Counting of Submicrometer Liquid-borne Particles Using Optical Discrete-Particle Counters
IEST-RP-CC044.1: Vacuum Cleaning Systems for Use in Cleanrooms and Other Controlled Environments
IEST-RP-CC046.1: Controlled Environments (Aerospace, Non-cleanroom)
IEST-RP-CC049.1: Controlled Environments for Regulated Industries
IEST-STD-CC1246: Product Cleanliness Levels: Applications, Requirements, and Determination
Nanotechnology recommended practices
IEST-RP-NANO200.1: Planning of Nanoscale Science and Technology Facilities: Guidelines for Design, Construction, and Start-Up
IEST-RP-NANO205.1: Nanotechnology Safety: Application of Prevention Through Design Principles to Nanotechnology Facilities
Design, test, and evaluation recommended practices
IEST-RP-DTE009.1: Vibration Shaker System Selection
IEST-RP-DTE011.2: Mechanical Shock and Vibration Transducer Selection
IEST-RP-DTE012.2: Handbook for Dynamic Data Acquisition and Analysis
IEST-RP-DTE019.1: Vibration Controller Selection
IEST-RP-DTE022.1: Multi-shaker Test and Control
IEST-RP-DTE026.1: Using MIL-STD-810F, 519 Gunfire
IEST-RP-DTE032.2: Pyroshock Testing Techniques
IEST-RP-DTE040.1: High-Intensity Acoustics Testing
IEST-RP-DTE046.1: Terms Commonly Used in the Digital Analysis of Dynamic Data
The History and Rationale of MIL-STD-810
Product reliability recommended practices
IEST-RP-PR001.2: Management and Technical Guidelines for the ESS Process
IEST-RP-PR003.1: HALT and HASS
Journal of the IEST
The online Journal of the IEST publishes peer-reviewed technical papers, with a per-article fee, and free TechTalk articles related to the fields of contamination control; design, test, and evaluation; and product reliability. The online Journal provides never-before-available access to an entire decade of technical articles and peer-reviewed technical papers on simulation, testing, control, current research, and teaching of environmental sciences and technologies. The Journal of the IEST is the official publication of IEST, the Institute of Environmental Sciences and Technology, of archival quality with continuous publication since 1958.
References
External links
Environmental organizations based in the United States
Cleanroom technology
Environmental testing | Institute of Environmental Sciences and Technology | [
"Chemistry",
"Engineering"
] | 1,611 | [
"Environmental testing",
"Reliability engineering",
"Cleanroom technology"
] |
15,219,150 | https://en.wikipedia.org/wiki/TRIM33 | E3 ubiquitin-protein ligase TRIM33, also known as (ectodermin homolog and tripartite motif-containing 33) is a protein encoded in the human by the gene TRIM33, a member of the tripartite motif family.
TRIM33 is thought to be a transcriptional corepressor. However, unlike the related TRIM24 and TRIM28 proteins, only a few transcription factors that interact with TRIM33, such as SMAD4, have been identified.
Structure
The protein is a member of the tripartite motif family. This motif includes three zinc-binding domains:
RING
B-box type 1 zinc finger
B-box type 2 zinc finger
and a coiled-coil region.
Three alternatively spliced transcript variants for this gene have been described, however, the full-length nature of one variant has not been determined.
Interactions
TRIM33 has been shown to interact with TRIM24.
Role in cancer
TRIM33 acts as a tumor suppressor gene, preventing the development of chronic myelomonocytic leukemia.
TRIM33 also regulates the TRIM28 receptor and promotes physiological aging of hematopoietic stem cells.
TRIM33 acts as an oncogene by preventing apoptosis in B-cell leukemias.
References
Further reading
External links
Gene expression
Transcription coregulators | TRIM33 | [
"Chemistry",
"Biology"
] | 260 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
15,220,286 | https://en.wikipedia.org/wiki/Yttria-stabilized%20zirconia | Yttria-stabilized zirconia (YSZ) is a ceramic in which the cubic crystal structure of zirconium dioxide is made stable at room temperature by an addition of yttrium oxide. These oxides are commonly called "zirconia" (ZrO2) and "yttria" (Y2O3), hence the name.
Stabilization
Pure zirconium dioxide undergoes a phase transformation from monoclinic (stable at room temperature) to tetragonal (at about 1173 °C) and then to cubic (at about 2370 °C), according to the scheme
monoclinic (1173 °C) ↔ tetragonal (2370 °C) ↔ cubic (2690 °C) ↔ melt.
During these transformations, zirconia can experience volume expansion of up to 5-6%. This change can induce internal stresses, leading to cracking or fracture in ceramic materials.
Obtaining stable sintered zirconia ceramic products is difficult because of the large volume change, about 5%, accompanying the transition from tetragonal to monoclinic. Stabilization of the cubic polymorph of zirconia over a wider range of temperatures is accomplished by substitution of some of the Zr4+ ions (ionic radius of 0.82 Å, too small for the ideal fluorite lattice characteristic of cubic zirconia) in the crystal lattice with slightly larger ions, e.g., those of Y3+ (ionic radius of 0.96 Å). The resulting doped zirconia materials are termed stabilized zirconias.
Materials related to YSZ include calcia-, magnesia-, ceria- or alumina-stabilized zirconias, or partially stabilized zirconias (PSZ). Hafnia-stabilized zirconia has about 25% lower thermal conductivity, making it more suitable for thermal barrier applications.
However, 8–9 mol% YSZ is known not to be completely stabilized in the pure cubic YSZ phase, even at temperatures above 1000 °C.
Commonly used abbreviations in conjunction with yttria-stabilized zirconia are:
Partly stabilized zirconia ZrO2:
PSZ – partially stabilized zirconia
TZP – tetragonal zirconia polycrystal
4YSZ: with 4 mol% Y2O3 partially stabilized ZrO2, yttria-stabilized zirconia
Fully stabilized zirconias ZrO2:
FSZ – fully stabilized zirconia
CSZ – cubic stabilized zirconia
8YSZ – with 8 mol% Y2O3 fully stabilized ZrO2
8YDZ – 8–9 mol% Y2O3-doped ZrO2 (the material is not completely stabilized and decomposes at high application temperatures; see the next paragraphs)
Thermal expansion coefficient
The thermal expansion coefficient depends on the modification of zirconia as follows:
Monoclinic: 7·10−6/K
Tetragonal: 12·10−6/K
Y2O3 stabilized: 10.5·10−6/K
Ionic conductivity and degradation
By the addition of yttria to pure zirconia (e.g., fully stabilized YSZ), Y3+ ions replace Zr4+ on the cationic sublattice. Thereby, oxygen vacancies are generated due to charge neutrality, which in Kröger–Vink notation reads

$$\mathrm{Y_2O_3} \xrightarrow{\mathrm{ZrO_2}} 2\,\mathrm{Y}_{\mathrm{Zr}}' + V_{\mathrm{O}}^{\bullet\bullet} + 3\,\mathrm{O}_{\mathrm{O}}^{\times}$$

meaning that two Y3+ ions generate one vacancy on the anionic sublattice. This facilitates moderate conductivity of yttria-stabilized zirconia for O2− ions (and thus electrical conductivity) at elevated and high temperatures. This ability to conduct O2− ions makes yttria-stabilized zirconia well suited for application as a solid electrolyte in solid oxide fuel cells.
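The charge balance can be turned into a quick site-counting estimate of how many oxygen sites are left vacant for a given dopant level; the sketch below does this for 4 and 8 mol% Y2O3. This is elementary fluorite-lattice stoichiometry, not a fitted defect model.

```python
# Worked example of the charge balance above: fraction of oxygen
# sites left vacant in (ZrO2)_{1-x}(Y2O3)_x.

def vacancy_fraction(x):
    cations = (1 - x) * 1 + x * 2   # Zr + Y per formula unit
    oxygens = (1 - x) * 2 + x * 3   # O atoms actually supplied
    o_sites = 2 * cations           # fluorite: two O sites per cation
    return (o_sites - oxygens) / o_sites

for x in (0.04, 0.08):
    print(f"{x*100:.0f} mol% Y2O3 -> "
          f"{vacancy_fraction(x)*100:.1f}% of O sites vacant")

# 8 mol% Y2O3 leaves about 3.7% of the anion sites vacant, which is
# what makes the material a good O2- conductor.
```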
For low dopant concentrations, the ionic conductivity of the stabilized zirconias increases with increasing Y2O3 content. It has a maximum around 8–9 mol% almost independent of the temperature (800–1200 °C). Unfortunately, 8–9 mol% YSZ (8YSZ, 8YDZ) also turned out to be situated in the 2-phase field (c+t) of the YSZ phase diagram at these temperatures, which causes the material's decomposition into Y-enriched and depleted regions on the nanometre scale and, consequently, the electrical degradation during operation. The microstructural and chemical changes on the nanometre scale are accompanied by the drastic decrease of the oxygen-ion conductivity of 8YSZ (degradation of 8YSZ) of about 40% at 950 °C within 2500 hours. Traces of impurities like Ni, dissolved in the 8YSZ, e.g., due to fuel-cell fabrication, can have a severe impact on the decomposition rate (acceleration of inherent decomposition of the 8YSZ by orders of magnitude) such that the degradation of conductivity even becomes problematic at low operation temperatures in the range of 500–700 °C.
Nowadays, more complex ceramics like co-doped zirconia (e.g., with scandia) are in use as solid electrolytes.
Applications
YSZ has a number of applications:
For its hardness and chemical inertness (e.g., tooth crowns).
As a refractory (e.g., in jet engines).
As a thermal barrier coating in gas turbines.
As an electroceramic due to its ion-conducting properties (e.g., to determine oxygen content in exhaust gases, to measure pH in high-temperature water, in fuel cells).
Used in the production of a solid oxide fuel cell (SOFC). YSZ is used as the solid electrolyte, which enables oxygen ion conduction while blocking electronic conduction. In order to achieve sufficient ion conduction, an SOFC with a YSZ electrolyte must be operated at high temperatures (800–1000 °C). While it is advantageous that YSZ retains mechanical robustness at those temperatures, the high temperature necessary is often a disadvantage of SOFCs. The high density of YSZ is also necessary in order to physically separate the gaseous fuel from oxygen, or else the electrochemical system would produce no electrical power.
For its hardness and optical properties in monocrystal form (see "cubic zirconia"), it is used as jewelry.
As a material for non-metallic knife blades, produced by Boker and Kyocera companies.
In water-based pastes for do-it-yourself ceramics and cements. These contain microscopic YSZ milled fibers or sub-micrometer particles, often with potassium silicate and zirconium acetate binders (at mildly acidic pH). The cementation occurs on removal of water. The resulting ceramic material is suitable for very high-temperature applications.
YSZ doped with rare-earth materials can act as a thermographic phosphor and a luminescent material.
Historically used for glowing rods in Nernst lamps.
As a high-temperature coating, produced by ZYP Coatings, Inc.
As a high-precision alignment sleeve for optical fiber connector ferrules.
See also
References
Further reading
Zirconium dioxide
Refractory materials
Electrochemistry
Yttrium compounds | Yttria-stabilized zirconia | [
"Physics",
"Chemistry"
] | 1,538 | [
"Refractory materials",
"Electrochemistry",
"Materials",
"Matter"
] |
15,221,133 | https://en.wikipedia.org/wiki/Euler%20force | In classical mechanics, the Euler force is the fictitious tangential force
that appears when a non-uniformly rotating reference frame is used for analysis of motion and there is variation in the angular velocity of the reference frame's axes. The Euler acceleration (named for Leonhard Euler), also known as azimuthal acceleration or transverse acceleration, is that part of the absolute acceleration that is caused by the variation in the angular velocity of the reference frame.
Intuitive example
The Euler force will be felt by a person riding a merry-go-round. As the ride starts, the Euler force will be the apparent force pushing the person to the back of the horse; and as the ride comes to a stop, it will be the apparent force pushing the person towards the front of the horse. A person on a horse close to the perimeter of the merry-go-round will perceive a greater apparent force than a person on a horse closer to the axis of rotation.
Mathematical description
The direction and magnitude of the Euler acceleration is given, in the rotating reference frame, by:

$$\mathbf{a}_{\mathrm{Euler}} = -\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} \times \mathbf{r}$$

where ω is the angular velocity of rotation of the reference frame and r is the vector position of the point in the reference frame. The Euler force on an object of mass m in the rotating reference frame is then

$$\mathbf{F}_{\mathrm{Euler}} = m\,\mathbf{a}_{\mathrm{Euler}} = -m\,\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} \times \mathbf{r}$$
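A numeric illustration for the merry-go-round rider described earlier is given below; the mass, radius, and spin-up rate are assumed values chosen only to put a scale on the force.

```python
import numpy as np

# Numeric illustration of F_Euler = -m (dw/dt) x r for the
# merry-go-round rider; all input values are assumptions.

m = 70.0                            # rider mass, kg
r = np.array([4.0, 0.0, 0.0])       # position relative to the axis, m
alpha = np.array([0.0, 0.0, 0.5])   # dw/dt, rad/s^2, during spin-up

F_euler = -m * np.cross(alpha, r)
print(F_euler)   # [  0. -140.   0.] N: a 140 N tangential force
                 # pushing the rider opposite to the spin-up direction
```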
See also
Fictitious force
Coriolis effect
Centrifugal force
Rotating reference frame
Angular acceleration
Notes and references
Fictitious forces
Rotation | Euler force | [
"Physics"
] | 288 | [
"Physical phenomena",
"Force",
"Physical quantities",
"Classical mechanics stubs",
"Classical mechanics",
"Fictitious forces",
"Rotation",
"Motion (physics)"
] |
15,221,427 | https://en.wikipedia.org/wiki/Alternating%20current%20field%20measurement | Alternating current field measurement (ACFM) is an electromagnetic technique for non-destructive testing detection and sizing of surface breaking discontinuities. It was derived from the methods used in eddy-current testing and works on all metals, ferrous or non-ferrous. Since it doesn't require direct electrical contact with the surface it can work through thin coatings such as paint. This practice is intended for use on welds in any metallic material.
Use
The system was originally developed in the early 1990s for the sub-sea and topside inspection of offshore structures without the need to remove the item's protective coating. Since then it has been successfully applied to onshore process plants and has gained use for inspections of offshore assets. Applications include in-service inspection of welded items and crack detection in vessels. Its main purpose is to evaluate welds in the area of the toe for surface breaking discontinuities such as fatigue cracks.
Method
The ACFM probe induces a uniform alternating current in the area under test and detects the magnetic field of the resulting current near the surface.
This current is undisturbed if the area is defect free. A crack redirects the current around the ends and faces of the crack. The ACFM instrument measures these disturbances in the field and uses mathematical modelling to estimate crack size. The lateral and vertical components of the magnetic field are analyzed; disturbances indicate a crack is present, and the size and depth of the crack can be calculated.
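The interpretation step can be illustrated with a synthetic signal: the field component parallel to the induced current (often called Bx) dips over a crack, while the vertical component (Bz) shows a peak and trough near the two crack ends. The sketch below fabricates such a signal and reads off a length estimate; it is a toy illustration of the principle, not the instrument's actual model.

```python
import numpy as np

# Toy version of the ACFM interpretation step described above. The
# synthetic Bx/Bz traces below are illustrative shapes only.

x = np.linspace(-50, 50, 1001)   # probe position along the weld, mm
ends = (-10.0, 10.0)             # "true" crack ends, mm

bx = 1.0 - 0.3 * np.exp(-(x / 12.0) ** 2)          # broad dip over crack
bz = (np.exp(-((x - ends[1]) / 3.0) ** 2)          # peak at one end
      - np.exp(-((x - ends[0]) / 3.0) ** 2))       # trough at the other

length = x[np.argmax(bz)] - x[np.argmin(bz)]       # end-to-end distance
depth_proxy = 1.0 - bx.min()                       # dip amplitude
print(f"estimated length ~ {length:.1f} mm, Bx dip ~ {depth_proxy:.2f}")
# -> length ~ 20 mm, recovering the assumed crack ends; a real
#    instrument maps the dip amplitude to depth via modelling.
```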
Capabilities
The method both detects cracks and estimates their size and length. It can inspect any electrically conductive material. Data is recorded electronically for off-line evaluation if necessary and provides a permanent record of indications. Tests can be repeated and compared over time for ongoing monitoring.
The method is non-invasive and can carry out inspection without removing any protective paint coating. With suitable probes, the method can be used on hot surfaces.
New technologies now allow ACFM to be carried out on subsea assets with the assistance of ROVs (remotely operated vehicles).
Limitations
Not recommended for short sections or small items.
Locations of weld repairs and localised grinding can cause spurious indications.
Multiple defects reduce the ability to estimate defect depth.
Signals may be more difficult to interpret
Cannot determine the direction of propagation of a defect into the parent metal
The probability of detection and false detection rate is generally good, but it is application dependent.
Published performance demonstration initiative results specific to ACFM are limited. Based upon hundreds of performance demonstration initiatives covering MT, ET, and alternating current field measurement, the probability of detection for ACFM is given below.
In refereed PDIs based on fit-for-purpose in-service criteria, oil and gas and drilling contractors use a 6 mm length and 0.5 mm depth threshold, with width not visually determined. Using MT based on API RP 2X and ET based on BS 1711 on ASTM A36 steel, the probability of detection for MT and ET was 90%, while for ACFM it was 76%.
Preparation
Non-adherent protection such as insulation must be removed. The system can operate through non-conductive adherent coatings, but there may be a need to remove heavy or loose scale and spatter.
References
Nondestructive testing
Casting (manufacturing)
Electromagnetism
Engineering mechanics | Alternating current field measurement | [
"Physics",
"Materials_science",
"Engineering"
] | 675 | [
"Electromagnetism",
"Physical phenomena",
"Nondestructive testing",
"Materials testing",
"Civil engineering",
"Fundamental interactions",
"Mechanical engineering",
"Engineering mechanics"
] |
15,221,812 | https://en.wikipedia.org/wiki/Virtual%20security%20switch | A virtual security switch is a software Ethernet switch with embedded security controls within it that runs within virtual environments such as VMware vSphere, Citrix XenDesktop, Microsoft Hyper-V and Virtual Iron. The primary purpose of a virtual security switch is to provide security measures such as isolation, control and content inspection between virtual machines.
Virtual machines within enterprise server environments began to gain popularity in 2005 and quickly started to become a standard in the way companies deploy servers and applications. In order to deploy these servers within a virtual environment, a virtual network needed to be formed. As a result, companies such as VMware created a resource called a virtual switch. The purpose of the virtual switch was to provide network connectivity within the virtual environment so that virtual machines and applications could communicate within the virtual network as well as with the physical network.
This concept of a virtual network introduced a number of problems as it relates to security within the virtual environment, because the environment contained only virtual switching technology and no security technologies. Unlike physical networks that have switches with access control lists (ACLs), firewalls, antivirus gateways, or intrusion prevention devices, the virtual network was wide open. The virtual security switch concept is one where switching and security have joined forces, so that security controls can be placed within the virtual switch and provide per-port inspection and isolation within the virtual environment. This concept allows security to get as close as possible to the end points that it intends to protect, without having to reside on the end points (host-based on virtual machines) themselves.
By eliminating the need to deploy host-based security solutions on virtual machines, a significant performance improvement can be achieved when deploying security within the virtual environment. This is because virtual machines share computing resources (e.g. CPU time, memory or disk space), unlike physical servers, which have dedicated resources. One way of understanding this is to picture 20 virtual machines running on a dual-CPU server, each virtual server having its own host-based firewall running on it. This would make up 20 firewalls using the same resources that the 20 virtual machines are using. This defeats the purpose of virtualization, which is to apply those resources to virtual servers, not security applications. Deploying security centrally within the virtual environment is in a sense one firewall versus 20 firewalls.
Limitations
Because switches are layer 2 devices that create a single broadcast domain, virtual security switches alone cannot fully replicate the network segmentation and isolation typically employed in a multi-tiered physical network. To address this limitation, a number of networking, security and virtualization vendors have begun to offer virtual firewalls, virtual routers and other network devices to allow virtual networks to offer more robust security and network organization solutions.
Problem example
Because virtual machines are essentially operating systems and applications packaged into a single file (called disk images), they have now become more mobile. For the first time in history, servers can be moved around, exchanged and shared just like MP3 files shared on the peer-to-peer networks. Administrators can now download pre-installed virtual servers via the Internet to speed up the deployment time of new servers. No longer is it required for an administrator to go through the lengthy software installation process, because these virtual disk images have pre-installed operating systems and applications. They are virtual appliances.
This mobility of server images has now created the potential problem that entire servers can become infected and passed around in the wild. Imagine downloading the latest Fedora Linux Server from a web site like ThoughtPolice.co.uk, installing it and later learning that there was a Trojan horse on that server that later took down your virtual network. This could be catastrophic.
While there is a trust factor that now needs to be taken into account when downloading virtual server images, the virtual security switch concept supports that trust decision by providing isolation and security monitoring between virtual machines. A virtual security switch can isolate VMs from each other, restrict what types of communication are allowed between them, and monitor for the spread of malicious content or denial-of-service attacks.
History
Reflex Security introduced the industry’s first 10 gigabit Network Security Switch which had a port density to support 80 physical servers connected to it. In 2008, Vyatta began to ship an open source network operating system designed to offer layer 3 services such as routing, firewall, network address translation (NAT), dynamic host configuration and virtual private network (VPN) within and between hypervisors. Since then, VMware, Cisco, Juniper and others have shipped virtual networking security products that incorporate layer 2 and layer 3 switching and routing.
References
Further reading
Virtualization
Ethernet | Virtual security switch | [
"Engineering"
] | 952 | [
"Computer networks engineering",
"Virtualization"
] |
2,434,383 | https://en.wikipedia.org/wiki/Local%20reference%20frame | In theoretical physics, a local reference frame (local frame) refers to a coordinate system or frame of reference that is only expected to function over a small region or a restricted region of space or spacetime.
The term is most often used in the context of the application of local inertial frames to small regions of a gravitational field. Although gravitational tidal forces will cause the background geometry to become noticeably non-Euclidean over larger regions, if we restrict ourselves to a sufficiently small region containing a cluster of objects falling together in an effectively uniform gravitational field, their physics can be described as the physics of that cluster in a space free from explicit background gravitational effects.
Equivalence principle
When constructing his general theory of relativity, Einstein made the following observation: a freely falling object in a gravitational field will not be able to detect the existence of the field by making local measurements ("a falling man feels no gravity"). Einstein was then able to complete his general theory by arguing that the physics of curved spacetime must reduce over small regions to the physics of simple inertial mechanics (in this case special relativity) for small freefalling regions.
Einstein referred to this as "the happiest idea of my life".
Laboratory frame
In physics, the laboratory frame of reference, or lab frame for short, is a frame of reference centered on the laboratory in which the experiment (either real or thought experiment) is done. This is the reference frame in which the laboratory is at rest. Also, this is usually the frame of reference in which measurements are made, since they are presumed (unless stated otherwise) to be made by laboratory instruments. An example of instruments in a lab frame, would be the particle detectors at the detection facility of a particle accelerator.
See also
Breit frame
Center-of-mass frame
Frame bundle
Inertial frame of reference
Local coordinates
Local spacetime structure
Lorentz covariance
Minkowski space
Normal coordinates
Frames of reference | Local reference frame | [
"Physics",
"Mathematics"
] | 389 | [
"Frames of reference",
"Classical mechanics",
"Theory of relativity",
"Relativity stubs",
"Coordinate systems"
] |
2,434,399 | https://en.wikipedia.org/wiki/Isaak%20Khalatnikov | Isaak Markovich Khalatnikov (, ; 17 October 1919 – 9 January 2021) was a leading Soviet theoretical physicist who made significant contributions to many areas of theoretical physics, including general relativity, quantum field theory, as well as the theory of quantum liquids. He is well known for his role in developing the Landau-Khalatnikov theory of superfluidity and the so-called BKL conjecture in the general theory of relativity.
Life and career
Isaak Khalatnikov was born into a Ukrainian Jewish family in Yekaterinoslav (now Dnipro, Ukraine) and graduated from Dnipropetrovsk State University with a degree in Physics in 1941. He had been a member of the Communist Party since 1944. He earned his doctorate in 1952. His wife Valentina was the daughter of Revolutionary hero Mykola Shchors.
Much of Khalatnikov's research was a collaboration with, or inspired by, Lev Landau, including the Landau-Khalatnikov theory of superfluidity.
During 1969 he briefly worked as a part-time professor of theoretical physics at Leiden University.
In 1970, inspired by the mixmaster model introduced by Charles W. Misner, then at Princeton University, Khalatnikov, together with Vladimir Belinski and Evgeny Lifshitz, introduced what has become known as the BKL conjecture, which is widely regarded as one of the most outstanding open problems in the classical theory of gravitation.
Khalatnikov directed the Landau Institute for Theoretical Physics in Moscow from 1965 to 1992. He was elected to the Academy of Sciences of the Soviet Union in 1984. He has been awarded the Landau Gold Medal, the Humboldt Prize, and the Marcel Grossmann Award. He was also a foreign member of the Royal Society of London.
He was portrayed by actor Georg Nikoloff in the film The Theory of Everything.
Khalatnikov died in Chernogolovka on 9 January 2021, aged 101.
Honours and awards
Order "For Merit to the Fatherland", 3rd class (1999)
Order of Alexander Nevsky (2020)
Order of the October Revolution (1986)
Order of the Patriotic War, 2nd class (1985)
Three Orders of the Red Banner of Labour (1954, 1956, 1975)
Order of Friendship of Peoples (1979)
Order of the Badge of Honour (1950)
Stalin Prize, 2nd class (1953)
Marcel Grossmann Award (2012) "For the discovery of a general solution of the Einstein equations with a cosmological singularity of an oscillatory chaotic character known as the BKL singularity"
Asteroid 468725 Khalat was named in his honor. The official naming citation was published by the Minor Planet Center on 18 May 2019.
Partial bibliography
Books
Selected academic works
See also
Fermi liquid theory
Landau pole
Quantum triviality
References
1919 births
2021 deaths
20th-century Russian physicists
21st-century Russian physicists
Scientists from Dnipro
Communist Party of the Soviet Union members
Foreign members of the Royal Society
Full Members of the Russian Academy of Sciences
Full Members of the USSR Academy of Sciences
Academic staff of the Moscow Institute of Physics and Technology
Oles Honchar Dnipro National University alumni
Recipients of the Order "For Merit to the Fatherland", 3rd class
Recipients of the Order of Alexander Nevsky
Recipients of the Order of Friendship of Peoples
Recipients of the Order of the Red Banner of Labour
Recipients of the Stalin Prize
Jewish Russian physicists
Russian men centenarians
Russian cosmologists
20th-century Russian memoirists
Soviet cosmologists
Soviet physicists
Superfluidity
Russian scientists
Jewish centenarians
Academic staff of Leiden University | Isaak Khalatnikov | [
"Physics",
"Chemistry",
"Materials_science"
] | 737 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Matter",
"Fluid dynamics"
] |
2,434,620 | https://en.wikipedia.org/wiki/Chicken%20gun | A chicken gun or flight impact simulator is a large-diameter, compressed-air gun used to fire bird carcasses at aircraft components in order to simulate high-speed bird strikes during the aircraft's flight. Jet engines and aircraft windshields are particularly vulnerable to damage from such strikes, and are the most common target in such tests. Although various species of bird are used in aircraft testing and certification, the device acquired the common name of "chicken gun" as chickens are the most commonly used 'ammunition' owing to their ready availability.
Context
Bird strikes are a significant hazard to flight safety, particularly around takeoff and landing, where crew workload is highest and there is scant time for recovery before a potential impact with the ground. The speeds involved in a collision between a jet aircraft and a bird can be considerable, resulting in a large transfer of kinetic energy. A bird colliding with an aircraft windshield could penetrate or shatter it, injuring the flight crew or impairing their ability to see. At high altitudes such an event could cause uncontrolled decompression. A bird ingested by a jet engine can break the engine's compressor blades, potentially causing catastrophic damage.
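The scale of the energies involved follows from the standard kinetic-energy relation E = ½mv². The sketch below evaluates it for an assumed bird mass and closing speed; both are illustrative values, not figures from any certification standard.

```python
# Illustrative bird-strike kinetic energy, E = 1/2 * m * v^2.
# Bird mass and closing speed are assumed example values.

bird_mass_kg = 1.8          # roughly a large gull or small chicken
closing_speed_kmh = 300.0   # plausible closing speed near takeoff/landing

v = closing_speed_kmh / 3.6               # convert km/h to m/s
energy_j = 0.5 * bird_mass_kg * v**2
print(f"impact energy ~ {energy_j/1000:.1f} kJ")   # about 6.2 kJ
```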
Multiple measures are used to prevent bird strikes, such as the use of deterrent systems at airports to prevent birds from gathering, population control using birds of prey or firearms, and recently avian radar systems that track flocks of birds and give warnings to pilots and air traffic controllers.
Despite this, the risk of bird strikes is impossible to eliminate and therefore most government certification authorities such as the US Federal Aviation Administration and the European Aviation Safety Agency require that aircraft engines and airframes be resilient against bird strikes to a certain degree as part of the airworthiness certification process. In general, an engine should not suffer an uncontained failure (an event where rotating parts are ejected from the engine casing) after impact with a suitably-sized bird, and a bird strike to the airframe of a craft should not prevent "continued safe flight [and a] normal landing".
History
The first recorded chicken gun was built in 1942 by the US Civil Aeronautics Administration in collaboration with the Westinghouse Electric and Manufacturing Company. Built at Westinghouse's High Power Laboratory in Pittsburgh, it was capable of firing bird carcasses at high speed, although most tests were conducted at lower muzzle velocities. The gun used compressed air as its propellant, with a compressor storing air into an accumulator until the desired pressure was reached. To fire the gun, an operator triggered the opening of an electric quick-release valve, dumping the compressed air into the barrel. Different muzzle velocities were achieved by varying the pressure stored in the accumulator.
The tests conducted with this gun were the first of their kind, and showed that the glass used in the windshields of common passenger aircraft such as the Douglas DC-3 was extremely vulnerable to bird strikes; panels were penetrated completely by a bird traveling at even a relatively low speed. Subsequent testing showed that laminate panels made of glass interleaved with polyvinyl chloride were far more resistant.
The gun was used at the High Power Laboratory until November 1943. In early 1945, it was moved to a CAA research & development location in Indianapolis, called the Indianapolis Experimental Station, where it was used to test components for various commercial aircraft manufacturers, before being retired at some point in 1947. A similar gun was independently developed by the De Havilland Aircraft Company in the United Kingdom in the mid-1950s. The UK's Royal Aircraft Establishment built a chicken gun in 1961, and in 1967 the Canadian National Research Council's Division of Mechanical Engineering used the RAE's design as a basis for their "Flight Impact Simulator Facility", a pneumatic gun based next to Ottawa airport. This gun remained in frequent use until 2016, at which point it was donated to the Canada Aviation and Space Museum and replaced by a pair of more modern guns. The replacements can accommodate different sized birds more easily through the use of a modular barrel.
In the 1970s, Goodyear Aerospace developed a chicken gun that stored compressed air behind a ceramic diaphragm and used a cardboard sabot to center and stabilize the chicken. When fired, a needle struck the diaphragm, rupturing the seal and allowing the air to propel the projectile down the barrel. A metal ring on the muzzle stopped the sabot, but allowed the chicken to escape the barrel.
The United States Air Force built the AEDC Ballistic Range S-3 at Arnold Engineering Development Complex in 1972 to test the canopies and windshields of military aircraft. Like previous chicken guns, S-3 used compressed air to launch its projectiles. The gun was later used in the development and certification of multiple US military aircraft, including the F-4, F-111 and A-10. As of the most recent reports, the gun was still in operation.
Use in aircraft certification
Chicken guns are routinely used in the process of proving compliance with certification regulations. Given their complexity and the expertise required to operate them, an aircraft manufacturer will typically contract with a facility that operates a gun to perform a test against a given standard. The component to be tested is mounted securely on a frame, the gun fires a bird at it, and the results are examined for compliance with the relevant standards. Most tests are performed with the gun pressurized so that the bird is launched at approximately the relative velocity of a real collision between a bird and an aircraft.
The FAA does not specify the species of bird that should be used for testing, but does state that the birds should not be frozen, as this would not accurately reflect the reality of a strike. Chickens are used as they are cheap and readily available.
There have been efforts to develop artificial bird analogs for use in impact tests, to replace the use of carcasses. The motivations for this include ensuring that results are easily reproducible across the industry, cost, and sensitivity to the views of animal rights activists. However, concerns have been expressed by some engineers that tests with artificial birds do not accurately represent the forces involved in real bird strikes, as the analogs lack bones. Some go further and state that the farm-raised birds commonly used in tests are also unrepresentative, owing to the lower density of their muscle tissue.
Notable uses
During the development of the Boeing 757 in the 1970s, the cockpit roof was subjected to a bird strike test wherein a chicken was fired into a stationary cockpit. To the surprise of the Boeing engineers, the chicken penetrated the skin of the aircraft. As a result, the cockpit of the 757, and that of the 767 which shared the same design, had to be reinforced. Several 767s were already in service, and had to be recalled for retrofitting of the reinforcements. Later in the 757's development process a bird strike test was conducted on the aircraft's windows, again by firing a chicken at them. The UK Civil Aviation Authority's certification requirements at the time were more stringent than the FAA's, and required the metal around the windows to also resist a bird strike. The 757 failed this test, requiring further re-engineering.
After the Space Shuttle Columbia disaster in 2003, the chicken gun at AEDC Ballistic Range S-3 was repurposed to test the resistance of various components of the Shuttle orbiter and launch fuel tanks to impacts from insulating foam. The intent was to discover the exact cause of the disaster, and establish whether any modifications to the Shuttle were required.
In popular culture
The comedy series Royal Canadian Air Farce has a recurring skit in which a "chicken cannon" is used to fire various objects, originally including a rubber chicken, at a picture of a well-known person, often a politician.
See also
Blade off testing
Blue Peacock#Chicken-powered nuclear bomb
References
Aviation safety
Aerospace engineering | Chicken gun | [
"Engineering"
] | 1,604 | [
"Aerospace engineering"
] |
2,436,311 | https://en.wikipedia.org/wiki/Spin%20coating | Spin coating is a procedure used to deposit uniform thin films onto flat substrates. Usually a small amount of coating material in liquid form is applied on the center of the substrate, which is either spinning at low speed or not spinning at all. The substrate is then rotated at speeds up to 10,000 rpm to spread the coating material by centrifugal force. A machine used for spin coating is called a spin coater, or simply spinner.
Rotation is continued while the fluid spins off the edges of the substrate, until the desired thickness of the film is achieved. The applied solvent is usually volatile, and simultaneously evaporates. The higher the angular speed of spinning, the thinner the film. The thickness of the film also depends on the viscosity and concentration of the solution, and the solvent. Pioneering theoretical analysis of spin coating was undertaken by Emslie et al., and has been extended by many subsequent authors (including Wilson et al., who studied the rate of spreading in spin coating; and Danglad-Flores et al., who found a universal description to predict the deposited film thickness).
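For the ideal Newtonian case treated by Emslie et al., balancing centrifugal driving against viscous drag gives a closed-form thinning law. The sketch below evaluates it for assumed, order-of-magnitude fluid parameters; evaporation is ignored, so real resist films will deviate.

```python
import math

# Emslie-Bonner-Peck thinning of a Newtonian film on a spinning disc:
#   dh/dt = -(2 rho omega^2 / 3 eta) h^3
#   =>  h(t) = h0 / sqrt(1 + 4 rho omega^2 h0^2 t / (3 eta))
# All fluid parameters below are assumed, illustrative values.

rho = 1000.0                      # fluid density, kg/m^3
eta = 0.01                        # viscosity, Pa.s (a thin, resist-like fluid)
rpm = 3000.0
omega = 2 * math.pi * rpm / 60    # angular speed, rad/s
h0 = 50e-6                        # initial film thickness, m

def thickness(t):
    """Film thickness after t seconds of spinning (evaporation ignored)."""
    return h0 / math.sqrt(1 + 4 * rho * omega**2 * h0**2 * t / (3 * eta))

for t in (1, 10, 30):
    print(f"t = {t:3d} s: h = {thickness(t)*1e6:.2f} um")
```

Consistent with the text, the model predicts thinner films at higher angular speed (through the omega squared term) and thicker films at higher viscosity.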
Spin coating is widely used in microfabrication of functional oxide layers on glass or single crystal substrates using sol-gel precursors, where it can be used to create uniform thin films with nanoscale thicknesses. It is used intensively in photolithography, to deposit layers of photoresist about 1 micrometre thick. Photoresist is typically spun at 20 to 80 revolutions per second for 30 to 60 seconds. It is also widely used for the fabrication of planar photonic structures made of polymers.
One advantage of spin coating thin films is the uniformity of the film thickness; owing to self-leveling, thicknesses vary by no more than 1%. The thickness of films produced in this manner also affects the optical properties of the material. This is important for electrochemical testing, specifically when recording absorbance readings by ultraviolet-visible spectroscopy, since thicker films have lower optical transmittance and pass less light than thinner films before the optical density of the film becomes too high. Additionally, films with poorer absorbance characteristics are less suitable candidates for techniques such as cyclic voltammetry, because low absorbance hinders electrochemical tuning of cations in an electrochemical cell. Thinner films in this regard have more desirable optical properties, which can be tuned for energy storage technologies through the spin-coating conditions. However, spin coating thicker films of polymers and photoresists can result in relatively large edge beads, whose planarization has physical limits.
References
Further reading
S. Middleman and A.K. Hochberg. "Process Engineering Analysis in Semiconductor Device Fabrication". McGraw-Hill, p. 313 (1993)
External links
Spin Coating of Thin and Ultrathin Polymer Films
Deposition of polymer films by spin casting: A quantitative analysis
Industrial processes
Semiconductor device fabrication
Thin film deposition | Spin coating | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 604 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Planes (geometry)",
"Solid state engineering"
] |
2,437,021 | https://en.wikipedia.org/wiki/CASTEP | CASTEP is a shared-source academic and commercial software package which uses density functional theory with a plane wave basis set to calculate the electronic properties of crystalline solids, surfaces, molecules, liquids and amorphous materials from first principles. CASTEP permits geometry optimisation and finite temperature molecular dynamics with implicit symmetry and geometry constraints, as well as calculation of a wide variety of derived properties of the electronic configuration. Although CASTEP was originally a serial, Fortran 77-based program, it was completely redesigned and rewritten from 1999 to 2001 using Fortran 95 and MPI for use on parallel computers by researchers at the Universities of York, Durham, St. Andrews, Cambridge and Rutherford Labs.
History
CASTEP was created in the late 1980s and early 1990s in the TCM Group of the Cavendish Laboratory in Cambridge. It was then an academic code written in Fortran77, and the name was originally derived from CAmbridge Serial Total Energy Package. In the mid-1990s it was commercialised by licensing it to Molecular Simulations International (the company was later purchased by Accelrys, in turn purchased by Biovia) in an arrangement through which the University of Cambridge received a share of the royalties, and much of the development remained with the original academic authors. The code was then redesigned and completely rewritten from 1999–2001 to make use of the features of modern Fortran, enable parallelism throughout the code and improve its software sustainability. The name CASTEP was adopted by the new codebase, but without the implied former meaning since the code was now parallel, and capable of computing many quantities besides the total energy. By this point annual sales exceeded £1m. Despite its commercialisation, CASTEP and its source code remained free to UK academics.
In 2019 the free academic licence was extended to world-wide academic use (not just UK academia). Commercial users can purchase CASTEP as part of Biovia's Materials Studio package.
Theory and approximations
Starting from the many-body wave function, an adiabatic approximation is made with respect to the nuclear and electronic coordinates (the Born–Oppenheimer approximation). The code also makes use of Bloch's theorem, by which the wavefunction of a periodic system is the product of a cell-periodic factor and a phase factor, the phase factor being represented by a plane wave. Given Bloch's theorem, it is natural to expand the cell-periodic factor itself in plane waves: the basis functions are then orthogonal, and it is easy to perform a Fourier transform from real to reciprocal space and vice versa. Fast Fourier transforms are used throughout the CASTEP code, as is the Ewald summation method for Coulombic energies. Along with plane waves and iterative diagonalization methods (via conjugate gradient or blocked Davidson algorithms), pseudopotentials are essential to the CASTEP code for reducing the computational expense of the calculation. Pseudopotentials replace the atomic nucleus and the core electrons by an effective numeric potential.
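The practical appeal of the plane-wave expansion is how cheaply quantities move between real and reciprocal space via fast Fourier transforms. The toy one-dimensional example below illustrates only that idea; it bears no relation to CASTEP's actual internals.

```python
import numpy as np

# Toy 1-D illustration of the real <-> reciprocal space transforms that a
# plane-wave DFT code performs constantly; not CASTEP's actual machinery.

n = 64                                  # real-space grid points in one cell
L = 1.0                                 # cell length
x = np.linspace(0, L, n, endpoint=False)

# A cell-periodic "density" built from a few plane-wave components.
density = 1.0 + 0.5*np.cos(2*np.pi*x/L) + 0.2*np.cos(6*np.pi*x/L)

coeffs = np.fft.fft(density) / n        # plane-wave (reciprocal) coefficients
back = np.fft.ifft(coeffs) * n          # return to the real-space grid

print(np.allclose(back.real, density))  # True: the round trip is exact
print(np.round(np.abs(coeffs[:4]), 3))  # G = 0,1,2,3 components: 1, 0.25, 0, 0.1
```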
Geometry optimisation
CASTEP is capable of optimising the atomic geometry of a system in several different ways. The default is BFGS, whereby an approximation to the Hessian matrix is built up over successive electronic minimisation steps and used to find a search direction at each step. Damped molecular dynamics is also possible and often quick to converge, sometimes even faster than BFGS, owing to wavefunction extrapolation. Damped MD is most often chosen over BFGS, however, when non-linear constraints on the ions are needed. A further alternative is the FIRE scheme, which takes approximately the same approach as damped MD, but based on slightly different methodology.
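As a minimal sketch of the damped-MD idea (an illustration of the general scheme, not of CASTEP's implementation), the toy code below relaxes a single Lennard-Jones bond length by damping the velocity at every integration step, and recovers the analytic minimum at 2^(1/6) σ.

```python
# Damped molecular dynamics on a toy 1-D geometry: a single Lennard-Jones
# bond. Forces drive the coordinate downhill while velocity damping bleeds
# off kinetic energy, so the system settles into the potential minimum.

def lj_force(r, eps=1.0, sigma=1.0):
    """Force -dV/dr for the Lennard-Jones pair potential (reduced units)."""
    return 24 * eps * (2 * (sigma / r)**12 - (sigma / r)**6) / r

r, v = 1.5, 0.0                  # initial bond length and velocity
dt, damping, mass = 0.01, 0.9, 1.0
for step in range(2000):
    f = lj_force(r)
    v = damping * (v + dt * f / mass)   # damped velocity update
    r += dt * v
    if abs(f) < 1e-8:                   # force converged: geometry relaxed
        break
print(r, 2**(1/6))   # relaxed bond length vs. the analytic LJ minimum
```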
See also
Quantum chemistry computer programs
References
External links
Source repository
Computational chemistry software
Physics software
Density functional theory software
Science and technology in Cambridgeshire | CASTEP | [
"Physics",
"Chemistry"
] | 774 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
2,437,128 | https://en.wikipedia.org/wiki/Chronoamperometry | In electrochemistry, chronoamperometry is an analytical technique in which the electric potential of the working electrode is stepped and the resulting current from faradaic processes occurring at the electrode (caused by the potential step) is monitored as a function of time. The functional relationship between current response and time is measured after applying a single or double potential step to the working electrode of the electrochemical system. Limited information about the identity of the electrolyzed species can be obtained from the ratio of the peak oxidation current to the peak reduction current. However, as with all pulsed techniques, chronoamperometry generates high charging currents, which decay exponentially with time as in any RC circuit. The Faradaic current, which is due to electron transfer events and is most often the current component of interest, decays as described by the Cottrell equation. In most electrochemical cells, this decay is much slower than the charging decay; cells with no supporting electrolyte are notable exceptions. Most commonly a three-electrode system is used. Since the current is integrated over relatively long time intervals, chronoamperometry gives a better signal-to-noise ratio in comparison to other amperometric techniques.
There are two types of chronoamperometry that are commonly used: controlled-potential chronoamperometry and controlled-current chronoamperometry. Before running controlled-potential chronoamperometry, cyclic voltammetries are run to determine the reduction potential of the analytes. Generally, chronoamperometry uses fixed-area electrodes, which are suitable for studying electrode processes of coupled chemical reactions, especially the reaction mechanism of organic electrochemistry.
Example
Anthracene in deoxygenated dimethylformamide (DMF) will be reduced (An + e− -> An−) at an electrode surface held at a sufficiently negative potential. The reduction is diffusion-limited, causing the current to drop with time (in proportion to the concentration gradient formed by diffusion).
You can do this experiment several times increasing electrode potentials from low to high. (In between the experiments, the solution should be stirred.) When you measure the current i(t) at a certain fixed time point τ after applying the voltage, you will see that at a certain moment the current i(τ) does not rise anymore; you have reached the mass-transfer-limited region. This means that anthracene arrives as fast as diffusion can bring it to the electrode.
History
In 1902, F. G. Cottrell derived the behaviour of linear diffusion at a planar electrode from the diffusion law and the Laplace transform, obtaining the Cottrell equation:

$i = nFAc_0\sqrt{\dfrac{D}{\pi t}}$

where
$i$ is the current in amps;
$n$ is the number of electrons;
$F$ is the Faraday constant;
$A$ is the area of the planar electrode in cm2;
$c_0$ is the initial concentration of the analyte in mol/cm3;
$D$ is the diffusion coefficient for the species in cm2/s;
$t$ is the time in seconds.
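As a numerical illustration, the sketch below evaluates the Cottrell decay for an assumed one-electron reduction at a small planar electrode; all parameter values are textbook-scale examples, not measurements.

```python
import math

# Cottrell current i(t) = n F A c0 sqrt(D / (pi t)),
# evaluated with assumed, textbook-scale parameter values.

n = 1            # electrons transferred
F = 96485.0      # Faraday constant, C/mol
A = 0.02         # electrode area, cm^2
c0 = 1.0e-6      # bulk concentration, mol/cm^3 (i.e. 1 mM)
D = 1.0e-5       # diffusion coefficient, cm^2/s

def cottrell(t):
    """Diffusion-limited current (A) at time t seconds after the step."""
    return n * F * A * c0 * math.sqrt(D / (math.pi * t))

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s: i = {cottrell(t)*1e6:.2f} uA")
```

The 1/sqrt(t) decay printed here is exactly the slow Faradaic decay contrasted with the faster RC charging decay above.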
Under controlled-diffusion circumstances, the current-time plot reflects the concentration gradient of the solution near the electrode surface. The current is directly proportional to the concentration at the electrode surface.
In 1922, Jaroslav Heyrovský reiterated the chronoamperometric method when he invented the polarographic method. It can use the basic circuit of the polarograph. To connect the fast recorder or oscilloscope, the dropping mercury electrode is not used; instead, static electrodes such as suspended mercury, a mercury pool, or platinum, gold, and graphite are used. In addition, the solution is not stirred. In the presence of inert electrolytes, the mass transfer process is mainly diffusion. Heyrovský derived the chronopotentiometric method from the Cottrell equation. Chronopotentiometry is an electrochemical method that can generate a stable current flowing between two different electrodes.
Application
Controlled-potential (bulk) electrolysis
One of the applications of chronoamperometry is controlled-potential (bulk) electrolysis, which is also known as potentiostatic coulometry. During this process, a constant potential is applied to the working electrode and the current is monitored over time. The analyte in one oxidation state is oxidized or reduced to another oxidation state. The current decreases to the baseline (approaching zero) as the analyte is consumed. This process yields the total charge (in coulombs) that flows in the reaction. The total charge, and from it the n value, is calculated by integrating the area under the current plot and applying Faraday's law.
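The n-value determination described above amounts to integrating the current trace to obtain Q and applying Faraday's law, n = Q/(F·N). The sketch below does this for a synthetic, exponentially decaying current; the amplitude, decay constant, and amount of analyte are all assumed values chosen so that a two-electron process falls out.

```python
import numpy as np

# Bulk-electrolysis n-value: integrate i(t) to get the total charge Q,
# then n = Q / (F * N), where N is the moles of analyte electrolyzed.
# The synthetic current decay and the quantities are assumed values.

F = 96485.0                       # Faraday constant, C/mol
moles_analyte = 1.0e-6            # mol of analyte in the cell (assumed)

t = np.linspace(0, 600, 6001)     # time grid, s
i = 1.93e-3 * np.exp(-t / 100)    # A, synthetic decaying electrolysis current

Q = np.trapz(i, t)                # total charge passed, C
n = Q / (F * moles_analyte)
print(f"Q = {Q:.4f} C  ->  n = {n:.2f} electrons per molecule")  # ~2.00
```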
The cell for controlled-potential (bulk) electrolysis is usually a two-compartment (divided) cell, containing a carbon rod auxiliary anode separated from the cathode compartment by a coarse glass frit and a methyl cellulose solvent-electrolyte plug. The reason for the two-compartment cell is to separate the cathodic and anodic reactions. The working electrode for bulk electrolysis can be an RVC disk, whose large surface area increases the rate of the reaction.
Controlled-potential electrolysis is normally used together with cyclic voltammetry, which can analyse the electrochemical behavior of the analyte or the reaction. For instance, cyclic voltammetry can reveal the cathodic potential of an analyte; once that potential is known, controlled-potential electrolysis can hold it constant for the reaction to proceed.
Double potential step chronoamperometry
Double potential step chronoamperometry (DPSCA) is a technique in which the potential of the working electrode is stepped forward for a certain period of time and then stepped back for a further period, while the current is monitored and plotted as a function of time. The method starts with an induction period, in which several initial conditions are applied to the electrochemical cell so that it can equilibrate; under these conditions, the working electrode is held at the initial potential for a specified period (typically about 3 seconds). When the induction period is over, the working electrode is stepped to another potential for a certain amount of time. After the first step is complete, the working electrode's potential is stepped back, usually to the potential prior to the forward step. The whole experiment ends with a relaxation period, during which, by default, the working electrode is held at the initial potential for approximately another second. When the relaxation period is over, the post-experiment idle conditions are applied to the cell so that the instrument can return to its idle state. Plotting the current as a function of time yields a chronoamperogram, which can also be used to generate Cottrell plots.
Two methods from chronoanalysis
Chronopotentiometry
The applications of chronopotentiometry can be divided into two parts. As an analytical method, its working range is normally 10^−4 mol/L to 10^−2 mol/L, and it can sometimes be accurate down to 10^−5 mol/L. When the analysis is at the extreme lower end of the concentration range, a lower current density can be used; the transition time can also be extended to obtain an accurate concentration determination. In this analytical role, chronopotentiometry is similar to polarography: waves that are separable in polarography are also separable in chronopotentiometry.
Chronopotentiometry is an effective method for studying electrode mechanisms. Different electrodes give different relationships between E (the electrode potential in volts) and t (the reaction time in seconds) in the chronopotentiometry graph, and by studying this relationship one can obtain information on the mechanisms of electrode reactions, such as the electrode reactions of hydrogen peroxide and oxalic acid. Because a chronopotentiometry experiment can be done over a very short time period, it is a good method for studying adsorption behavior at the electrode surface. By studying the chronopotentiometry graph of an electrode after adsorption of iron ions, it has been shown that adsorption of iron ions on platinum occurs; by studying the chronopotentiometry graph of a platinum electrode adsorbing iodine, it has been shown that the adsorption of iodine occurs in the form of iodine molecules, not iodine atoms.
Chronocoulometry
Chronocoulometry is an analytical method based on a similar principle to chronoamperometry, but it monitors the relationship between charge and time instead of current and time. Chronocoulometry differs from chronoamperometry in the following ways: the signal increases over time instead of decreasing; the act of integration minimizes noise, resulting in a smooth hyperbolic response curve; and contributions from double-layer charging and adsorbed species are easily observed.
See also
Electroanalytical methods
Electrochemical skin conductance
Potentiometric titration
Voltammetry
References
Electroanalytical methods | Chronoamperometry | [
"Chemistry"
] | 1,918 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
2,437,401 | https://en.wikipedia.org/wiki/Solar%20conjunction | Solar conjunction generally occurs when a planet or other Solar System object is on the opposite side of the Sun from the Earth. From an Earth reference, the Sun will pass between the Earth and the object. Communication with any spacecraft in solar conjunction will be severely limited due to the Sun's interference on radio transmissions from the spacecraft.
The term can also refer to the passage of the line of sight to an interior planet (Mercury or Venus) or comet being very close to the solar disk. If the planet passes directly in front of the Sun, a solar transit occurs.
Spacecraft-related issues
There is also a risk that an antenna equipped with auto-tracking will begin following the Sun's movements instead of the satellite once they are no longer in line with each other. This is because the Sun acts as a large electromagnetic noise generator, creating a signal much stronger than the satellite's tracking signal.
One example of the limitations caused by solar conjunction occurred when the NASA-JPL team placed the Curiosity rover on Mars' surface in autonomous operation mode for 25 days during a conjunction. In autonomous mode, Curiosity suspends all movements and active science operations but retains communication-independent experiments (e.g. recording atmospheric and radiation data). A more recent example occurred with the Mars rover Perseverance in October 2021.
See also
Conjunction (astronomy and astrology)
List of conjunctions (astronomy)
Opposition (astronomy)
References
Astrological aspects
Conjunctions (astronomy and astrology)
Satellite broadcasting
Spaceflight concepts | Solar conjunction | [
"Engineering"
] | 300 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
2,437,593 | https://en.wikipedia.org/wiki/Hybrid%20integrated%20circuit | A hybrid integrated circuit (HIC), hybrid microcircuit, hybrid circuit or simply hybrid is a miniaturized electronic circuit constructed of individual devices, such as semiconductor devices (e.g. transistors, diodes or monolithic ICs) and passive components (e.g. resistors, inductors, transformers, and capacitors), bonded to a substrate or printed circuit board (PCB). A PCB having components on a Printed wiring board (PWB) is not considered a true hybrid circuit according to the definition of MIL-PRF-38534.
Overview
"Integrated circuit", as the term is currently used, usually refers to a monolithic IC which differs notably from a HIC in that a HIC is fabricated by inter-connecting a number of components on a substrate whereas an IC's (monolithic) components are fabricated in a series of steps entirely on a single wafer which is then diced into chips. Some hybrid circuits may contain monolithic ICs, particularly Multi-chip module (MCM) hybrid circuits.
Hybrid circuits could be encapsulated in epoxy, as shown in the photo, or in military and space applications, a lid was soldered onto the package. A hybrid circuit serves as a component on a PCB in the same way as a monolithic integrated circuit; the difference between the two types of devices is in how they are constructed and manufactured. The advantage of hybrid circuits is that components which cannot be included in a monolithic IC can be used, e.g., capacitors of large value, wound components, crystals, inductors. In military and space applications, numerous integrated circuits, transistors and diodes, in their die form, would be placed on either a ceramic or beryllium substrate. Either gold or aluminum wire would be bonded from the pads of the IC, transistor, or diode to the substrate.
Thick film technology is often used as the interconnecting medium for hybrid integrated circuits. The use of screen printed thick film interconnect provides advantages of versatility over thin film although feature sizes may be larger and deposited resistors wider in tolerance. Multi-layer thick film is a technique for further improvements in integration using a screen printed insulating dielectric to ensure connections between layers are made only where required. One key advantage for the circuit designer is complete freedom in the choice of resistor value in thick film technology. Planar resistors are also screen printed and included in the thick film interconnect design. The composition and dimensions of resistors can be selected to provide the desired values. The final resistor value is determined by design and can be adjusted by laser trimming. Once the hybrid circuit is fully populated with components, fine tuning prior to final test may be achieved by active laser trimming.
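Printed resistor values follow the usual sheet-resistance relation R = Rs·(L/W), with Rs (ohms per square) set by the paste composition; the helper below is a hypothetical illustration of that sizing rule, with an invented paste value and geometry.

```python
# Thick-film resistor sizing with the sheet-resistance model R = Rs * L/W,
# where Rs (ohms per square) is set by the chosen resistor paste.
# The paste value and geometry below are illustrative assumptions.

def printed_resistance(r_sheet_ohm_sq, length_mm, width_mm):
    """Nominal resistance of a screen-printed resistor of given geometry."""
    return r_sheet_ohm_sq * (length_mm / width_mm)

# A 10 kOhm/sq paste printed 4 mm long and 1 mm wide is 4 squares: 40 kOhm.
# Resistors are typically printed low and laser-trimmed up to the target.
print(printed_resistance(10_000, 4.0, 1.0))   # 40000.0
```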
Thin film technology was also employed in the 1960s. Ultra Electronics manufactured circuits using a silica glass substrate. A film of tantalum was deposited by sputtering, followed by a layer of gold by evaporation. The gold layer was first etched, following the application of a photoresist, to form solder-compatible connection pads. Resistive networks were formed, also by a photoresist and etching process. These were trimmed to a high precision by selective anodization of the film. Capacitors and semiconductors were in the form of LIDs (Leadless Inverted Devices) soldered to the surface by selectively heating the substrate from the underside. Completed circuits were potted in a diallyl phthalate resin. Several customized passive networks were made using these techniques, as were some amplifiers and other specialized circuits. It is believed that some passive networks were used in the engine control units manufactured by Ultra Electronics for Concorde.
Some modern hybrid circuit technologies, such as LTCC-substrate hybrids, allow for embedding of components within the layers of a multi-layer substrate in addition to components placed on the surface of the substrate. This technology produces a circuit that is, to some degree, three-dimensional.
Hybrid ICs are especially suitable for analog signals. They were used in some early digital computers but were replaced therein by monolithic ICs which offered higher performance.
Other electronic hybrids
In the early days of telephones, separate modules containing transformers and resistors were called hybrids or hybrid coils; they have been replaced by semiconductor integrated circuits.
In the early days of transistors the term hybrid circuit was used to describe circuits with both transistors and vacuum tubes; e.g., an audio amplifier with transistors used for voltage amplification followed by a vacuum tube power output stage, as suitable power transistors were not available. This usage, and the devices, are obsolete, however amplifiers that use a tube preamplifier stage coupled with a solid state output stage are still in production, and are called hybrid amplifiers in reference to this.
See also
Chip on board, aka black blobs
System in a package
Multi-chip module (MCM)
Monolithic microwave integrated circuit (MMIC)
Solid Logic Technology (SLT)
MIL-PRF-38534
Printed circuit board (PCB)
Printed Electronic Circuit - Ancestor of the Hybrid IC
References
External links
Electronic circuits | Hybrid integrated circuit | [
"Engineering"
] | 1,073 | [
"Electronic engineering",
"Electronic circuits"
] |
2,438,400 | https://en.wikipedia.org/wiki/Longest%20uncrossed%20knight%27s%20path | The longest uncrossed (or nonintersecting) knight's path is a mathematical problem involving a knight on the standard 8×8 chessboard or, more generally, on a square n×n board. The problem is to find the longest path the knight can take on the given board, such that the path does not intersect itself. A further distinction can be made between a closed path, which ends on the same field as where it begins, and an open path, which ends on a different field from where it begins.
Known solutions
The longest open paths on an n×n board are known only for n ≤ 9. Their lengths for n = 1, 2, …, 9 are:
0, 0, 2, 5, 10, 17, 24, 35, 47
The longest closed paths are known only for n ≤ 10. Their lengths for n = 1, 2, …, 10 are:
0, 0, 0, 4, 8, 12, 24, 32, 42, 54
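The small values in the tables above can be reproduced by exhaustive search. The backtracking sketch below (an illustrative, unoptimized implementation) grows an open path move by move and rejects any move whose segment properly crosses an earlier one; it is practical only for small boards.

```python
# Exhaustive backtracking search for the longest uncrossed open knight's
# path on an n x n board (illustrative and unoptimized; fine for n <= 5).

KNIGHT_MOVES = [(1,2),(2,1),(2,-1),(1,-2),(-1,-2),(-2,-1),(-2,1),(-1,2)]

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def properly_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 cross at an interior point."""
    return (orient(p3, p4, p1) * orient(p3, p4, p2) < 0 and
            orient(p1, p2, p3) * orient(p1, p2, p4) < 0)

def longest_open_path(n):
    best = 0
    def extend(path, segments):
        nonlocal best
        best = max(best, len(segments))
        x, y = path[-1]
        for dx, dy in KNIGHT_MOVES:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < n and 0 <= nxt[1] < n) or nxt in path:
                continue   # off the board, or square already visited
            seg = (path[-1], nxt)
            if any(properly_cross(*seg, *s) for s in segments):
                continue   # the new move would cross an earlier one
            extend(path + [nxt], segments + [seg])
    for start in [(i, j) for i in range(n) for j in range(n)]:
        extend([start], [])
    return best

print([longest_open_path(n) for n in range(1, 6)])  # expect [0, 0, 2, 5, 10]
```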
Generalizations
The problem can be further generalized to rectangular n×m boards, or even to boards in the shape of any polyomino. The problem for n×m boards, where n does not exceed 8 and m may be very large, was given at the 2018 ICPC World Finals. The solution used dynamic programming and exploited the fact that an optimal solution should exhibit cyclic behavior.
Standard chess pieces other than the knight are less interesting, but fairy chess pieces like the camel ((3,1)-leaper), giraffe ((4,1)-leaper) and zebra ((3,2)-leaper) lead to problems of comparable complexity.
See also
A knight's tour is a self-intersecting knight's path visiting all fields of the board.
TwixT, a board game based on uncrossed knight's paths.
References
George Jelliss, Non-Intersecting Paths
Non-crossing knight tours
2018 ICPC World Finals solutions (Problem J)
External links
Uncrossed knight's tours
Mathematical chess problems
Computational problems in graph theory | Longest uncrossed knight's path | [
"Mathematics"
] | 427 | [
"Computational problems in graph theory",
"Mathematical chess problems",
"Recreational mathematics",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems"
] |
2,438,415 | https://en.wikipedia.org/wiki/Matrix%20isolation | Matrix isolation is an experimental technique used in chemistry and physics. It generally involves a material being trapped within an unreactive matrix. A host matrix is a continuous solid phase in which guest particles (atoms, molecules, ions, etc.) are embedded. The guest is said to be isolated within the host matrix. Initially the term matrix-isolation was used to describe the placing of a chemical species in any unreactive material, often polymers or resins, but more recently has referred specifically to gases in low-temperature solids. A typical matrix isolation experiment involves a guest sample being diluted in the gas phase with the host material, usually a noble gas or nitrogen. This mixture is then deposited on a window that is cooled to below the melting point of the host gas. The sample may then be studied using various spectroscopic procedures.
Experimental setup
The transparent window, onto which the sample is deposited, is usually cooled using compressed helium or a similar refrigerant. Experiments must be performed under a high vacuum to prevent contaminants from unwanted gases freezing to the cold window. Lower temperatures are preferred, due to the improved rigidity and "glassiness" of the matrix material. Noble gases such as argon are used not just because of their unreactivity but also because of their broad optical transparency in the solid state. Monatomic gases have a relatively simple face-centered cubic (fcc) crystal structure, which can make interpretation of the site occupancy and crystal-field splitting of the guest easier. In some cases a reactive material, for example methane, hydrogen or ammonia, may be used as the host material so that the reaction of the host with the guest species may be studied.
Using the matrix isolation technique, short-lived, highly-reactive species such as radical ions and reaction intermediates may be observed and identified by spectroscopic means. For example, the solid noble gas krypton can be used to form an inert matrix within which a reactive F3− ion can sit in chemical isolation. The reactive species can either be generated outside (before deposition) the apparatus and then be condensed, inside the matrix (after deposition) by irradiating or heating a precursor, or by bringing together two reactants on the growing matrix surface. For the deposition of two species it can be crucial to control the contact time and temperature. In twin jet deposition the two species have a much shorter contact time (and lower temperature) than in merged jet. With concentric jet the contact time is adjustable.
Spectroscopy
Within the host matrix, the rotation and translation of the guest particle is usually inhibited. Therefore, the matrix isolation technique may be used to simulate a spectrum of a species in the gas phase without rotational and translational interference. The low temperatures also help to produce simpler spectra, since only the lower electronic and vibrational quantum states are populated.
Especially infrared (IR) spectroscopy, which is used to investigate molecular vibration, benefits from the matrix isolation technique. For example, in the gas-phase IR spectrum of fluoroethane some spectral regions are very difficult to interpret, as vibrational quantum states heavily overlap with multiple rotational-vibrational quantum states. When fluoroethane is isolated in argon or neon matrices at low temperatures, the rotation of the fluoroethane molecule is inhibited. Because rotational-vibrational quantum states are quenched in the matrix isolation IR spectrum of fluoroethane, all vibrational quantum states can be identified. This is especially useful for the validation of simulated infrared spectra that can be obtained from computational chemistry.
History
Matrix isolation has its origins in the first half of the 20th century with the experiments by photo-chemists and physicists freezing samples in liquefied gases. The earliest isolation experiments involved the freezing of species in transparent, low temperature organic glasses, such as EPA (ether/isopentane/ethanol 5:5:2). The modern matrix isolation technique was developed extensively during the 1950s, in particular by George C. Pimentel. He initially used higher-boiling inert gases like xenon and nitrogen as the host material, and is often said to be the "father of matrix isolation".
Laser vaporization in matrix isolation spectroscopy was first brought about in 1969 by Schaeffer and Pearson, who used a yttrium aluminum garnet (YAG) laser to vaporize carbon, which reacted with hydrogen to produce acetylene. They also showed that laser-vaporized boron would react with HCl to create BCl. In the 1970s, Koerner von Gustorf's lab used the technique to produce free metal atoms, which were then deposited with organic substrates for use in organometallic chemistry. Spectroscopic studies of reactive intermediates were done around the early 1980s at Bell Labs, using laser-induced fluorescence to characterize molecules such as SnBi and SiC. Smalley's group combined the method with time-of-flight mass spectrometry to analyze Al clusters. Through the work of chemists like these, laser vaporization in matrix isolation spectroscopy rose in popularity owing to its ability to generate transients involving metals, alloys, and semiconductor molecules and clusters.
See also
Host–guest chemistry
Inert gas
Van der Waals interactions
Radicals
References
Further reading
Ball, David W., Zakya H. Kafafi, et al., A Bibliography of Matrix Isolation Spectroscopy, 1954-1985, Rice University Press, Houston, 1988
Spectroscopy
Physical chemistry
Reaction mechanisms | Matrix isolation | [
"Physics",
"Chemistry"
] | 1,117 | [
"Reaction mechanisms",
"Applied and interdisciplinary physics",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy",
"Physical organic chemistry",
"nan",
"Chemical kinetics",
"Physical chemistry"
] |
2,438,898 | https://en.wikipedia.org/wiki/Stuck-at%20fault | A stuck-at fault is a particular fault model used by fault simulators and automatic test pattern generation (ATPG) tools to mimic a manufacturing defect within an integrated circuit. Individual signals and pins are assumed to be stuck at Logical '1', '0' and 'X'. For example, an input is tied to a logical 1 state during test generation to assure that a manufacturing defect with that type of behavior can be found with a specific test pattern. Likewise the input could be tied to a logical 0 to model the behavior of a defective circuit that cannot switch its output pin. Not all faults can be analyzed using the stuck-at fault model. Compensation for static hazards, namely branching signals, can render a circuit untestable using this model. Also, redundant circuits cannot be tested using this model, since by design there is no change in any output as a result of a single fault.
Single stuck at line
Single stuck line is a fault model used in digital circuits. It is used for post manufacturing testing, not design testing. The model assumes one line or node in the digital circuit is stuck at logic high or logic low. When a line is stuck, it is called a fault.
Digital circuits can be divided into:
Gate level or combinational circuits which contain no storage (latches and/or flip flops) but only gates like NAND, OR, XOR, etc.
Sequential circuits which contain storage.
This fault model applies to gate level circuits, or a block of a sequential circuit which can be separated from the storage elements. Ideally a gate-level circuit would be completely tested by applying all possible inputs and checking that they gave the right outputs, but this is completely impractical: an adder to add two 32-bit numbers would require 2^64 ≈ 1.8×10^19 tests, taking 58 years at 0.1 ns/test.
The stuck-at fault model assumes that only one input on one gate will be faulty at a time, on the assumption that if more are faulty, a test that can detect any single fault should easily find multiple faults.
To use this fault model, each input pin on each gate, in turn, is assumed to be grounded, and a test vector is developed to indicate the circuit is faulty. The test vector is a collection of bits to apply to the circuit's inputs, and a collection of bits expected at the circuit's output. If the gate pin under consideration is grounded, and this test vector is applied to the circuit, at least one of the output bits will not agree with the corresponding output bit in the test vector. After obtaining the test vectors for grounded pins, each pin is connected in turn to a logic one and another set of test vectors is used to find faults occurring under these conditions. Each of these faults is called a single stuck-at-0 (s-a-0) or a single stuck-at-1 (s-a-1) fault, respectively.
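The procedure just described is easy to make concrete: model a small circuit, force one signal to a constant, and search the input space for vectors whose output diverges from the fault-free response. The two-gate circuit and signal names below are invented purely for illustration, not taken from any ATPG tool.

```python
from itertools import product

# Toy single stuck-at fault simulation on the circuit out = (a AND b) OR c.
# A fault forces one named signal to a constant 0 or 1; a test vector
# "detects" the fault when the faulty output differs from the fault-free one.

def circuit(a, b, c, fault=None):
    def sig(name, value):
        """Return the signal value, overridden if it is the stuck one."""
        return fault[1] if fault and fault[0] == name else value
    n1 = sig("n1", a & b)        # internal AND-gate output
    return sig("out", n1 | c)    # OR-gate output

for f in [(s, v) for s in ("n1", "out") for v in (0, 1)]:
    tests = [vec for vec in product((0, 1), repeat=3)
             if circuit(*vec, fault=f) != circuit(*vec)]
    print(f"stuck-at-{f[1]} on {f[0]}: detected by {tests}")
```

Even on this tiny circuit, the output shows the "windfall" effect noted below: a vector chosen for one fault usually detects several others as well.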
This model worked so well for transistor-transistor logic (TTL), which was the logic of choice during the 1970s and 1980s, that manufacturers advertised how well they tested their circuits by a number called "stuck-at fault coverage", which represented the percentage of all possible stuck-at faults that their testing process could find. While the same testing model works moderately well for CMOS, it is not able to detect all possible CMOS faults. This is because CMOS may experience a failure mode known as a stuck-open fault, which cannot be reliably detected with one test vector and requires that two vectors be applied sequentially. The model also fails to detect bridging faults between adjacent signal lines, occurring in pins that drive bus connections and array structures. Nevertheless, the concept of single stuck-at faults is widely used, and with some additional tests has allowed industry to ship an acceptable low number of bad circuits.
The testing based on this model is aided by several things:
A test developed for a single stuck-at fault often finds a large number of other stuck-at faults.
A series of tests for stuck-at faults will often, purely by serendipity, find a large number of other faults, such as stuck-open faults. This is sometimes called "windfall" fault coverage.
Another type of testing called IDDQ testing measures the way the power supply current of a CMOS integrated circuit changes when a small number of slowly changing test vectors are applied. Since CMOS draws a very low current when its inputs are static, any increase in that current indicates a potential problem.
See also
Design for test
Digital electronics
Electronic design automation | Stuck-at fault | [
"Engineering"
] | 938 | [
"Electronic engineering",
"Digital electronics"
] |
5,952,732 | https://en.wikipedia.org/wiki/Therapeutic%20angiogenesis | Therapeutic angiogenesis is an experimental area in the treatment of ischemia, the condition associated with decrease in blood supply to certain organs, tissues, or body parts. This is usually caused by constriction or obstruction of the blood vessels. Angiogenesis is the natural healing process by which new blood vessels are formed to supply the organ or part in deficit with oxygen-rich blood. The goal of therapeutic angiogenesis is to stimulate the creation of new blood vessels in ischemic organs, tissues, or parts with the hope of increasing the level of oxygen-rich blood reaching these areas.
See also
Vascular endothelial growth factor
References
Angiogenesis
Vascular procedures | Therapeutic angiogenesis | [
"Biology"
] | 299 | [
"Angiogenesis"
] |
5,953,219 | https://en.wikipedia.org/wiki/Refeeding%20syndrome | Refeeding syndrome (RFS) is a metabolic disturbance which occurs as a result of reinstitution of nutrition in people who are starved, severely malnourished, or metabolically stressed because of severe illness. When too much food or liquid nutrition supplement is consumed during the initial four to seven days following a malnutrition event, the production of glycogen, fat and protein in cells may cause low serum concentrations of potassium, magnesium and phosphate. The electrolyte imbalance may cause neurologic, pulmonary, cardiac, neuromuscular, and hematologic symptoms—many of which, if severe enough, may result in death.
Cause
Any individual who has had a negligible nutrient intake for many consecutive days and/or is metabolically stressed from a critical illness or major surgery is at risk of refeeding syndrome. Refeeding syndrome usually occurs within four days of starting to re-feed. Patients can develop fluid and electrolyte imbalance, especially hypophosphatemia, along with neurologic, pulmonary, cardiac, neuromuscular, and hematologic complications.
During fasting, the body switches its main fuel source from carbohydrates to fatty acids from fat tissue, and it is contended that amino acids from protein sources such as muscle also serve as a main energy source. The timing of protein use is contested: one view holds that the body at first practices autophagy to source amino acids, rather than using protein simultaneously with fat, and uses protein as a fuel source only once all fat has been depleted. The spleen decreases its rate of red blood cell breakdown, thus conserving red blood cells. Many intracellular minerals become severely depleted during this period, although serum levels remain normal. Importantly, insulin secretion is suppressed in this fasting state, and glucagon secretion is increased.
During refeeding, insulin secretion resumes in response to increased blood sugar, resulting in increased glycogen, fat, and protein synthesis. Refeeding increases the basal metabolic rate. The process requires phosphates, magnesium and potassium, which are already depleted, and the stores rapidly become used up. Formation of phosphorylated carbohydrate compounds in the liver and skeletal muscle depletes intracellular ATP and 2,3-diphosphoglycerate in red blood cells, leading to cellular dysfunction and inadequate oxygen delivery to the body's organs. Intracellular movement of electrolytes occurs along with a fall in the serum electrolytes, including phosphate and magnesium. Levels of serum glucose may rise, and vitamin B1 (thiamine) may fall. Abnormal heart rhythms are the most common cause of death from refeeding syndrome, with other significant risks including confusion, coma, convulsions, and cardiac failure.
Anorectics
An anorectic is a drug which reduces appetite, resulting in lower food consumption, leading to weight loss.
Examples of anorectics includes stimulants like amphetamines, methylphenidate, and cocaine, along with opiates. Abusing them can lead to prolonged periods of inadequate calorie intake, mimicking anorexia nervosa. If someone misuses these substances and then starts eating normally again, they may be at increased risk of refeeding syndrome.
Clinical situations
The syndrome can occur at the beginning of treatment for eating disorders, when patients have an increase in calorie intake, and can be fatal. It can also occur when someone has not eaten for several days at a time, usually beginning after 4–5 days with no food, and after the onset of a severe illness or major surgery. The shifting of electrolytes and fluid balance increases cardiac workload and heart rate. This can lead to acute heart failure. Oxygen consumption is increased, which strains the respiratory system and can make weaning from ventilation more difficult.
Signs and Symptoms
The signs and symptoms of refeeding syndrome can vary based on the severity of electrolyte disturbances, including weakness, arrhythmias, and respiratory difficulty. Hypophosphatemia, a key feature of refeeding syndrome, may lead to muscle weakness, heart failure, and impaired diaphragmatic function, while hypokalemia and hypomagnesemia can result in cardiac arrhythmias, seizures, and other severe complications.
Diagnosis
Refeeding syndrome can be fatal if not recognized and treated properly. The electrolyte disturbances of the refeeding syndrome can occur within the first few days of refeeding. Close monitoring of blood biochemistry is therefore necessary in the early refeeding period.
The National Institute for Health and Clinical Excellence identifies the following criteria for individuals at high risk for refeeding syndrome:
Either the patient has one or more of the following:
Body mass index (kg/m2) <16
Unintentional weight loss >15% in the past three to six months
Little or no nutritional intake for >10 days
Low levels of potassium, phosphate, or magnesium before feeding
Or the patient has two or more of the following:
Body mass index <18.5
Unintentional weight loss >10% in the past three to six months
Little or no nutritional intake for >5 days
History of alcohol misuse or drugs, including insulin, chemotherapy, antacids, or diuretics
Treatment
In critically ill patients admitted to an intensive care unit, if phosphate drops to below 0.65 mmol/L (2.0 mg/dL) from a previously normal level within three days of starting enteral or parenteral nutrition, caloric intake should be reduced to 480 kcals per day for at least two days while electrolytes are replaced. Daily doses of NADH/CoQ10/Thiamine, Vitamin B complex (strong) and a multivitamin and mineral preparation are strongly recommended. Blood biochemistry should be monitored regularly until it is stable. Although clinical trials are lacking in patients other than those admitted to intensive care, it is commonly recommended that energy intake should remain lower than that normally required for the first 3–5 days of treatment of refeeding syndrome for all patients.
History
In his 5th century BC work "On Fleshes" (De Carnibus), Hippocrates writes, "if a person goes seven days without eating or drinking anything, in this period most die; but there are some who survive that time but still die, and others are persuaded not to starve themselves to death but to eat and drink: however, the cavity no longer admits anything because the jejunum (nêstis) has grown together in that many days, and these people too die." Although Hippocrates misidentifies the cause of death, this passage likely represents an early description of refeeding syndrome. The Roman historian Flavius Josephus writing in the 1st century AD described classic symptoms of the syndrome among survivors of the siege of Jerusalem. He described the death of those who overindulged in food after the famine, whereas those who ate at a more restrained pace survived. The Shincho Koki chronicle also describes a similar outcome when starved soldiers were fed after the surrender at the siege of Tottori castle on October 25, 1581.
There were numerous cases of refeeding syndrome in the Siege of Leningrad during World War II, with Soviet civilians trapped in the city having become malnourished due to the German blockade.
A common error, repeated in multiple papers, is that "The syndrome was first described after World War II in Americans who, held by the Japanese as prisoners of war, had become malnourished during captivity and who were then released to the care of United States personnel in the Philippines."
However, closer inspection of the 1951 paper by Schnitker reveals the prisoners under study were not American POWs but Japanese soldiers who, already malnourished, surrendered in the Philippines during 1945, after the war was over.
Refeeding syndrome has also been documented among survivors of the Ebensee concentration camp upon their liberation by the United States Army in May 1945. After liberation, the inmates were fed rich soup; the stomachs of a few presumably could not handle the sudden caloric intake and digestion, and they died.
It is difficult to ascertain when the syndrome was first discovered and named, but the associated electrolyte disturbances were likely identified in the Netherlands during the so-called Hunger Winter, spanning the closing months of World War II.
See also
Minnesota Starvation Experiment
F-100 and F-75
References
Bibliography
Shils, M.E., Shike, M., Ross, A.C., Caballero, B. & Cousins, R.J. (2006). Modern nutrition in health and disease, 10th ed. Lippincott, Williams & Wilkins. Baltimore, MD.
Mahan, L.K. & Escott-Stump, S.E. (2004) Krause's Food, Nutrition, & Diet Therapy, 11th ed. Saunders, Philadelphia, PA.
Web page with link to full guideline CG32.
Mehanna, H. M., Moledina, J., & Travis, J. (2008). Refeeding syndrome: what it is, and how to prevent and treat it. BMJ (Clinical research ed.), 336(7659), 1495–1498. https://doi.org/10.1136/bmj.a301
External links
Nutrition
Metabolic disorders
Intensive care medicine
Syndromes | Refeeding syndrome | [
"Chemistry"
] | 1,954 | [
"Metabolic disorders",
"Metabolism"
] |
5,953,552 | https://en.wikipedia.org/wiki/Solar%20transit | In astronomy, a solar transit is a movement of any object passing between the Sun and the Earth. This includes the planets Mercury and Venus (see Transit of Mercury and Transit of Venus). A solar eclipse is also a solar transit of the Moon, but technically only if it does not cover the entire disc of the Sun (an annular eclipse), as "transit" counts only objects that are smaller than what they are passing in front of. Solar transit is only one of several types of astronomical transit.
A solar transit (also called a solar outage, sometimes solar fade, sun outage, or sun fade) also occurs to communications satellites, which pass in front of the Sun for several minutes each day for several days straight for a period in the months around the equinoxes, the exact dates depending on where the satellite is in the sky relative to its earth station. Because the Sun also produces a great deal of microwave radiation in addition to sunlight, it overwhelms the microwave radio signals coming from the satellite's transponders. This enormous electromagnetic interference causes interruptions in fixed satellite services that use satellite dishes, including TV networks and radio networks, as well as VSAT and DBS.
Only downlinks from the satellite are affected; uplinks from the Earth normally are not, as the planet "shades" the Earth station when viewed from the satellite. Satellites in geosynchronous orbit are irregularly affected based on their inclination. Reception from satellites in other orbits is frequently but only momentarily affected by this, and by their nature the same signal is usually repeated or relayed on another satellite, if a tracking dish is used at all. Satellite radio and other services like GPS are not affected, as they use no receiving dish and therefore do not concentrate the interference. (GPS and certain satellite radio systems use non-geosynchronous satellites.)
Solar transit begins with only a brief degradation in signal quality for a few moments. At the same time each day for the next several days, the degradation lasts longer and gets worse, until it finally begins to improve gradually after several more days. For digital satellite services, the cliff effect will eliminate reception entirely at a given threshold. Reception is typically lost for only a few minutes on the worst day, but the beam width of the dish can affect this. Signal strength also affects this, as does the bandwidth of the signal. If the power is concentrated into a narrower band, there is a higher signal-to-noise ratio. If the same signal is spread wider, the receiver also gets a wider swath of noise, degrading reception.
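The scale of the worst-day window follows from simple geometry. The sketch below is a rough, illustrative estimate only: it assumes the common rule of thumb that a parabolic dish's half-power beamwidth is about 70°·(wavelength/diameter), treats the Sun as a disc of about 0.53°, and uses its apparent drift of roughly 0.25° per minute; the function name and the example numbers are mine, not from the article.

```python
C = 299_792_458.0  # speed of light, m/s

def outage_minutes(dish_diameter_m: float, freq_ghz: float,
                   sun_diameter_deg: float = 0.53) -> float:
    """Rough worst-day solar-outage duration for a parabolic dish.

    Rule of thumb: half-power beamwidth ~ 70 * (wavelength / diameter)
    degrees; the Sun drifts across the sky at about 0.25 deg/minute,
    and its own disc widens the interference window.
    """
    wavelength_m = C / (freq_ghz * 1e9)
    beamwidth_deg = 70.0 * wavelength_m / dish_diameter_m
    window_deg = beamwidth_deg + sun_diameter_deg
    return window_deg / 0.25

# Example: a 3 m dish receiving Ku-band at 12 GHz -> roughly 4.5 minutes.
print(f"{outage_minutes(3.0, 12.0):.1f} min")
```

Consistent with the text, a narrower beam (a larger dish or a higher frequency) shortens the outage, and reception is lost only for minutes even on the worst day.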
The exact days and times of solar transit outages, for each satellite and for each receiving point (Earth station) on the Earth, are available at various websites. For broadcast networks, the network feed must be pre-recorded, replaced with local programming, fed via another satellite in a different orbital position, or fed via another method entirely during these times.
In the Northern Hemisphere, solar transit is usually in early March and October. In the Southern Hemisphere, solar transit is usually in early September and April. The time of day varies mainly with the longitude of the satellite and receiving station, while the exact days vary mainly with the station's latitude. Stations along the equator experience solar transit right at the equinoxes, since geostationary satellites are located directly over the equator.
See also
Sun outage
Transit of Mercury
Transit of Venus
References
External links
NPR/PRSS explanation and sample charts
C/Ku diagram and charts for various satellites
Planetary science
Satellite broadcasting
Transit | Solar transit | [
"Astronomy",
"Engineering"
] | 724 | [
"Planetary science",
"Telecommunications engineering",
"Satellite broadcasting",
"Astronomical sub-disciplines"
] |
5,954,006 | https://en.wikipedia.org/wiki/Enyne%20metathesis | An enyne metathesis is an organic reaction taking place between an alkyne and an alkene with a metal carbene catalyst forming a butadiene. This reaction is a variation of olefin metathesis.
The general scheme is given by scheme 1:
When the reaction is intramolecular (in an enyne) it is called ring-closing enyne metathesis or RCEYM (scheme 2):
with Y representing oxygen or nitrogen and n an integer.
The reaction was first described in 1985 with the conversion of biphenyl 3.1 to a phenanthrene in scheme 3:
The carbene is a tungsten carbonyl; when used in stoichiometric amounts (1 equivalent) it yields 41% of the phenanthrene 3.2, and when used in catalytic amounts, phenanthrene 3.3. The stereoselectivity of this reaction is high, with the metal atom adding exclusively to one of the alkyne carbon atoms in the initial reaction step.
Reaction mechanism
The reaction mechanism for this reaction is outlined in scheme 4:
In the first catalytic cycle the alkyne group of enyne 4.1 forms a metallacyclobutene intermediate 4.3 with carbene 4.2, with R′ and R′′ any organic groups required to stabilize it. In the next step the metathesis step is reversed with formation of a new double bond and a new carbene center in 4.4. The ring-closing step takes place when this center reacts with the alkene group to form a metallacyclobutane 4.5, as in a regular olefin metathesis reaction. The butadiene group forms in the last step with expulsion of a new methylene carbene, initiating the next cycle but now with R′ = H and R′′ = H.
This is the proposed "yne-then-ene" mechanism. Evidence for an "ene-then-yne" pathway is beginning to emerge, especially for ruthenium based catalytic systems.
The driving force for this conversion is the formation of a thermodynamically stable conjugated butadiene.
Scope
Enyne metathesis reactions are accelerated by ethylene as is demonstrated in the reaction displayed in scheme 5:
In this reaction with the Hoveyda–Grubbs catalyst, ethylene converts the alkyne group to the corresponding diene group before the reaction with the alkene group.
References
Carbon-carbon bond forming reactions
Rearrangement reactions | Enyne metathesis | [
"Chemistry"
] | 522 | [
"Carbon-carbon bond forming reactions",
"Rearrangement reactions",
"Organic reactions"
] |
5,954,264 | https://en.wikipedia.org/wiki/Mock%20modular%20form | In mathematics, a mock modular form is the holomorphic part of a harmonic weak Maass form, and a mock theta function is essentially a mock modular form of weight 1/2. The first examples of mock theta functions were described by Srinivasa Ramanujan in his last 1920 letter to G. H. Hardy and in his lost notebook. Sander Zwegers discovered that adding certain non-holomorphic functions to them turns them into harmonic weak Maass forms.
History
Ramanujan's 12 January 1920 letter to Hardy listed 17 examples of functions that he called mock theta functions, and his lost notebook contained several more examples. (Ramanujan used the term "theta function" for what today would be called a modular form.) Ramanujan pointed out that they have an asymptotic expansion at the cusps, similar to that of modular forms of weight 1/2, possibly with poles at cusps, but cannot be expressed in terms of "ordinary" theta functions. He called functions with similar properties "mock theta functions". Zwegers later discovered the connection of the mock theta function with weak Maass forms.
Ramanujan associated an order to his mock theta functions, which was not clearly defined. Before the work of Zwegers, the orders of known mock theta functions included
3, 5, 6, 7, 8, 10.
Ramanujan's notion of order later turned out to correspond to the conductor of the Nebentypus character of the weight 1/2 harmonic Maass forms which admit Ramanujan's mock theta functions as their holomorphic projections.
In the next few decades, Ramanujan's mock theta functions were studied by Watson, Andrews, Selberg, Hickerson, Choi, McIntosh, and others, who proved Ramanujan's statements about them and found several more examples and identities. (Most of the "new" identities and examples were already known to Ramanujan and reappeared in his lost notebook.) In 1936, Watson found that under the action of elements of the modular group, the order 3 mock theta functions almost transform like modular forms of weight 1/2 (multiplied by suitable powers of q), except that there are "error terms" in the functional equations, usually given as explicit integrals. However, for many years there was no good definition of a mock theta function. This changed in 2001 when Zwegers discovered the relation with non-holomorphic modular forms, Lerch sums, and indefinite theta series. Zwegers showed, using the previous work of Watson and Andrews, that the mock theta functions of orders 3, 5, and 7 can be written as the sum of a weak Maass form of weight 1/2 and a function that is bounded along geodesics ending at cusps. The weak Maass form has eigenvalue 3/16 under the hyperbolic Laplacian (the same value as holomorphic modular forms of weight 1/2); however, it increases exponentially fast near cusps, so it does not satisfy the usual growth condition for Maass wave forms. Zwegers proved this result in three different ways, by relating the mock theta functions to Hecke's theta functions of indefinite lattices of dimension 2, and to Appell–Lerch sums, and to meromorphic Jacobi forms.
Zwegers's fundamental result shows that mock theta functions are the "holomorphic parts" of real analytic modular forms of weight 1/2. This allows one to extend many results about modular forms to mock theta functions. In particular, like modular forms, mock theta functions all lie in certain explicit finite-dimensional spaces, which reduces the long and hard proofs of many identities between them to routine linear algebra. For the first time it became possible to produce an infinite number of examples of mock theta functions; before this work there were only about 50 examples known (most of which were first found by Ramanujan). As further applications of Zwegers's ideas, Kathrin Bringmann and Ken Ono showed that certain q-series arising from the Rogers–Fine basic hypergeometric series are related to holomorphic parts of weight 3/2 harmonic weak Maass forms and showed that the asymptotic series for coefficients of the order 3 mock theta function f(q) studied by George Andrews and Leila Dragonette converges to the coefficients. In particular, mock theta functions have asymptotic expansions at cusps of the modular group, acting on the upper half-plane, that resemble those of modular forms of weight 1/2 with poles at the cusps.
Definition
A mock modular form will be defined as the "holomorphic part" of a harmonic weak Maass form.
Fix a weight k, usually with 2k integral.
Fix a subgroup Γ of SL2(Z) (or of the metaplectic group if k is half-integral) and a character ρ of Γ. A modular form f for this character and this group Γ transforms under elements of Γ by
\[ f\!\left(\frac{a\tau + b}{c\tau + d}\right) = \rho\begin{pmatrix} a & b \\ c & d \end{pmatrix} (c\tau + d)^{k} f(\tau). \]
A weak Maass form of weight k is a continuous function on the upper half plane that transforms like a modular form of weight k and is an eigenfunction of the weight k Laplacian operator, and is called harmonic if its eigenvalue is (1 − k/2)k/2. This is the eigenvalue of holomorphic weight k modular forms, so these are all examples of harmonic weak Maass forms. (A Maass form is a weak Maass form that decreases rapidly at cusps.)
So a harmonic weak Maass form is annihilated by the differential operator
If F is any harmonic weak Maass form then the function g given by
is holomorphic and transforms like a modular form of weight k, though it may not be holomorphic at cusps. If we can find any other function g* with the same image g, then F − g* will be holomorphic. Such a function is given by inverting the differential operator by integration; for example we can define
where the kernel involved is essentially the incomplete gamma function.
The integral converges whenever g has a zero at the cusp i∞, and the incomplete gamma function can be extended by analytic continuation, so this formula can be used to define the holomorphic part g* of F even in the case when g is meromorphic at i∞, though this requires some care if k is 1 or not integral or if n = 0. The inverse of the differential operator is far from unique as we can add any holomorphic function to g* without affecting its image, and as a result the function g* need not be invariant under the group Γ. The function h = F − g* is called the holomorphic part of F.
A mock modular form is defined to be the holomorphic part h of some harmonic weak Maass form F. So there is an isomorphism from the space of mock modular forms h to a subspace of the harmonic weak Maass forms.
The mock modular form h is holomorphic but not quite modular, while h + g* is modular but not quite holomorphic. The space of mock modular forms of weight k contains the space of nearly modular forms ("modular forms that may be meromorphic at cusps") of weight k as a subspace. The quotient is (antilinearly) isomorphic to the space of holomorphic modular forms of weight 2 − k. The weight-(2 − k) modular form g corresponding to a mock modular form h is called its shadow. It is quite common for different mock theta functions to have the same shadow. For example, the 10 mock theta functions of order 5 found by Ramanujan fall into two groups of 5, where all the functions in each group have the same shadow (up to multiplication by a constant).
Don Zagier defines a mock theta function as a rational power of q = e^{2πiτ} times a mock modular form of weight 1/2 whose shadow is a theta series of the form
\[ \sum_{n \in \mathbb{Z}} \varepsilon(n)\, n\, q^{\kappa n^{2}} \]
for a positive rational κ and an odd periodic function ε. (Any such theta series is a modular form of weight 3/2.) The rational power of q is a historical accident.
Most mock modular forms and weak Maass forms have rapid growth at cusps. It is common to impose the condition that they grow at most exponentially fast at cusps (which for mock modular forms means they are "meromorphic" at cusps). The space of mock modular forms (of given weight and group) whose growth is bounded by some fixed exponential function at cusps is finite-dimensional.
Appell–Lerch sums
Appell–Lerch sums, a generalization of Lambert series, were first studied by Paul Émile Appell and Mathias Lerch. Watson studied the order 3 mock theta functions by expressing them in terms of Appell–Lerch sums, and Zwegers used them to show that mock theta functions are essentially mock modular forms.
The Appell–Lerch series is
where
and
The modified series
where
and y = Im(τ) and
satisfies the following transformation properties
In other words, the modified Appell–Lerch series transforms like a modular form with respect to τ. Since mock theta functions can be expressed in terms of Appell–Lerch series this means that mock theta functions transform like modular forms if they have a certain non-analytic series added to them.
Indefinite theta series
George Andrews showed that several of Ramanujan's fifth order mock theta functions are equal to quotients Θ(τ)/θ(τ), where θ(τ) is a modular form of weight 1/2 and Θ(τ) is a theta function of an indefinite binary quadratic form, and Dean Hickerson proved similar results for seventh order mock theta functions. Zwegers showed how to complete the indefinite theta functions to produce real analytic modular forms, and used this to give another proof of the relation between mock theta functions and weak Maass wave forms.
Meromorphic Jacobi forms
George Andrews observed that some of Ramanujan's fifth order mock theta functions could be expressed in terms of quotients of Jacobi's theta functions. Zwegers used this idea to express mock theta functions as Fourier coefficients of meromorphic Jacobi forms.
Applications
Ruth Lawrence and Don Zagier related mock theta functions to quantum invariants of 3-manifolds.
A. M. Semikhatov, A. Taormina, and I. Yu Tipunin related mock theta functions to infinite-dimensional Lie superalgebras and two-dimensional conformal field theory.
J. Troost showed that the modular completions of mock modular forms arise as elliptic genera of conformal field theories with continuous spectrum.
Mock theta functions appear in the theory of umbral moonshine.
Atish Dabholkar, Sameer Murthy, and Don Zagier showed that mock modular forms are related to the degeneracies of quantum black holes in N=4 string theories.
Examples
Any modular form of weight k (possibly only meromorphic at cusps) is a mock modular form of weight k with shadow 0.
The quasimodular Eisenstein series E₂(τ)
of weight 2 and level 1 is a mock modular form of weight 2, with shadow a constant. This means that
E₂(τ) − 3/(πy) transforms like a modular form of weight 2 (where τ = x + iy).
The function studied by Don Zagier with Fourier coefficients that are Hurwitz class numbers H(N) of imaginary quadratic fields is a mock modular form of weight 3/2, level 4 and shadow Σ_{n∈ℤ} q^{n²}. The corresponding weak Maass wave form is
where
and y = Im(τ), q = e^{2πiτ}.
Mock theta functions are mock modular forms of weight 1/2 whose shadow is a unary theta function, multiplied by a rational power of q (for historical reasons). Before the work of Zwegers led to a general method for constructing them, most examples were given as basic hypergeometric functions, but this is largely a historical accident, and most mock theta functions have no known simple expression in terms of such functions.
The "trivial" mock theta functions are the (holomorphic) modular forms of weight , which were classified by Serre and Stark, who showed that they could all be written in terms of theta functions of 1-dimensional lattices.
The following examples use the q-Pochhammer symbols (a;q)_n, which are defined as:
\[ (a;q)_n = \prod_{k=0}^{n-1} \left(1 - a q^{k}\right). \]
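As a concrete illustration of this notation, one standard example, supplied here for orientation rather than taken from the lists below, is Ramanujan's order-3 function f(q):

\[ f(q) = \sum_{n=0}^{\infty} \frac{q^{n^2}}{(-q;q)_n^{2}} = 1 + \frac{q}{(1+q)^2} + \frac{q^4}{\bigl((1+q)(1+q^2)\bigr)^2} + \cdots \]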
Order 2
Some order 2 mock theta functions were studied by McIntosh.
The function μ was found by Ramanujan in his lost notebook.
These are related to the functions listed in the section on order-8 functions by
Order 3
Ramanujan mentioned four order-3 mock theta functions in his letter to Hardy, and listed a further three in his lost notebook, which were rediscovered by G. N. Watson. The latter proved the relations between them stated by Ramanujan and also found their transformations under elements of the modular group by expressing them as Appell–Lerch sums. Dragonette described the asymptotic expansion of their coefficients. Zwegers related them to harmonic weak Maass forms. See also the monograph by Nathan Fine.
The seven order-3 mock theta functions given by Ramanujan are
f(q), φ(q), ψ(q), and χ(q) (the four from the letter to Hardy), together with ω(q), ν(q), and ρ(q) (the three from the lost notebook).
The first four of these form a group with the same shadow (up to a constant), and so do the last three. More precisely, the functions satisfy the following relations (found by Ramanujan and proved by Watson):
Order 5
Ramanujan wrote down ten mock theta functions of order 5 in his 1920 letter to Hardy, and stated some relations between them that were proved by Watson. In his lost notebook he stated some further identities relating these functions, equivalent to the mock theta conjectures, that were proved by Hickerson. Andrews found representations of many of these functions as the quotient of an indefinite theta series by modular forms of weight 1/2.
Order 6
Ramanujan wrote down seven mock theta functions of order 6 in his lost notebook, and stated 11 identities between them, which were proved by Andrews and Hickerson. Two of Ramanujan's identities relate φ and ψ at various arguments, four of them express φ and ψ in terms of Appell–Lerch series, and the last five identities express the remaining five sixth-order mock theta functions in terms of φ and ψ. Berndt and Chan discovered two more sixth-order functions.
The order 6 mock theta functions are:
Order 7
Ramanujan gave three mock theta functions of order 7 in his 1920 letter to Hardy. They were studied by Selberg, who found asymptotic expansions for their coefficients, and by Andrews. Hickerson found representations of many of these functions as the quotients of indefinite theta series by modular forms of weight 1/2. Zwegers described their modular transformation properties.
These three mock theta functions have different shadows, so unlike the case of Ramanujan's order-3 and order-5 functions, there are no linear relations between them and ordinary modular forms.
The corresponding weak Maass forms are
where
and
is more or less the complementary error function.
Under the metaplectic group, these three functions transform according to a certain 3-dimensional representation of the metaplectic group as follows
In other words, they are the components of a level 1 vector-valued harmonic weak Maass form of weight 1/2.
Order 8
Gordon and McIntosh found eight mock theta functions of order 8. They found five linear relations involving them, and expressed four of the functions as Appell–Lerch sums, and described their transformations under the modular group.
The two functions V1 and U0 were found earlier by Ramanujan in his lost notebook.
Order 10
Ramanujan listed four order-10 mock theta functions in his lost notebook, and stated some relations between them, which were proved by Choi.
Notes
References
Further reading
External links
International Conference: Mock theta functions and applications 2009
Papers on mock theta functions by George Andrews
Papers on mock theta functions by Kathrin Bringmann
Papers on mock theta functions by Ken Ono
Papers on mock theta functions by Sander Zwegers
Modular forms
Q-analogs
Srinivasa Ramanujan | Mock modular form | [
"Mathematics"
] | 3,299 | [
"Modular forms",
"Q-analogs",
"Number theory",
"Combinatorics"
] |
5,955,500 | https://en.wikipedia.org/wiki/Straight-nine%20engine | The straight-nine engine (also referred to as an inline-nine engine; abbreviated I9 or L9) is a piston engine with nine cylinders arranged in a straight line along the crankshaft. The most common application is for large diesel engines used by ships.
Examples of straight-nine engines include:
Rolls-Royce Bergen B, C and K series
Wärtsilä RT-flex60C-B, RT-flex82C, RTA84T-D, RTA84C, RTA96C, 20, 26, 32, Wasa32LN, 38, 46 and 46F series
References
Straight-09
Nine-cylinder engines
09 | Straight-nine engine | [
"Engineering"
] | 138 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
5,956,526 | https://en.wikipedia.org/wiki/Water%20stagnation | Water stagnation or still water occurs when water stops flowing for a long period of time. Stagnant water can be a significant environmental hazard.
Dangers
Malaria and dengue are among the main dangers of still water, which can become a breeding ground for the mosquitoes that transmit these diseases.
Stagnant water can be dangerous because it provides a better incubator than running water for many kinds of infectious pathogens. Stagnant water can be contaminated with human and animal feces, particularly in deserts or other areas of low rainfall. Water stagnation for as little as six days can completely change bacterial community composition and increase cell count.
Stagnant water may be classified into the following basic, although overlapping, types:
Water body stagnation (stagnation in swamp, lake, lagoon, river, etc.)
Surface and ground water stagnation
Trapped water stagnation. The water may be trapped in human artifacts (discarded cans, plant pots, tires, dug-outs, roofs, etc.), as well as in natural containers, such as hollow tree trunks, leaf sheaths, etc.
To avoid ground and surface water stagnation, the drainage of surface and subsoil is advised. Areas with a shallow water table are more susceptible to ground water stagnation due to the lower availability of natural soil drainage.
Life that may thrive in stagnant water
Some plants prefer flowing water, while others, such as lotuses, prefer stagnant water.
Various anaerobic bacteria are commonly found in stagnant water. For this reason, pools of stagnant water have historically been used in processing hemp and some other fiber crops, as well as linden bark used for making bast shoes. Several weeks of soaking makes bast fibers easily separable due to bacterial and fermentative processes known as retting.
Denitrifying bacteria
Leptospira
Purple bacteria (both sulfur and non-sulfur)
Brain-eating amoeba
Fish
Asian swamp eel
Lepisosteidae (commonly known as the gar)
Northern snakehead
Pygmy gourami
Spotted barb
Walking catfish
Insects
Stagnant water is the favorite breeding ground for a number of insects.
Dragonfly nymphs
Fly maggots
Mosquito larvae
Nepidae (water scorpions)
Other
Algae
Biofilm
A number of species of frogs prefer stagnant water.
Some species of turtles, such as the mata mata.
See also
Anoxic waters
Eutrophication (excessive enrichment by nutrients and minerals)
Slough (hydrology)
Wetland
Residence time distribution
Water pollution
References
Environmental soil science
Water pollution
Liquid water
Aquifers
Aquatic ecology
Water supply
Waterborne diseases
Wetlands | Water stagnation | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 548 | [
"Hydrology",
"Environmental soil science",
"Wetlands",
"Water pollution",
"Aquifers",
"Ecosystems",
"Environmental engineering",
"Aquatic ecology",
"Water supply"
] |
10,096,050 | https://en.wikipedia.org/wiki/RIKEN%20MDGRAPE-3 | MDGRAPE-3 is an ultra-high performance petascale supercomputer system developed by the Riken research institute in Japan. It is a special purpose system built for molecular dynamics simulations, especially protein structure prediction.
MDGRAPE-3 consists of 201 units of 24 custom MDGRAPE-3 chips (4,824 total), plus additional dual-core Intel Xeon processors (codename "Dempsey") which serve as host machines.
In June 2006 Riken announced its completion, achieving the petaFLOPS level of floating point arithmetic performance. This was more than three times faster than the 2006 version of the IBM Blue Gene/L system, which then led the TOP500 list of supercomputers at 0.28 petaFLOPS. Because it is not a general-purpose machine capable of running the LINPACK benchmark, MDGRAPE-3 does not qualify for the TOP500 list.
See also
Supercomputing in Japan
MDGRAPE-4
References
Makoto Taiji, "MDGRAPE-3 chip: A 165-Gflops application-specific LSI for Molecular Dynamics Simulations", 16th IEEE Hot Chips Symposium, August 2004.
External links
MD-GRAPE Project@IBM
High-Performance Molecular Simulation Team@Riken
Peta Computing Institute
Tetsu Narumi's MDGRAPE page
Molecular Dynamics Machine using MDGRAPE-2
Computer-related introductions in 2006
Petascale computers
Riken
Molecular dynamics
Supercomputers
Supercomputing in Japan | RIKEN MDGRAPE-3 | [
"Physics",
"Chemistry",
"Technology"
] | 312 | [
"Supercomputers",
"Molecular physics",
"Supercomputing",
"Computational physics",
"Molecular dynamics",
"Computational chemistry"
] |
10,096,234 | https://en.wikipedia.org/wiki/Eyespot%20apparatus | The eyespot apparatus (or stigma) is a photoreceptive organelle found in the flagellate (motile) cells of green algae and other unicellular photosynthetic organisms such as euglenids. It allows the cells to sense light direction and intensity and respond to it, prompting the organism to either swim towards the light (positive phototaxis), or away from it (negative phototaxis). A related response ("photoshock" or photophobic response) occurs when cells are briefly exposed to high light intensity, causing the cell to stop, briefly swim backwards, then change swimming direction. Eyespot-mediated light perception helps the cells in finding an environment with optimal light conditions for photosynthesis. Eyespots are the simplest and most common "eyes" found in nature, composed of photoreceptors and areas of bright orange-red pigment granules. Signals relayed from the eyespot photoreceptors result in alteration of the beating pattern of the flagella, generating a phototactic response.
Microscopic structure
Under the light microscope, eyespots appear as dark, orange-reddish spots or stigmata. They get their color from carotenoid pigments contained in bodies called pigment granules. The photoreceptors are found in the plasma membrane overlaying the pigmented bodies.
The eyespot apparatus of Euglena comprises the paraflagellar body connecting the eyespot to the flagellum. In electron microscopy, the eyespot apparatus appears as a highly ordered lamellar structure formed by membranous rods in a helical arrangement.
In Chlamydomonas, the eyespot is part of the chloroplast and takes on the appearance of a membranous sandwich structure. It is assembled from chloroplast membranes (outer, inner, and thylakoid membranes) and carotenoid-filled granules overlaid by plasma membrane. The stacks of granules act as a quarter-wave plate, reflecting incoming photons back to the overlying photoreceptors, while shielding the photoreceptors from light coming from other directions. It disassembles during cell division and reforms in the daughter cells in an asymmetric fashion in relation to the cytoskeleton. This asymmetric positioning of the eyespot in the cell is essential for proper phototaxis.
Eyespot proteins
The most critical eyespot proteins are the photoreceptor proteins that sense light. The photoreceptors found in unicellular organisms fall into two main groups: flavoproteins and retinylidene proteins (rhodopsins). Flavoproteins are characterized by containing flavin molecules as chromophores, whereas retinylidene proteins contain retinal. The photoreceptor protein in Euglena is likely a flavoprotein. In contrast, Chlamydomonas phototaxis is mediated by archaeal-type rhodopsins.
Besides photoreceptor proteins, eyespots contain a large number of structural, metabolic and signaling proteins. The eyespot proteome of Chlamydomonas cells consists of roughly 200 different proteins.
Photoreception and signal transduction
The Euglena photoreceptor was identified as a blue-light-activated adenylyl cyclase. Excitation of this receptor protein results in the formation of cyclic adenosine monophosphate (cAMP) as a second messenger. Chemical signal transduction ultimately triggers changes in flagellar beat patterns and cell movement.
The archaeal-type rhodopsins of Chlamydomonas contain an all-trans retinylidene chromophore which undergoes photoisomerization to a 13-cis isomer. This activates a photoreceptor channel, leading to a change in membrane potential and cellular calcium ion concentration. Photoelectric signal transduction ultimately triggers changes in flagellar strokes and thus cell movement.
See also
Evolution of the eye
Ocelloid
References
Sensory receptors
Signal transduction
Pigments
Integral membrane proteins
Organelles
Molecular biology | Eyespot apparatus | [
"Chemistry",
"Biology"
] | 875 | [
"Biochemistry",
"Neurochemistry",
"Molecular biology",
"Signal transduction"
] |
10,101,991 | https://en.wikipedia.org/wiki/Newton%27s%20inequalities | In mathematics, the Newton inequalities are named after Isaac Newton. Suppose a1, a2, ..., an are non-negative real numbers and let \(e_k\) denote the kth elementary symmetric polynomial in a1, a2, ..., an. Then the elementary symmetric means, given by
\[ S_k = \frac{e_k}{\binom{n}{k}}, \]
satisfy the inequality
\[ S_{k-1}\,S_{k+1} \le S_k^{2}, \qquad 1 \le k \le n-1. \]
Equality holds if and only if all the numbers ai are equal.
It can be seen that S1 is the arithmetic mean, and Sn is the n-th power of the geometric mean.
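The inequality is easy to test numerically. The sketch below is a minimal check with helper names and test data of my own choosing; it builds the elementary symmetric polynomials incrementally and verifies S_{k-1}S_{k+1} ≤ S_k² up to rounding error.

```python
import math
import random

def elementary_symmetric_means(a):
    """Return [S_0, ..., S_n] with S_k = e_k(a) / C(n, k)."""
    n = len(a)
    e = [0.0] * (n + 1)
    e[0] = 1.0
    for x in a:                    # multiply the factor (1 + x t) into
        for k in range(n, 0, -1):  # the generating polynomial of the e_k
            e[k] += x * e[k - 1]
    return [e[k] / math.comb(n, k) for k in range(n + 1)]

random.seed(0)
a = [random.uniform(0.0, 10.0) for _ in range(6)]
S = elementary_symmetric_means(a)
for k in range(1, len(a)):         # Newton: S_{k-1} * S_{k+1} <= S_k**2
    assert S[k - 1] * S[k + 1] <= S[k] ** 2 + 1e-12
print("Newton's inequalities hold for", [round(x, 2) for x in a])
```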
See also
Maclaurin's inequality
References
D.S. Bernstein Matrix Mathematics: Theory, Facts, and Formulas (2009 Princeton) p. 55
Isaac Newton
Inequalities
Symmetric functions | Newton's inequalities | [
"Physics",
"Mathematics"
] | 141 | [
"Mathematical theorems",
"Algebra",
"Binary relations",
"Symmetric functions",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Symmetry"
] |
10,104,622 | https://en.wikipedia.org/wiki/Kuratowski%27s%20closure-complement%20problem | In point-set topology, Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space. The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. It gained additional exposure in Kuratowski's fundamental monograph Topologie (first published in French in 1933; the first English translation appeared in 1966) before achieving fame as a textbook exercise in John L. Kelley's 1955 classic, General Topology.
Proof
Letting S denote an arbitrary subset of a topological space, write kS for the closure of S, and cS for the complement of S. The following three identities imply that no more than 14 distinct sets are obtainable:
kkS = kS. (The closure operation is idempotent.)
ccS = S. (The complement operation is an involution.)
kckckckS = kckS. (Or equivalently kckckckcS = kckcS, using identity (2).)
The first two are trivial. The third follows from the identity kikiS = kiS, where iS is the interior of S, which is equal to the complement of the closure of the complement of S: iS = ckcS. (The operation ki is idempotent.)
A subset realizing the maximum of 14 is called a 14-set. The space of real numbers under the usual topology contains 14-sets. Here is one example:
(0,1) ∪ (1,2) ∪ {3} ∪ ([4,5] ∩ ℚ),
where (a,b) denotes an open interval and [a,b] denotes a closed interval. Let S denote this set. Then the following 14 sets are accessible:
S itself (the set shown above), together with the thirteen further sets obtained from it by alternately applying the closure and complement operations.
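The bound can also be explored by brute force. The following sketch is a minimal illustration, assuming an arbitrarily chosen finite topological space and seed set (neither is taken from the article); it exhaustively applies closure and complement and counts the distinct sets produced, which Kuratowski's theorem guarantees can never exceed 14.

```python
# A minimal brute-force check on a small finite topological space.
# The point set, topology, and seed set are illustrative choices.
X = frozenset(range(4))
opens = {frozenset(), frozenset({0}), frozenset({0, 1}), X}
closed_sets = {X - u for u in opens}

def closure(s):
    # Smallest closed set containing s (X itself always qualifies).
    return frozenset.intersection(*[c for c in closed_sets if s <= c])

def complement(s):
    return X - s

seed = frozenset({0, 2})
seen, stack = {seed}, [seed]
while stack:                      # exhaustive search over the orbit
    s = stack.pop()
    for t in (closure(s), complement(s)):
        if t not in seen:
            seen.add(t)
            stack.append(t)

print(len(seen))  # by Kuratowski's theorem, never more than 14
```

On a space this tiny the orbit is much smaller than 14; the theorem only asserts the upper bound.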
Further results
Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results have appeared since 1960, many of which have little or nothing to do with point-set topology.
The closure-complement operations yield a monoid that can be used to classify topological spaces.
References
External links
The Kuratowski Closure-Complement Theorem by B. J. Gardner and Marcel Jackson
The Kuratowski Closure-Complement Problem by Mark Bowron
Topology
Mathematical problems | Kuratowski's closure-complement problem | [
"Physics",
"Mathematics"
] | 412 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Mathematical problems"
] |
10,105,237 | https://en.wikipedia.org/wiki/Sylvester%20equation | In mathematics, in the field of control theory, a Sylvester equation is a matrix equation of the form:
\[ AX + XB = C. \]
It is named after English mathematician James Joseph Sylvester. Then given matrices A, B, and C, the problem is to find the possible matrices X that obey this equation. All matrices are assumed to have coefficients in the complex numbers. For the equation to make sense, the matrices must have appropriate sizes, for example they could all be square matrices of the same size. But more generally, A and B must be square matrices of sizes n and m respectively, and then X and C both have n rows and m columns.
A Sylvester equation has a unique solution for X exactly when there are no common eigenvalues of A and −B.
More generally, the equation AX + XB = C has been considered as an equation of bounded operators on a (possibly infinite-dimensional) Banach space. In this case, the condition for the uniqueness of a solution X is almost the same: There exists a unique solution X exactly when the spectra of A and −B are disjoint.
Existence and uniqueness of the solutions
Using the Kronecker product notation and the vectorization operator \(\operatorname{vec}\), we can rewrite Sylvester's equation in the form
\[ (I_m \otimes A + B^{\mathsf{T}} \otimes I_n)\, \operatorname{vec} X = \operatorname{vec} C, \]
where \(A\) is of dimension \(n \times n\), \(B\) is of dimension \(m \times m\), \(X\) and \(C\) are of dimension \(n \times m\), and \(I_k\) is the \(k \times k\) identity matrix. In this form, the equation can be seen as a linear system of dimension \(nm \times nm\).
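A minimal NumPy sketch of this vectorized form (the sizes and random data are arbitrary illustrative choices; note that vec stacks columns, i.e. Fortran order in NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# (I_m kron A + B^T kron I_n) vec(X) = vec(C), with column-stacking vec.
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((n, m), order="F")

assert np.allclose(A @ X + X @ B, C)  # X solves the Sylvester equation
```

This direct approach costs on the order of (nm)³ operations and is only a demonstration; the dedicated solvers discussed below are far cheaper.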
Theorem.
Given matrices \(A \in \mathbb{C}^{n \times n}\) and \(B \in \mathbb{C}^{m \times m}\), the Sylvester equation \(AX + XB = C\) has a unique solution \(X \in \mathbb{C}^{n \times m}\) for any \(C\) if and only if \(A\) and \(-B\) do not share any eigenvalue.
Proof.
The equation \(AX + XB = C\) is a linear system with \(nm\) unknowns and the same number of equations. Hence it is uniquely solvable for any given \(C\) if and only if the homogeneous equation
\[ AX + XB = 0 \]
admits only the trivial solution \(X = 0\).
(i) Assume that \(A\) and \(-B\) do not share any eigenvalue. Let \(X\) be a solution to the abovementioned homogeneous equation. Then \(AX = X(-B)\), which can be lifted to
\[ A^{k} X = X(-B)^{k} \]
for each \(k \ge 0\)
by mathematical induction. Consequently,
\[ p(A)\, X = X\, p(-B) \]
for any polynomial \(p\). In particular, let \(p\) be the characteristic polynomial of \(A\). Then \(p(A) = 0\)
due to the Cayley–Hamilton theorem; meanwhile, the spectral mapping theorem tells us
\[ \sigma(p(-B)) = p(\sigma(-B)), \]
where \(\sigma(\cdot)\) denotes the spectrum of a matrix. Since \(A\) and \(-B\) do not share any eigenvalue, \(p(\sigma(-B))\) does not contain zero, and hence \(p(-B)\) is nonsingular. Thus \(X = 0\), as desired. This proves the "if" part of the theorem.
(ii) Now assume that \(A\) and \(-B\) share an eigenvalue \(\lambda\). Let \(u\) be a corresponding right eigenvector for \(A\), \(v\) be a corresponding left eigenvector for \(-B\), and set \(X = u v^{*}\). Then \(X \ne 0\), and
\[ AX + XB = \lambda u v^{*} - \lambda u v^{*} = 0. \]
Hence \(X\) is a nontrivial solution to the aforesaid homogeneous equation, justifying the "only if" part of the theorem. Q.E.D.
As an alternative to the spectral mapping theorem, the nonsingularity of \(p(-B)\) in part (i) of the proof can also be demonstrated by Bézout's identity for coprime polynomials.
Let \(q\) be the characteristic polynomial of \(-B\). Since \(A\) and \(-B\) do not share any eigenvalue, \(p\) and \(q\) are coprime. Hence there exist polynomials \(f\) and \(g\) such that \(f p + g q \equiv 1\). By the Cayley–Hamilton theorem, \(q(-B) = 0\). Thus \(f(-B)\, p(-B) = I\), implying that \(p(-B)\) is nonsingular.
The theorem remains true for real matrices with the caveat that one considers their complex eigenvalues. The proof for the "if" part is still applicable; for the "only if" part, note that both the real part and the imaginary part of a complex solution \(X\) satisfy the homogeneous equation \(AX + XB = 0\), and they cannot be zero simultaneously.
Roth's removal rule
Given two square complex matrices A and B, of size n and m, and a matrix C of size n by m, then one can ask when the following two square matrices of size n + m are similar to each other: \(\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}\) and \(\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}\). The answer is that these two matrices are similar exactly when there exists a matrix X such that AX − XB = C. In other words, X is a solution to a Sylvester equation. This is known as Roth's removal rule.
One easily checks one direction: If AX − XB = C then
\[ \begin{pmatrix} I & -X \\ 0 & I \end{pmatrix} \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} I & X \\ 0 & I \end{pmatrix} = \begin{pmatrix} A & C \\ 0 & B \end{pmatrix}. \]
Roth's removal rule does not generalize to infinite-dimensional bounded operators on a Banach space. Nevertheless, Roth's removal rule generalizes to the systems of Sylvester equations.
Numerical solutions
A classical algorithm for the numerical solution of the Sylvester equation is the Bartels–Stewart algorithm, which consists of transforming \(A\) and \(B\) into Schur form by a QR algorithm, and then solving the resulting triangular system via back-substitution. This algorithm, whose computational cost is \(\mathcal{O}(n^3)\) arithmetical operations, is used, among others, by LAPACK and the lyap function in GNU Octave. See also the sylvester function in that language. In some specific image processing applications, the derived Sylvester equation has a closed form solution.
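For instance, SciPy wraps a Bartels–Stewart-style solver as scipy.linalg.solve_sylvester; the snippet below (random data, arbitrary sizes) simply checks the residual:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
n, m = 50, 40
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# Schur-reduce A and B, then back-substitute on the triangular system.
X = solve_sylvester(A, B, C)
assert np.allclose(A @ X + X @ B, C)
```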
See also
Lyapunov equation, a special case of the Sylvester equation
Algebraic Riccati equation
Notes
References
External links
Online solver for arbitrary sized matrices.
Mathematica function to solve the Sylvester equation
MATLAB function to solve the Sylvester equation
Matrices
Control theory | Sylvester equation | [
"Mathematics"
] | 1,010 | [
"Applied mathematics",
"Control theory",
"Mathematical objects",
"Matrices (mathematics)",
"Dynamical systems"
] |
10,106,425 | https://en.wikipedia.org/wiki/Orr%E2%80%93Sommerfeld%20equation | The Orr–Sommerfeld equation, in fluid dynamics, is an eigenvalue equation describing the linear two-dimensional modes of disturbance to a viscous parallel flow. The solution to the Navier–Stokes equations for a parallel, laminar flow can become unstable if certain conditions on the flow are satisfied, and the Orr–Sommerfeld equation determines precisely what the conditions for hydrodynamic stability are.
The equation is named after William McFadden Orr and Arnold Sommerfeld, who derived it at the beginning of the 20th century.
Formulation
The equation is derived by solving a linearized version of the Navier–Stokes equation for the perturbation velocity field
\(\mathbf{u} = (U(z) + u'(x, z, t),\, 0,\, w'(x, z, t)),\)
where \((U(z), 0, 0)\) is the unperturbed or basic flow. The perturbation velocity has the wave-like solution \(u' \propto \exp(i\alpha(x - ct))\) (real part understood). Using this knowledge, and the streamfunction representation for the flow, the following dimensional form of the Orr–Sommerfeld equation is obtained:
\[ \frac{\mu}{i\alpha\rho}\left(\frac{d^{2}}{dz^{2}} - \alpha^{2}\right)^{2}\varphi = (U - c)\left(\frac{d^{2}}{dz^{2}} - \alpha^{2}\right)\varphi - U''\varphi, \]
where \(\mu\) is the dynamic viscosity of the fluid, \(\rho\) is its density, and \(\varphi\) is the potential or stream function. In the case of zero viscosity (\(\mu = 0\)), the equation reduces to Rayleigh's equation. The equation can be written in non-dimensional form by measuring velocities according to a scale set by some characteristic velocity \(U_0\), and by measuring lengths according to channel depth \(h\). Then the equation takes the form
\[ \frac{1}{i\alpha\,\mathrm{Re}}\left(\frac{d^{2}}{dz^{2}} - \alpha^{2}\right)^{2}\varphi = (U - c)\left(\frac{d^{2}}{dz^{2}} - \alpha^{2}\right)\varphi - U''\varphi, \]
where
\[ \mathrm{Re} = \frac{\rho U_0 h}{\mu} \]
is the Reynolds number of the base flow. The relevant boundary conditions are the no-slip boundary conditions at the channel top and bottom, \(z = z_1\) and \(z = z_2\):
\(\alpha\varphi = \dfrac{d\varphi}{dz} = 0\) at \(z = z_1\) and \(z = z_2\), in the case where \(\varphi\) is the potential function;
or:
\(\alpha\varphi = \dfrac{d\varphi}{dz} = 0\) at \(z = z_1\) and \(z = z_2\), in the case where \(\varphi\) is the stream function.
The eigenvalue parameter of the problem is \(c\) and the eigenvector is \(\varphi\). If the imaginary part of the wave speed \(c\) is positive, then the base flow is unstable, and the small perturbation introduced to the system is amplified in time.
The equation can also be derived for three-dimensional disturbances of the form
\(u' \propto \exp(i(\alpha x + \beta y - \alpha c t))\)
(real part understood), with \(\beta\) the spanwise wavenumber. Any solution to the three-dimensional equation can be mapped back to a more unstable (lower Reynolds number) solution of the two-dimensional equation above due to Squire's theorem. It is therefore sufficient to study only two-dimensional disturbances when dealing with the linear stability of a parallel flow.
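Squire's transformation can be stated compactly; the sketch below encodes the standard mapping (supplied from general knowledge rather than the text above): a 3D mode (α, β) at Reynolds number Re corresponds to a 2D mode with wavenumber √(α²+β²) at the reduced Reynolds number Re·α/√(α²+β²).

```python
import math

def squire_equivalent(alpha: float, beta: float, reynolds: float):
    """Map a 3D disturbance (streamwise wavenumber alpha, spanwise beta)
    to the equivalent 2D disturbance of Squire's theorem: the product
    alpha * Re is held fixed, so the 2D Reynolds number is never larger."""
    alpha_2d = math.hypot(alpha, beta)
    return alpha_2d, reynolds * alpha / alpha_2d

# An oblique wave at Re = 10000 maps to a 2D wave at Re ~ 7071:
print(squire_equivalent(1.0, 1.0, 10000.0))
```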
Solutions
For all but the simplest of velocity profiles , numerical or asymptotic methods are required to calculate solutions. Some typical flow profiles are discussed below. In general, the spectrum of the equation is discrete and infinite for a bounded flow, while for unbounded flows (such as boundary-layer flow), the spectrum contains both continuous and discrete parts.
For plane Poiseuille flow, it has been shown that the flow is unstable (i.e. one or more eigenvalues \(c\) has a positive imaginary part) for some \(\alpha\) when \(\mathrm{Re} > \mathrm{Re}_c = 5772.22\), the neutrally stable mode at \(\mathrm{Re} = \mathrm{Re}_c\) having \(\alpha_c = 1.02056\), \(c_r = 0.264002\). To see the stability properties of the system, it is customary to plot a dispersion curve, that is, a plot of the growth rate \(\operatorname{Im}(\alpha c)\) as a function of the wavenumber \(\alpha\).
The first figure shows the spectrum of the Orr–Sommerfeld equation at the critical values listed above. This is a plot of the eigenvalues (in the form \(\lambda = -i\alpha c\)) in the complex plane. The rightmost eigenvalue is the most unstable one. At the critical values of Reynolds number and wavenumber, the rightmost eigenvalue is exactly zero. For higher (lower) values of Reynolds number, the rightmost eigenvalue shifts into the positive (negative) half of the complex plane. Then, a fuller picture of the stability properties is given by a plot exhibiting the functional dependence of this eigenvalue; this is shown in the second figure. The third figure shows the neutral stability curve which divides the \((\alpha, \mathrm{Re})\)-plane into the region where the flow is linearly stable and the region where the flow is linearly unstable.
On the other hand, the spectrum of eigenvalues for Couette flow indicates stability at all Reynolds numbers. However, in experiments, Couette flow is found to be unstable to small, but finite, perturbations for which the linear theory and the Orr–Sommerfeld equation do not apply. It has been argued that the non-normality of the eigenvalue problem associated with Couette (and indeed, Poiseuille) flow might explain that observed instability. That is, the eigenfunctions of the Orr–Sommerfeld operator are complete but non-orthogonal. Then, the energy of the disturbance contains contributions from all eigenfunctions of the Orr–Sommerfeld equation. Even if the energy associated with each eigenvalue considered separately is decaying exponentially in time (as predicted by the Orr–Sommerfeld analysis for the Couette flow), the cross terms arising from the non-orthogonality of the eigenvalues can increase transiently. Thus, the total energy increases transiently (before tending asymptotically to zero). The argument is that if the magnitude of this transient growth is sufficiently large, it destabilizes the laminar flow; however, this argument has not been universally accepted.
A nonlinear theory explaining transition has also been proposed. Although that theory does include linear transient growth, the focus is on 3D nonlinear processes that are strongly suspected to underlie transition to turbulence in shear flows. The theory has led to the construction of so-called complete 3D steady states, traveling waves and time-periodic solutions of the Navier-Stokes equations that capture many of the key features of transition and coherent structures observed in the near wall region of turbulent shear flows. Even though "solution" usually implies the existence of an analytical result, it is common practice in fluid mechanics to refer to numerical results as "solutions", regardless of whether the approximated solutions satisfy the Navier-Stokes equations in a mathematically satisfactory way or not. It is postulated that transition to turbulence involves the dynamic state of the fluid evolving from one solution to the next. The theory is thus predicated upon the actual existence of such solutions (many of which have yet to be observed in a physical experimental setup). This relaxation on the requirement of exact solutions allows a great deal of flexibility, since exact solutions are extremely difficult to obtain (in contrast to numerical solutions), at the expense of rigor and (possibly) correctness. Thus, even though not as rigorous as previous approaches to transition, it has gained immense popularity.
An extension of the Orr–Sommerfeld equation to the flow in porous media has been recently suggested.
Mathematical methods for free-surface flows
For Couette flow, it is possible to make mathematical progress in the solution of the Orr–Sommerfeld equation. In this section, a demonstration of this method is given for the case of free-surface flow, that is, when the upper lid of the channel is replaced by a free surface. Note first of all that it is necessary to modify the upper boundary conditions to take account of the free surface. In non-dimensional form, these conditions comprise two relations imposed at the free surface: the first is the statement of continuity of tangential stress, while the second relates the normal stress to the surface tension and involves the Froude and Weber numbers, Fr and We, respectively.
For Couette flow, \(U(z) = z\), the four linearly independent solutions to the non-dimensional Orr–Sommerfeld equation can be expressed in closed form in terms of Ai, the Airy function of the first kind. Substitution of the superposition solution into the four boundary conditions gives four equations in the four unknown constants. For the equations to have a non-trivial solution, a determinant condition
must be satisfied. This is a single equation in the unknown c, which can be solved numerically or by asymptotic methods. It can be shown that for a range of wavenumbers and for sufficiently large Reynolds numbers, the growth rate is positive.
See also
Gravitational comet-asteroid forcing events
Gravity wave
Lee waves
Rogue wave
References
Further reading
Fluid dynamics
Equations of fluid dynamics
Fluid dynamic instabilities
"Physics",
"Chemistry",
"Engineering"
] | 1,675 | [
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamic instabilities",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
10,107,406 | https://en.wikipedia.org/wiki/Double%20inverted%20pendulum | A double inverted pendulum is the combination of the inverted pendulum and the double pendulum. The double inverted pendulum is unstable, meaning that it will fall down unless it is controlled in some way. The two main methods of controlling a double inverted pendulum are moving the base, as with the inverted pendulum, or by applying a torque at the pivot point between the two pendulums.
See also
Inverted pendulum
Inertia wheel pendulum
Furuta pendulum
Tuned mass damper
References
External links
A dynamical simulation of a double inverted pendulum on an oscillatory base
Pendulums
Control engineering | Double inverted pendulum | [
"Physics",
"Engineering"
] | 115 | [
"Control engineering",
"Classical mechanics stubs",
"Classical mechanics"
] |
11,542,343 | https://en.wikipedia.org/wiki/Transcription%20factor%20II%20A | Transcription factor TFIIA is a nuclear protein involved in the RNA polymerase II-dependent transcription of DNA. TFIIA is one of several general (basal) transcription factors (GTFs) that are required for all transcription events that use RNA polymerase II. Other GTFs include TFIID, a complex composed of the TATA binding protein TBP and TBP-associated factors (TAFs), as well as the factors TFIIB, TFIIE, TFIIF, and TFIIH. Together, these factors are responsible for promoter recognition and the formation of a transcription preinitiation complex (PIC) capable of initiating RNA synthesis from a DNA template.
Functions
TFIIA interacts with the TBP subunit of TFIID and aids in the binding of TBP to TATA-box containing promoter DNA. Interaction of TFIIA with TBP facilitates formation of and stabilizes the preinitiation complex. Interaction of TFIIA with TBP also results in the exclusion of negative (repressive) factors that might otherwise bind to TBP and interfere with PIC formation. TFIIA also acts as a coactivator for some transcriptional activators, assisting with their ability to increase, or activate, transcription. The requirement for TFIIA in vitro transcription systems has been variable, and it can be considered either as a GTF and/or a loosely associated TAF-like coactivator. Genetic analysis in yeast has shown that TFIIA is essential for viability.
Structure
TFIIA is a heterodimer with two subunits: one large unprocessed (subunit 1, or alpha/beta; gene name GTF2A1) and one small (subunit 2, or gamma; gene name GTF2A2). It was originally believed to be a heterotrimer of an alpha (p35), a beta (p19) and a gamma subunit (p12). In humans, the sizes of the encoded proteins are approximately 55 kD and 12 kD. Both genes are present in species ranging from humans to yeast, and their protein products interact to form a complex composed of a beta barrel domain and an alpha helical bundle domain. It is the N-terminal and C-terminal regions of the large subunit that participate in interactions with the small subunit. These regions are separated by another domain whose sequence is always present in large subunits from various species but whose size varies and whose sequence is poorly conserved. A second gene encoding a large TFIIA subunit has been found in some higher eukaryotes. This gene, ALF/TFIIAtau (gene name GTF2A1L), is expressed only in oocytes and spermatocytes, suggesting it has a TFIIA-like regulatory role for gene expression only in germ cells.
References
External links
Gene expression
Transcription factors | Transcription factor II A | [
"Chemistry",
"Biology"
] | 574 | [
"Gene expression",
"Signal transduction",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Transcription factors"
] |
709,137 | https://en.wikipedia.org/wiki/Cytochrome%20P450 | Cytochromes P450 (P450s or CYPs) are a superfamily of enzymes containing heme as a cofactor that mostly, but not exclusively, function as monooxygenases. However, they are not omnipresent; for example, they have not been found in Escherichia coli. In mammals, these enzymes oxidize steroids, fatty acids, xenobiotics, and participate in many biosyntheses. By hydroxylation, CYP450 enzymes convert xenobiotics into hydrophilic derivatives, which are more readily excreted.
P450s are, in general, the terminal oxidase enzymes in electron transfer chains, broadly categorized as P450-containing systems. The term "P450" is derived from the spectrophotometric peak at the wavelength of the absorption maximum of the enzyme (450 nm) when it is in the reduced state and complexed with carbon monoxide. Most P450s require a protein partner to deliver one or more electrons to reduce the iron (and eventually molecular oxygen).
Nomenclature
Genes encoding P450 enzymes, and the enzymes themselves, are designated with the root symbol CYP for the superfamily, followed by a number indicating the gene family, a capital letter indicating the subfamily, and another numeral for the individual gene. The convention is to italicize the name when referring to the gene. For example, CYP2E1 is the gene that encodes the enzyme CYP2E1, one of the enzymes involved in paracetamol (acetaminophen) metabolism. The CYP nomenclature is the official naming convention, although the form CYP450 is occasionally used synonymously. Such names should not be used, because according to the nomenclature convention they would denote a P450 in family number 450. However, some gene or enzyme names for P450s are also referred to by historical names (e.g. P450BM3 for CYP102A1) or functional names, denoting the catalytic activity and the name of the compound used as substrate. Examples include CYP5A1, thromboxane A2 synthase, abbreviated to TBXAS1 (ThromBoXane A2 Synthase 1), and CYP51A1, lanosterol 14-α-demethylase, sometimes unofficially abbreviated to LDM according to its substrate (Lanosterol) and activity (DeMethylation).
The current nomenclature guidelines suggest that members of new CYP families share at least 40% amino-acid identity, while members of subfamilies must share at least 55% amino-acid identity. Nomenclature committees assign and track both base gene names (Cytochrome P450 Homepage ) and allele names (CYP Allele Nomenclature Committee).
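Because the convention is rigidly structured, symbols can be parsed mechanically. The following hypothetical helper (the regex and function are mine, not an official tool) splits a symbol into the family, subfamily, and gene fields described above:

```python
import re

CYP_NAME = re.compile(r"^CYP(?P<family>\d+)(?P<subfamily>[A-Z])(?P<gene>\d+)$")

def parse_cyp(symbol: str):
    """Split a CYP symbol into (family, subfamily, gene), e.g. CYP2E1."""
    m = CYP_NAME.match(symbol)
    if m is None:
        raise ValueError(f"not a CYP symbol: {symbol!r}")
    return m.group("family"), m.group("subfamily"), m.group("gene")

print(parse_cyp("CYP2E1"))    # ('2', 'E', '1')
print(parse_cyp("CYP102A1"))  # ('102', 'A', '1')
```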
Classification
Based on the nature of the electron transfer proteins, P450s can be classified into several groups:
Microsomal P450 systems in which electrons are transferred from NADPH via cytochrome P450 reductase (variously CPR, POR, or CYPOR). Cytochrome b5 (cyb5) can also contribute reducing power to this system after being reduced by cytochrome b5 reductase (CYB5R).
Mitochondrial P450 systems which employ adrenodoxin reductase and adrenodoxin to transfer electrons from NADPH to P450.
Bacterial P450 systems which employ a ferredoxin reductase and a ferredoxin to transfer electrons to P450.
CYB5R/cyb5/P450 systems in which both electrons required by the CYP come from cytochrome b5.
FMN/Fd/P450 systems originally found in Rhodococcus species, in which a FMN-domain-containing reductase is fused to the CYP.
P450 only systems which do not require external reducing power. Notable ones include thromboxane synthase (CYP5), prostacyclin synthase (CYP8), and CYP74A (allene oxide synthase).
The most common reaction catalyzed by cytochromes P450 is a monooxygenase reaction, e.g., insertion of one atom of oxygen into the aliphatic position of an organic substrate (RH), while the other oxygen atom is reduced to water:
RH + O2 + NADPH + H+ → ROH + H2O + NADP+
Related hydroxylation enzymes
Many hydroxylation reactions (insertion of hydroxyl groups) use CYP enzymes, but many other hydroxylases exist. Alpha-ketoglutarate-dependent hydroxylases also rely on an Fe=O intermediate but lack hemes. Methane monooxygenases, which convert methane to methanol, are non-heme iron- and iron-copper-based enzymes.
Mechanism
Structure
The active site of cytochrome P450 contains a heme-iron center. The iron is tethered to the protein via a cysteine thiolate ligand. This cysteine and several flanking residues are highly conserved in known P450s, and have the formal PROSITE signature consensus pattern [FW] - [SGNH] - x - [GD] - {F} - [RKHPT] - {P} - C - [LIVMFAP] - [GAD]. In general, the P450 catalytic cycle proceeds as follows:
Catalytic cycle
Substrate binds in proximity to the heme group, on the side opposite to the axial thiolate. Substrate binding induces a change in the conformation of the active site, often displacing a water molecule from the distal axial coordination position of the heme iron, and changing the state of the heme iron from low-spin to high-spin.
Substrate binding induces electron transfer from NAD(P)H via cytochrome P450 reductase or another associated reductase, converting Fe(III) to Fe(II).
Molecular oxygen binds to the resulting ferrous heme center at the distal axial coordination position, initially giving a dioxygen adduct similar to oxy-myoglobin.
A second electron is transferred, from either cytochrome P450 reductase, ferredoxins, or cytochrome b5, reducing the Fe-O2 adduct to give a short-lived peroxo state.
The peroxo group formed in step 4 is rapidly protonated twice, releasing one molecule of water and forming the highly reactive species referred to as P450 Compound 1 (or just Compound I). This highly reactive intermediate was isolated in 2010: P450 Compound 1 is an iron(IV) oxo (or ferryl) species with an additional oxidizing equivalent delocalized over the porphyrin and thiolate ligands. Evidence for the alternative perferryl iron(V)-oxo is lacking.
Depending on the substrate and enzyme involved, P450 enzymes can catalyze any of a wide variety of reactions. A hypothetical hydroxylation is illustrated. After the hydroxylated product has been released from the active site, the enzyme returns to its original state, with a water molecule returning to occupy the distal coordination position of the iron nucleus.
An alternative route for mono-oxygenation is via the "peroxide shunt" (path "S" in figure). This pathway entails oxidation of the ferric-substrate complex with oxygen-atom donors such as peroxides and hypochlorites. A hypothetical peroxide "XOOH" is shown in the diagram.
Mechanistic details, including the oxygen rebound mechanism, have been investigated with synthetic analogues, consisting of iron oxo heme complexes.
Spectroscopy
Binding of substrate is reflected in the spectral properties of the enzyme, with an increase in absorbance at 390 nm and a decrease at 420 nm. This can be measured by difference spectroscopies and is referred to as the "type I" difference spectrum (see inset graph in figure). Some substrates cause an opposite change in spectral properties, a "reverse type I" spectrum, by processes that are as yet unclear. Inhibitors and certain substrates that bind directly to the heme iron give rise to the type II difference spectrum, with a maximum at 430 nm and a minimum at 390 nm (see inset graph in figure). If no reducing equivalents are available, this complex may remain stable, allowing the degree of binding to be determined from absorbance measurements in vitro
If carbon monoxide (CO) binds to reduced P450 (path "C" in figure), the catalytic cycle is interrupted. This reaction yields the classic CO difference spectrum with a maximum at 450 nm. However, the interruptive and inhibitory effects of CO vary among different CYPs; the CYP3A family, for example, is relatively less affected.
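A difference spectrum of the kind described above is simply the absorbance of the substrate-bound enzyme minus that of the free enzyme, across wavelengths. The sketch below illustrates the arithmetic with invented Gaussian absorbance bands; the band positions follow the 390/420 nm values quoted above, but the widths and heights are illustrative, not measured P450 data:

```python
import numpy as np

def gaussian_band(wavelengths, center, width, height):
    """A crude model of an absorbance band as a Gaussian peak."""
    return height * np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

wavelengths = np.arange(350, 500)  # nm

# Substrate-free enzyme: low-spin heme, Soret band near 420 nm.
a_free = gaussian_band(wavelengths, 420.0, 15.0, 1.0)
# Substrate-bound enzyme: high-spin heme, band shifted toward 390 nm.
a_bound = gaussian_band(wavelengths, 390.0, 15.0, 1.0)

# "Type I" difference spectrum: bound minus free.
difference = a_bound - a_free
print("peak at", wavelengths[np.argmax(difference)], "nm")    # ~390 nm
print("trough at", wavelengths[np.argmin(difference)], "nm")  # ~420 nm
```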
See also
Cytochrome P450 oxidoreductase deficiency
Cytochrome P450 engineering
Further reading
References
EC 1.14
Pharmacokinetics
Metabolism
Integral membrane proteins | Cytochrome P450 | [
"Chemistry",
"Biology"
] | 1,881 | [
"Pharmacology",
"Pharmacokinetics",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
709,237 | https://en.wikipedia.org/wiki/Royal%20Academy%20of%20Engineering | The Royal Academy of Engineering (RAEng) is the United Kingdom's national academy of engineering.
The Academy was founded in June 1976 as the Fellowship of Engineering with support from Prince Philip, Duke of Edinburgh, who became the first senior fellow and remained so until his death. The Fellowship was incorporated and granted a royal charter on 17 May 1983 and became the Royal Academy of Engineering on 16 March 1992. It is governed according to the charter and associated statutes and regulations (as amended from time to time). In June 2024 His Majesty the King became Patron of the Academy.
History
Conceived in the late 1960s, during the Apollo space program and Harold Wilson's espousal of the "white heat of technology", the Fellowship of Engineering was born in the year of Concorde's first commercial flight.
The Fellowship's first meeting, at Buckingham Palace on 11 June 1976, enrolled 126 of the UK's leading engineers. The first fellows included Air Commodore Sir Frank Whittle, the jet engine developer, the structural engineer Sir Ove Arup, radar pioneer Sir George G. MacFarlane, the inventor of the bouncing bomb, Sir Barnes Wallis, Francis Thomas Bacon, the inventor of the alkaline fuel cell, and father of the UK computer industry Sir Maurice Wilkes. The Fellowship's first president, Christopher Hinton, had driven the UK's supremacy in nuclear power.
The Fellowship focused on championing excellence in all fields of engineering. Activities began in earnest in the mid-1970s with the Distinction lecture series, now known as the Hinton lectures. The Fellowship was asked to advise the Department of Industry for the first time, and the Academy became host and presenter of the MacRobert Award.
In the 1980s, the Fellowship received its own royal charter along with its first government grant-in-aid. At the same time, it also received significant industrial funding, initiated its research programme to build bridges between academia and industry, and opened its doors to international and honorary fellows.
In 1990, the Academy launched its first major initiative in education, Engineering Education Continuum, which evolved into the BEST Programme and Shape the Future and Tomorrow's Engineers.
The Academy's increasing level of influence – in policy, research and education – was recognized when it was granted a royal title and became The Royal Academy of Engineering in 1992. In 2014 the academy launched its annual Africa Prize.
The Academy's current logo is inspired by the Neolithic hand axe, humans' first technological advance, which was taken to be a symbol appropriate to the Academy, supposedly representative of the ever-changing relationship between humanity and technology.
Location
The Academy's premises, 3–4 Carlton House Terrace, are in a Grade I listed building overlooking St James's Park, designed by architect John Nash and owned by the Crown Estate. The Academy shares the Terrace with two of its sister academies, the British Academy and the Royal Society as well as other institutes.
The building was renamed Prince Philip House, after renovation works were completed in 2012.
Activities
The Academy is instrumental in two policy alliances set up in 2009 to provide coherent advice on engineering education and policy across the profession: Education for Engineering and Engineering the Future.
The Academy is one of four agencies that receive funding from the UK's Department for Business, Innovation and Skills for activities that support government policy on public understanding of science and engineering.
As part of its programme to communicate the benefits and value of engineering to society, the Academy publishes a quarterly magazine, Ingenia. The Academy says that Ingenia is written for a non-specialist audience and is "aimed at all those with an interest in engineering, whether working in business and industry, government, academia or the financial community". The Academy also makes Ingenia available to A-Level students in 3,000 schools in the UK.
Presidents
The president of the Royal Academy of Engineering, the elected officer of the Academy, presides over meetings of the council. The president is elected for a single term of not more than five years.
Fellows
The Fellowship currently includes over 1,500 engineers from all sectors and disciplines of engineering. The fellows, distinguished by the title Fellow of The Royal Academy of Engineering and the post-nominal designation FREng, lead, guide, and contribute to the Academy's work and provide expertise.
The Royal Fellows of the Academy are the Duke of Kent and the Princess Royal.
Diversity
The Academy strives to ensure that the pool of candidates for election to the Fellowship better reflects the diverse make-up of society as a whole. It set up the Proactive Membership Committee in 2008 to identify and support the nomination of candidates from underrepresented areas, with the aim of boosting the number of women candidates, engineers from industry and small and medium enterprises, those from emerging technologies and ethnically diverse backgrounds.
Awards and prizes
With the support of the Worshipful Company of Engineers, the Academy manages the annual Royal Academy of Engineering MacRobert Award, the premier prize for UK innovation in engineering. First presented in 1969, the award honours the winning company with a gold medal and the team members with a prize of £50,000.
The Academy oversees the awarding of the Queen Elizabeth Prize for Engineering (QEPrize). The QEPrize is an international, £1 million engineering prize that "rewards and celebrates the engineers responsible for a ground-breaking innovation that has been of global benefit to humanity". The objective of the prize is to "raise the public profile of engineering and to inspire young people to become engineers".
The Academy's Sir George Macfarlane Medal is an annual award that "recognizes a UK engineer who has demonstrated excellence in the early stage of their career".
The President's Medal
The Prince Philip Medal, named after Prince Philip, Duke of Edinburgh, is "awarded periodically to an engineer of any nationality who has made an exceptional contribution to engineering as a whole through practice, management or education."
The Africa Prize for Engineering Innovation has been awarded annually since 2014. In its first ten years it provided a million pounds of finance in addition to assistance with legal, IT and networking matters. The 2015 prize was won by Tanzanian Askwar Hilonga, who had devised a novel water filter.
Chair in Emerging Technologies, a scheme providing long-term support to visionary researchers in developing technologies with potential to deliver benefits to the United Kingdom.
Engineering Leadership Scholarship, awarded to undergraduate engineering students in the UK with high potential for leadership in the sector. Awardees undertake an accelerated personal development programme to help them become future leaders.
See also
Engineering Development Trust
Engineering
Glossary of engineering
Royal Academy of Engineering International Medal
UK Young Academy
References
Scientific organizations established in 1976
Engineering societies based in the United Kingdom
United Kingdom
1976 establishments in the United Kingdom
Organisations based in the City of Westminster | Royal Academy of Engineering | [
"Engineering"
] | 1,358 | [
"Royal Academy of Engineering",
"National academies of engineering"
] |
709,427 | https://en.wikipedia.org/wiki/Micro%20black%20hole | Micro black holes, also called mini black holes or quantum mechanical black holes, are hypothetical tiny (less than one solar mass) black holes, for which quantum mechanical effects play an important role. The concept that black holes may exist that are smaller than stellar mass was introduced in 1971 by Stephen Hawking.
It is possible that such black holes were created in the high-density environment of the early Universe (or Big Bang), or possibly through subsequent phase transitions (referred to as primordial black holes). They might be observed by astrophysicists through the particles they are expected to emit by Hawking radiation.
Some hypotheses involving additional space dimensions predict that micro black holes could be formed at energies as low as the TeV range, which are available in particle accelerators such as the Large Hadron Collider. Popular concerns have then been raised over end-of-the-world scenarios (see Safety of particle collisions at the Large Hadron Collider). However, such quantum black holes would instantly evaporate, either totally or leaving only a very weakly interacting residue. Beside the theoretical arguments, cosmic rays hitting the Earth do not produce any damage, although they reach energies in the range of hundreds of TeV.
Minimum mass of a black hole
In an early speculation, Stephen Hawking conjectured that a black hole would not form with a mass below about 10⁻⁸ kg (roughly the Planck mass). To make a black hole, one must concentrate mass or energy sufficiently that the escape velocity from the region in which it is concentrated exceeds the speed of light.
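One common heuristic behind this threshold is to ask when an object's Schwarzschild radius becomes comparable to its reduced Compton wavelength; equating the two gives a mass of order the Planck mass. A minimal numerical check (the factor of 2 from the Schwarzschild radius is kept, so the result differs from the exact Planck mass by a factor of order one):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s

# Schwarzschild radius 2GM/c^2 equals the reduced Compton wavelength
# hbar/(M c) when M = sqrt(hbar * c / (2 * G)).
m_crossover = math.sqrt(hbar * c / (2 * G))
m_planck = math.sqrt(hbar * c / G)

print(f"crossover mass: {m_crossover:.2e} kg")  # ~1.5e-8 kg
print(f"Planck mass:    {m_planck:.2e} kg")     # ~2.2e-8 kg
```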
Some extensions of present physics posit the existence of extra dimensions of space. In higher-dimensional spacetime, the strength of gravity increases more rapidly with decreasing distance than in three dimensions. With certain special configurations of the extra dimensions, this effect can lower the Planck scale to the TeV range. Examples of such extensions include large extra dimensions, special cases of the Randall–Sundrum model, and string theory configurations like the GKP solutions. In such scenarios, black hole production could possibly be an important and observable effect at the Large Hadron Collider (LHC).
It would also be a common natural phenomenon induced by cosmic rays.
All this assumes that the theory of general relativity remains valid at these small distances. If it does not, then other, currently unknown, effects might limit the minimum size of a black hole. Elementary particles are equipped with a quantum-mechanical, intrinsic angular momentum (spin). The correct conservation law for the total (orbital plus spin) angular momentum of matter in curved spacetime requires that spacetime be equipped with torsion. The simplest and most natural theory of gravity with torsion is the Einstein–Cartan theory. Torsion modifies the Dirac equation in the presence of a gravitational field and causes fermion particles to be spatially extended. In this case the spatial extension of fermions limits the minimum mass of a black hole to be on the order of , suggesting that micro black holes may not exist. The energy necessary to produce such a black hole is 39 orders of magnitude greater than the energies available at the Large Hadron Collider, indicating that the LHC cannot produce mini black holes of this kind. Conversely, if black holes were produced at the LHC, this would show that general relativity is not valid at these small distances — consistent with theories of how matter, space, and time break down around the event horizon of a black hole — and would show the fermion-based limit to be incorrect as well. That limit concerns the minimum mass needed to sustain a black hole, as opposed to the minimum mass needed to form one, which in theory is achievable at the LHC under some conditions.
Stability
Hawking radiation
In 1975, Stephen Hawking argued that, due to quantum effects, black holes "evaporate" by a process now referred to as Hawking radiation in which elementary particles (such as photons, electrons, quarks and gluons) are emitted. His calculations showed that the smaller the size of the black hole, the faster the evaporation rate, resulting in a sudden burst of particles as the micro black hole suddenly explodes.
Any primordial black hole of sufficiently low mass will evaporate to near the Planck mass within the lifetime of the Universe. In this process, these small black holes radiate away matter. A rough picture of this is that pairs of virtual particles emerge from the vacuum near the event horizon, with one member of a pair being captured, and the other escaping the vicinity of the black hole. The net result is that the black hole loses mass (due to conservation of energy). According to the formulae of black hole thermodynamics, the more the black hole loses mass, the hotter it becomes, and the faster it evaporates, until it approaches the Planck mass. At this stage, a black hole would have a Hawking temperature so high that an emitted Hawking particle would have an energy comparable to the mass of the black hole itself. Thus, a thermodynamic description breaks down. Such a micro black hole would also have an entropy of only 4π nats, approximately the minimum possible value. At this point then, the object can no longer be described as a classical black hole, and Hawking's calculations also break down.
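The inverse relationship between mass and temperature, and the steep growth of lifetime with mass, follow from the standard Hawking formulas T = ħc³/(8πGMk_B) and t ≈ 5120πG²M³/(ħc⁴). A sketch of both (the lifetime prefactor assumes emission of massless quanta only, so real lifetimes differ by particle-physics factors):

```python
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23

def hawking_temperature_kelvin(mass_kg):
    """T = hbar c^3 / (8 pi G M k_B): halving the mass doubles the temperature."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time_seconds(mass_kg):
    """t = 5120 pi G^2 M^3 / (hbar c^4): lifetime grows as the cube of the mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

for mass in (1.0, 1e9, 2e30):  # 1 kg, a small primordial hole, ~1 solar mass
    print(f"M = {mass:.0e} kg: T = {hawking_temperature_kelvin(mass):.2e} K, "
          f"t = {evaporation_time_seconds(mass):.2e} s")
```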
While Hawking radiation is sometimes questioned, Leonard Susskind summarizes an expert perspective in his book The Black Hole War: "Every so often, a physics paper will appear claiming that black holes don't evaporate. Such papers quickly disappear into the infinite junk heap of fringe ideas."
Conjectures for the final state
Conjectures for the final fate of the black hole include total evaporation and production of a Planck-mass-sized black hole remnant. Such Planck-mass black holes may in effect be stable objects if the quantized gaps between their allowed energy levels bar them from emitting Hawking particles or absorbing energy gravitationally like a classical black hole. In such case, they would be weakly interacting massive particles; this could explain dark matter.
Primordial black holes
Formation in the early Universe
Production of a black hole requires concentration of mass or energy within the corresponding Schwarzschild radius. It was hypothesized first by Zel'dovich and Novikov, and independently by Hawking, that shortly after the Big Bang the Universe was dense enough for any given region of space to fit within its own Schwarzschild radius. Even so, at that time the Universe was not able to collapse into a singularity because of its uniform mass distribution and rapid expansion. This, however, does not fully exclude the possibility that black holes of various sizes may have emerged locally. A black hole formed in this way is called a primordial black hole and is the most widely accepted hypothesis for the possible creation of micro black holes. Computer simulations suggest that the probability of formation of a primordial black hole is inversely proportional to its mass. Thus, the most likely outcome would be micro black holes.
Expected observable effects
A primordial black hole with a certain critical initial mass would be completing its evaporation today; any less massive primordial black hole would have already evaporated. Under optimal conditions, the Fermi Gamma-ray Space Telescope satellite, launched in June 2008, might detect experimental evidence for evaporation of nearby black holes by observing gamma-ray bursts. It is unlikely that a collision between a microscopic black hole and an object such as a star or a planet would be noticeable. The small radius and high density of the black hole would allow it to pass straight through any object consisting of normal atoms, interacting with only a few of its atoms while doing so. It has, however, been suggested that a small black hole of sufficient mass passing through the Earth would produce a detectable acoustic or seismic signal.
On the Moon, it may leave a distinct type of crater, still visible after billions of years.
Human-made micro black holes
Feasibility of production
In familiar three-dimensional gravity, the minimum energy of a microscopic black hole is about 10¹⁶ TeV (equivalent to 1.6 GJ or 444 kWh), which would have to be condensed into a region on the order of the Planck length. This is far beyond the limits of any current technology. It is estimated that to collide two particles to within a distance of a Planck length with currently achievable magnetic field strengths would require a ring accelerator about 1,000 light years in diameter to keep the particles on track.
However, in some scenarios involving extra dimensions of space, the Planck mass can be as low as the TeV range. The Large Hadron Collider (LHC) has a design energy of 14 TeV for proton–proton collisions and 1,150 TeV for Pb–Pb collisions. It was argued in 2001 that, in these circumstances, black hole production could be an important and observable effect at the LHC or future higher-energy colliders. Such quantum black holes should decay emitting sprays of particles that could be seen by detectors at these facilities. A paper by Choptuik and Pretorius, published in 2010 in Physical Review Letters, presented a computer-generated proof that micro black holes must form from two colliding particles with sufficient energy, which might be attainable at the energies of the LHC if dimensions beyond the customary four (three spatial, one temporal) are present.
Safety arguments
Hawking's calculation and more general quantum mechanical arguments predict that micro black holes evaporate almost instantaneously. Additional safety arguments beyond those based on Hawking radiation were given in the study of astrophysical implications of hypothetical stable TeV-scale black holes (see external links), which showed that in hypothetical scenarios with stable micro black holes massive enough to destroy Earth, such black holes would have been produced by cosmic rays and would likely have already destroyed astronomical objects such as planets, stars, or stellar remnants such as neutron stars and white dwarfs.
Black holes in quantum theories of gravity
It is possible, in some theories of quantum gravity, to calculate the quantum corrections to ordinary, classical black holes. Contrarily to conventional black holes, which are solutions of gravitational field equations of the general theory of relativity, quantum gravity black holes incorporate quantum gravity effects in the vicinity of the origin, where classically a curvature singularity occurs. According to the theory employed to model quantum gravity effects, there are different kinds of quantum gravity black holes, namely loop quantum black holes, non-commutative black holes, and asymptotically safe black holes. In these approaches, black holes are singularity-free.
Virtual micro black holes were proposed by Stephen Hawking in 1995 and by Fabio Scardigli in 1999 as part of a Grand Unified Theory as a quantum gravity candidate.
See also
Black hole electron
Black hole starship
Black holes in fiction
ER=EPR
Kugelblitz (astrophysics)
Strangelet
Notes
References
Bibliography
A. Barrau et al., Astron. Astrophys. 388 (2002) 676, Astron. Astrophys. 398 (2003) 403, Astrophys. J. 630 (2005) 1015 : experimental searches for primordial black holes thanks to the emitted antimatter
A. Barrau & G. Boudoul, Review talk given at the International Conference on Theoretical Physics TH2002 : cosmology with primordial black holes
A. Barrau & J. Grain, Phys. Lett. B 584 (2004) 114 : searches for new physics (quantum gravity) with primordial black holes
P. Kanti, Int. J. Mod. Phys. A19 (2004) 4899 : evaporating black holes and extra dimensions
D. Ida, K.-y. Oda & S.C. Park: determination of black hole's life and extra dimensions
Sabine Hossenfelder: What Black Holes Can Teach Us, hep-ph/0412265
L. Modesto, PhysRevD.70.124009: Disappearance of Black Hole Singularity in Quantum Gravity
P. Nicolini, A. Smailacic, E. Spallucci, j.physletb.2005.11.004: Noncommutative geometry inspired Schwarzschild black hole
A. Bonanno, M. Reuter, PhysRevD.73.083005: Spacetime Structure of an Evaporating Black Hole in Quantum Gravity
X-ray astronomy in the laboratory with a miniature compact object produced by laser-driven implosion
Harrison, B. K.; Thorne, K. S.; Wakano, M.; Wheeler, J. A. Gravitation Theory and Gravitational Collapse, Chicago: University of Chicago Press, 1965 pp. 80–81
External links
Astrophysical implications of hypothetical stable TeV-scale black holes
Mini Black Holes Might Reveal 5th Dimension – Ker Than. Space.com June 26, 2006 10:42am ET
Doomsday Machine Large Hadron Collider? – A scientific essay about energies, dimensions, black holes, and the associated public attention to CERN, by Norbert Frischauf (also available as Podcast)
Hypothetical astronomical objects
Hypothetical particles | Micro black hole | [
"Physics",
"Astronomy"
] | 2,720 | [
"Hypothetical particles",
"Matter",
"Physical phenomena",
"Black holes",
"Physical quantities",
"Astronomical hypotheses",
"Unsolved problems in physics",
"Astronomical myths",
"Astrophysics",
"Hypothetical astronomical objects",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Phy... |
709,783 | https://en.wikipedia.org/wiki/Strategy-stealing%20argument | In combinatorial game theory, the strategy-stealing argument is a general argument that shows, for many two-player games, that the second player cannot have a guaranteed winning strategy. The strategy-stealing argument applies to any symmetric game (one in which either player has the same set of available moves with the same results, so that the first player can "use" the second player's strategy) in which an extra move can never be a disadvantage. A key property of a strategy-stealing argument is that it proves that the first player can win (or possibly draw) the game without actually constructing such a strategy. So, although it might prove the existence of a winning strategy, the proof gives no information about what that strategy is.
The argument works by obtaining a contradiction. A winning strategy is assumed to exist for the second player, who is using it. But then, roughly speaking, after making an arbitrary first move – which by the conditions above is not a disadvantage – the first player may then also play according to this winning strategy. The result is that both players are guaranteed to win – which is absurd, thus contradicting the assumption that such a strategy exists.
Strategy-stealing was invented by John Nash in the 1940s to show that the game of hex is always a first-player win, as ties are not possible in this game. However, Nash did not publish this method, and József Beck credits its first publication to Alfred W. Hales and Robert I. Jewett, in the 1963 paper on tic-tac-toe in which they also proved the Hales–Jewett theorem. Other examples of games to which the argument applies include the m,n,k-games such as gomoku. In the game of Chomp, strategy stealing shows that the first player has a winning strategy on any rectangular board (other than 1×1). In the game of Sylver coinage, strategy stealing has been used to show that the first player can win in certain positions called "enders". In all of these examples the proof reveals nothing about the actual strategy.
Example
A strategy-stealing argument can be used on the example of the game of tic-tac-toe, for a board and winning rows of any size. Suppose that the second player (P2) is using a strategy S which guarantees a win. The first player (P1) places an X in an arbitrary position. P2 responds by placing an O according to S. But if P1 ignores the first random X, P1 is now in the same situation as P2 on P2's first move: a single enemy piece on the board. P1 may therefore make a move according to S – that is, unless S calls for another X to be placed where the ignored X is already placed. But in this case, P1 may simply place an X in some other random position on the board, the net effect of which will be that one X is in the position demanded by S, while another is in a random position, and becomes the new ignored piece, leaving the situation as before. Continuing in this way, S is, by hypothesis, guaranteed to produce a winning position (with an additional ignored X of no consequence). But then P2 has lost – contradicting the supposition that P2 had a guaranteed winning strategy. Such a winning strategy for P2, therefore, does not exist, and tic-tac-toe is either a forced win for P1 or a tie. (Further analysis shows it is in fact a tie.)
The same proof holds for any strong positional game.
Chess
There is a class of chess positions called Zugzwang in which the player obligated to move would prefer to "pass" if this were allowed. Because of this, the strategy-stealing argument cannot be applied to chess. It is not currently known whether White or Black can force a win with optimal play, or if both players can force a draw. However, virtually all students of chess consider White's first move to be an advantage and statistics from modern high-level games have White's winning percentage about 10% higher than Black's.
Go
In Go passing is allowed. When the starting position is symmetrical (empty board, neither player has any points), this means that the first player could steal the second player's winning strategy simply by giving up the first move. Since the 1930s, however, the second player is typically awarded some compensation points, which makes the starting position asymmetrical, and the strategy-stealing argument will no longer work.
An elementary strategy in the game is "mirror go", where the second player performs moves which are diagonally opposite those of their opponent. This approach may be defeated using ladder tactics, ko fights, or by successfully competing for control of the board's central point.
Constructivity
The strategy-stealing argument shows that the second player cannot win, by means of deriving a contradiction from any hypothetical winning strategy for the second player. The argument is commonly employed in games where there can be no draw, by means of the law of the excluded middle. However, it does not provide an explicit strategy for the first player, and because of this it has been called non-constructive. This raises the question of how to actually compute a winning strategy.
For games with a finite number of reachable positions, such as chomp, a winning strategy can be found by exhaustive search. However, this might be impractical if the number of positions is large.
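As a concrete illustration of the exhaustive-search approach, the sketch below solves small Chomp boards by memoized game-tree search. States are encoded as tuples of remaining row lengths, with the poisoned square at (0, 0); the encoding and the board sizes tested are choices made here for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(rows):
    """rows: tuple of remaining row lengths. True if the player to move wins."""
    for r in range(len(rows)):
        for c in range(rows[r]):
            if (r, c) == (0, 0):
                continue  # eating the poisoned square means an immediate loss
            # A move at (r, c) removes every square (r', c') with r' >= r, c' >= c.
            successor = tuple(min(rows[i], c) if i >= r else rows[i]
                              for i in range(len(rows)))
            if not first_player_wins(successor):
                return True  # found a move that leaves the opponent losing
    return False  # no winning move: only the poisoned square is left

# Strategy stealing predicts a first-player win on every board except 1x1:
for m, n in [(1, 1), (2, 2), (3, 4), (4, 4)]:
    print(f"{m}x{n}: first player wins = {first_player_wins((n,) * m)}")
```

Unlike the strategy-stealing argument, this search also yields a winning first move, but its cost grows quickly with board size, which is exactly the impracticality noted above.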
In 2019, Greg Bodwin and Ofer Grossman proved that the problem of finding a winning strategy is PSPACE-hard in two kinds of games in which strategy-stealing arguments were used: the minimum poset game and the symmetric Maker-Maker game.
References
Game theory
Mathematical games
Arguments | Strategy-stealing argument | [
"Mathematics"
] | 1,191 | [
"Mathematical games",
"Recreational mathematics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
709,796 | https://en.wikipedia.org/wiki/Hydrogen%20peroxide%E2%80%93urea | Hydrogen peroxide–urea (also called Hyperol, artizone, urea hydrogen peroxide, and UHP) is a white crystalline solid chemical compound composed of equal amounts of hydrogen peroxide and urea. It provides hydrogen peroxide in a solid, water-free form, which offers higher stability and better controllability than liquid hydrogen peroxide when used as an oxidizing agent. Often called carbamide peroxide in dentistry, it is used as a source of hydrogen peroxide when dissolved in water for bleaching, disinfection and oxidation.
Production
For the preparation of the complex, urea is dissolved in 30% hydrogen peroxide (molar ratio 2:3) at temperatures below 60 °C. Upon cooling this solution, hydrogen peroxide–urea precipitates out in the form of small platelets.
Akin to water of crystallization, hydrogen peroxide cocrystallizes with urea with the stoichiometry of 1:1. The compound is simply produced (on a scale of several hundred tonnes a year) by the dissolution of urea in excess concentrated hydrogen peroxide solution, followed by crystallization. The laboratory synthesis is analogous.
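Since the cocrystal is strictly 1:1, the available hydrogen peroxide content follows directly from the molar masses; a quick back-of-the-envelope check:

```python
# Molar masses in g/mol from standard atomic weights.
m_h2o2 = 2 * 1.008 + 2 * 15.999                    # hydrogen peroxide, ~34.0
m_urea = 12.011 + 4 * 1.008 + 2 * 14.007 + 15.999  # urea CH4N2O, ~60.1

fraction = m_h2o2 / (m_h2o2 + m_urea)
print(f"H2O2 content of the 1:1 adduct: {fraction:.1%}")  # ~36%
```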
Structure and properties
The solid state structure of this adduct has been determined by neutron diffraction.
Hydrogen peroxide–urea is a readily water-soluble, odorless, crystalline solid, which is available as white powder or as colorless needles or platelets. Upon dissolving in various solvents, the 1:1 complex dissociates back into urea and hydrogen peroxide. Just like hydrogen peroxide itself, the (somewhat loosely named) adduct is therefore an oxidizer, but in the presence of catalysts the release of hydrogen peroxide at room temperature proceeds in a controlled manner. The compound is thus suitable as a safer substitute for unstable aqueous hydrogen peroxide solutions. Because of its tendency toward thermal decomposition, which accelerates at temperatures above 82 °C, it should not be heated above 60 °C, particularly in pure form.
The solubility of commercial samples varies from 0.05 g/mL to more than 0.6 g/mL.
Applications
Disinfectant and bleaching agent
Hydrogen peroxide–urea is mainly used as a disinfecting and bleaching agent in cosmetics and pharmaceuticals. As a drug, this compound is used in some preparations for the whitening of teeth. It is also used to relieve minor inflammation of gums, oral mucosal surfaces and lips including canker sores and dental irritation, and to emulsify and disperse earwax.
Carbamide peroxide is also suitable as a disinfectant, e.g. for germ reduction on contact lens surfaces or as an antiseptic for mouthwashes, ear drops or for superficial wounds and ulcers.
Reagent in organic synthesis
In the laboratory, it is used as a more easily handled replacement for hydrogen peroxide. It has proven to be a stable, easy-to-handle and effective oxidizing agent which is readily controllable by a suitable choice of the reaction conditions. It delivers oxidation products in an environmentally friendly manner and often in high yields especially in the presence of organic catalysts such as cis-butenedioic anhydride or inorganic catalysts such as sodium tungstate.
It converts thiols selectively to disulfides, secondary alcohols to ketones, sulfides to sulfoxides and sulfones, nitriles to amides, and N-heterocycles to amine oxides.
Hydroxybenzaldehydes are converted to dihydroxybenzenes (Dakin reaction) and give, under suitable conditions, the corresponding benzoic acids.
It oxidizes ketones to esters, in particular cyclic ketones, such as substituted cyclohexanones or cyclobutanones to give lactones (Baeyer–Villiger oxidation).
The epoxidation of various alkenes in the presence of benzonitrile gives oxiranes in yields of 79 to 96%.
The oxygen atom transferred to the alkene originates from the peroxoimide acid formed intermediately from benzonitrile. The resulting imidic acid tautomerizes to the benzamide.
Safety
The compound acts as a strong oxidizing agent and can cause skin irritation and severe eye damage. Urea–hydrogen peroxide was also found to be an insensitive high explosive, capable of detonation by strong impulse under heavy confinement.
See also
Sodium percarbonate
Peroxide-based bleach
References
External links
Bleaches
Antiseptics
Cleaning product components
Ureas
Peroxides
Oxidizing agents
Hydrogen peroxide
Explosive chemicals | Hydrogen peroxide–urea | [
"Chemistry",
"Technology"
] | 985 | [
"Redox",
"Cleaning product components",
"Oxidizing agents",
"Organic compounds",
"Explosive chemicals",
"Components",
"Ureas"
] |
710,045 | https://en.wikipedia.org/wiki/Pulsed%20laser%20deposition | Pulsed laser deposition (PLD) is a physical vapor deposition (PVD) technique where a high-power pulsed laser beam is focused inside a vacuum chamber to strike a target of the material that is to be deposited. This material is vaporized from the target (in a plasma plume) which deposits it as a thin film on a substrate (such as a silicon wafer facing the target). This process can occur in ultra high vacuum or in the presence of a background gas, such as oxygen which is commonly used when depositing oxides to fully oxygenate the deposited films.
While the basic setup is simple relative to many other deposition techniques, the physical phenomena of laser-target interaction and film growth are quite complex (see Process below). When the laser pulse is absorbed by the target, energy is first converted to electronic excitation and then into thermal, chemical and mechanical energy resulting in evaporation, ablation, plasma formation and even exfoliation. The ejected species expand into the surrounding vacuum in the form of a plume containing many energetic species including atoms, molecules, electrons, ions, clusters, particulates and molten globules, before depositing on the typically hot substrate.
Process
The detailed mechanisms of PLD are very complex including the ablation process of the target material by the laser irradiation, the development of a plasma plume with high energetic ions, electrons as well as neutrals and the crystalline growth of the film itself on the heated substrate. The process of PLD can generally be divided into four stages:
Laser absorption on the target surface and laser ablation of the target material and creation of a plasma
Dynamic of the plasma
Deposition of the ablation material on the substrate
Nucleation and growth of the film on the substrate surface
Each of these steps is crucial for the crystallinity, uniformity and stoichiometry of the resulting film.
Laser ablation of the target material and creation of a plasma
The ablation of the target material upon laser irradiation and the creation of plasma are very complex processes. The removal of atoms from the bulk material is done by vaporization of the bulk at the surface region in a state of non-equilibrium. In this the incident laser pulse penetrates into the surface of the material within the penetration depth. This dimension is dependent on the laser wavelength and the index of refraction of the target material at the applied laser wavelength and is typically in the region of 10 nm for most materials. The strong electrical field generated by the laser light is sufficiently strong to remove the electrons from the bulk material of the penetrated volume. This process occurs within 10 ps of a ns laser pulse and is caused by non-linear processes such as multiphoton ionization which are enhanced by microscopic cracks at the surface, voids, and nodules, which increase the electric field. The free electrons oscillate within the electromagnetic field of the laser light and can collide with the atoms of the bulk material thus transferring some of their energy to the lattice of the target material within the surface region. The surface of the target is then heated up and the material is vaporized.
Dynamic of the plasma
In the second stage the material expands in a plasma parallel to the normal vector of the target surface towards the substrate due to Coulomb repulsion and recoil from the target surface. The spatial distribution of the plume is dependent on the background pressure inside the PLD chamber. The density of the plume can be described by a cosⁿ(x) law, with a shape similar to a Gaussian curve (see the numerical sketch at the end of this subsection). The dependency of the plume shape on the pressure can be described in three stages:
The vacuum stage, where the plume is very narrow and forward directed; almost no scattering occurs with the background gases.
The intermediate region where a splitting of the high energetic ions from the less energetic species can be observed. The time-of-flight (TOF) data can be fitted to a shock wave model; however, other models could also be possible.
High pressure region where we find a more diffusion-like expansion of the ablated material. Naturally this scattering is also dependent on the mass of the background gas and can influence the stoichiometry of the deposited film.
The most important consequence of increasing the background pressure is the slowing down of the high energetic species in the expanding plasma plume. It has been shown that particles with kinetic energies around 50 eV can resputter the film already deposited on the substrate. This results in a lower deposition rate and can furthermore result in a change in the stoichiometry of the film.
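To make the cosⁿ(x) plume law concrete, the sketch below tabulates the angular density (normalized to 1 along the target normal) for a few exponents; larger n gives a narrower, more forward-directed plume, as in the low-pressure stage. The exponents are illustrative, not fitted values:

```python
import numpy as np

angles_deg = np.arange(0, 91, 15)  # angle from the target normal
angles_rad = np.radians(angles_deg)

for n in (1, 4, 12):  # illustrative exponents; higher n = more forward-peaked
    density = np.cos(angles_rad) ** n  # normalized so density(0) = 1
    row = "  ".join(f"{d:.2f}" for d in density)
    print(f"n = {n:>2}: {row}")
```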
Deposition of the ablation material on the substrate
The third stage is important to determine the quality of the deposited films. The high energetic species ablated from the target are bombarding the substrate surface and may cause damage to the surface by sputtering off atoms from the surface but also by causing defect formation in the deposited film. The sputtered species from the substrate and the particles emitted from the target form a collision region, which serves as a source for condensation of particles. When the condensation rate is high enough, a thermal equilibrium can be reached and the film grows on the substrate surface at the expense of the direct flow of ablation particles and the thermal equilibrium obtained.
Nucleation and growth of the film on the substrate surface
The nucleation process and growth kinetics of the film depend on several growth parameters including:
Laser parameters – several factors such as the laser fluence [J/cm²], laser energy, and ionization degree of the ablated material will affect the film quality, the stoichiometry, and the deposition flux. Generally, the nucleation density increases when the deposition flux is increased.
Surface temperature – The surface temperature has a large effect on the nucleation density. Generally, the nucleation density decreases as the temperature is increased. Heating of the surface can involve a heating plate or the use of a CO2 laser.
Substrate surface – The nucleation and growth can be affected by the surface preparation (such as chemical etching), the miscut of the substrate, as well as the roughness of the substrate.
Background pressure – Common in oxide deposition, an oxygen background is needed to ensure stoichiometric transfer from the target to the film. If, for example, the oxygen background is too low, the film will grow off stoichiometry which will affect the nucleation density and film quality.
In PLD, a large supersaturation occurs on the substrate during the pulse duration. The pulse lasts around 10–40 microseconds depending on the laser parameters. This high supersaturation causes a very large nucleation density on the surface as compared to molecular beam epitaxy or sputtering deposition. This nucleation density increases the smoothness of the deposited film.
In PLD, [depending on the deposition parameters above] three growth modes are possible:
Step-flow growth – All substrates have a miscut associated with the crystal. These miscuts give rise to atomic steps on the surface. In step-flow growth, atoms land on the surface and diffuse to a step edge before they have a chance to nucleate a surface island. The growing surface is viewed as steps traveling across the surface. This growth mode is obtained by deposition on a substrate with a high miscut, or by depositing at elevated temperatures.
Layer-by-layer growth – In this growth mode, islands nucleate on the surface until a critical island density is reached. As more material is added, the islands continue to grow until the islands begin to run into each other. This is known as coalescence. Once coalescence is reached, the surface has a large density of pits. When additional material is added to the surface the atoms diffuse into these pits to complete the layer. This process is repeated for each subsequent layer.
3D growth – This mode is similar to the layer-by-layer growth, except that once an island is formed an additional island will nucleate on top of the 1st island. Therefore, the growth does not persist in a layer by layer fashion, and the surface roughens each time material is added.
History
Pulsed laser deposition is only one of many thin film deposition techniques. Other methods include molecular beam epitaxy (MBE), chemical vapor deposition (CVD), sputter deposition (RF, magnetron, and ion beam). The history of laser-assisted film growth started soon after the technical realization of the first laser in 1960 by Maiman. Smith and Turner utilized a ruby laser to deposit the first thin films in 1965, three years after Breech and Cross studied the laser-vaporization and excitation of atoms from solid surfaces. However, the deposited films were still inferior to those obtained by other techniques such as chemical vapor deposition and molecular beam epitaxy. In the early 1980s, a few research groups (mainly in the former USSR) achieved remarkable results on manufacturing of thin film structures utilizing laser technology. The breakthrough came in 1987 when D. Dijkkamp, Xindi Wu and T. Venkatesan were able to laser deposit a thin film of YBa2Cu3O7, a high temperature superconductive material, which was of superior quality to that of films deposited with alternative techniques. Since then, the technique of pulsed laser deposition has been utilized to fabricate high quality crystalline films, such as doped garnet thin films for use as planar waveguide lasers. The deposition of ceramic oxides, nitride films, ferromagnetic films, metallic multilayers and various superlattices has been demonstrated. In the 1990s the development of new laser technology, such as lasers with high repetition rate and short pulse durations, made PLD a very competitive tool for the growth of thin, well defined films with complex stoichiometry.
Technical aspects
There are many different arrangements to build a deposition chamber for PLD. The target material which is evaporated by the laser is normally found as a rotating disc attached to a support. However, it can also be sintered into a cylindrical rod with rotational motion and a translational up and down movement along its axis. This special configuration allows not only the utilization of a synchronized reactive gas pulse but also of a multicomponent target rod with which films of different multilayers can be created.
Some factors that influence the deposition rate:
Target material
Pulse energy of laser
Repetition rate of the laser
Temperature of the substrate
Distance from target to substrate
Type of gas and pressure in chamber (oxygen, argon, etc.)
References
External links
Introduction to Pulsed Laser Deposition Introduction to Pulsed laser deposition
Laser-MBE: Pulsed Laser Deposition under Ultra-High Vacuum
A Brief Overview of Pulse Laser Deposition System
Physical vapor deposition techniques
Semiconductor device fabrication
Thin film deposition
Laser machining
Laser applications | Pulsed laser deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 2,196 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Planes (geometry)",
"Solid state engineering"
] |
710,115 | https://en.wikipedia.org/wiki/Ultrasonic%20welding | Ultrasonic welding is an industrial process whereby high-frequency ultrasonic acoustic vibrations are locally applied to work pieces being held together under pressure to create a solid-state weld. It is commonly used for plastics and metals, and especially for joining dissimilar materials. In ultrasonic welding, there are no connective bolts, nails, soldering materials, or adhesives necessary to bind the materials together. When used to join metals, the temperature stays well below the melting point of the involved materials, preventing any unwanted properties which may arise from high temperature exposure of the metal.
History
Practical application of ultrasonic welding for rigid plastics was completed in the 1960s. At this point only hard plastics could be welded. The patent for the ultrasonic method for welding rigid thermoplastic parts was awarded to Robert Soloff and Seymour Linsley in 1965. Soloff, the founder of Sonics & Materials Inc., was a lab manager at Branson Instruments where thin plastic films were welded into bags and tubes using ultrasonic probes. He unintentionally moved the probe close to a plastic tape dispenser and observed that the halves of the dispenser welded together. He realized that the probe did not need to be manually moved around the part, but that the ultrasonic energy could travel through and around rigid plastics and weld an entire joint. He went on to develop the first ultrasonic press. The first application of this new technology was in the toy industry.
The first car made entirely out of plastic was assembled using ultrasonic welding in 1969. The automotive industry has used it regularly since the 1980s, and it is now used for a multitude of applications.
Process
For joining complex injection molded thermoplastic parts, ultrasonic welding equipment can be customized to fit the exact specifications of the parts being welded. The parts are sandwiched between a fixed shaped nest (anvil) and a sonotrode (horn) connected to a transducer, and a low-amplitude acoustic vibration of roughly 20–70 kHz is emitted. When welding plastics, the interface of the two parts is specially designed to concentrate the melting process. One of the materials usually has a spiked or rounded energy director which contacts the second plastic part. The ultrasonic energy melts the point contact between the parts, creating a joint. Ultrasonic welding of thermoplastics causes local melting of the plastic due to absorption of vibrational energy along the joint to be welded. In metals, welding occurs due to high-pressure dispersion of surface oxides and local motion of the materials. Although there is heating, it is not enough to melt the base materials.
Ultrasonic welding can be used for both hard and soft plastics, such as semicrystalline plastics, and metals. The understanding of ultrasonic welding has increased with research and testing. The invention of more sophisticated and inexpensive equipment and increased demand for plastic and electronic components has led to a growing knowledge of the fundamental process. However, many aspects of ultrasonic welding still require more study, such as the relationship of weld quality to process parameters.
Scientists from the Institute of Materials Science and Engineering (WKK) of University of Kaiserslautern, with the support from the German Research Foundation (Deutsche Forschungsgemeinschaft), have succeeded in proving that using ultrasonic welding processes can lead to highly durable bonds between light metals and carbon-fiber-reinforced polymer (CFRP) sheets.
A benefit of ultrasonic welding is that there is no drying time as with conventional adhesives or solvents, so the workpieces do not need to remain in a fixture for longer than it takes for the weld to cool. The welding can easily be automated, making clean and precise joints; the site of the weld is very clean and rarely requires any touch-up work. The low thermal impact on the materials involved enables a greater number of materials to be welded together. The process is a good automated alternative to glue, screws or snap-fit designs.
Ultrasonic welding is typically used with small parts (e.g. cell phones, consumer electronics, disposable medical tools, toys, etc.) but it can be used on parts as large as a small automotive instrument cluster. Ultrasonics can also be used to weld metals, but are typically limited to small welds of thin, malleable metals such as aluminum, copper, and nickel. Ultrasonics would not be used in welding the chassis of an automobile or in welding pieces of a bicycle together, due to the power levels required.
Components
All ultrasonic welding systems are composed of the same basic elements:
A press, usually with a pneumatic or electric drive, to assemble two parts under pressure
A nest or anvil or fixture where the parts are placed and allowing the high frequency vibration to be directed to the interfaces
An ultrasonic stack composed of a converter or piezoelectric transducer, an optional booster and a horn. All three elements of the stack are specifically tuned to resonate at exactly the same ultrasonic frequency (typically 15, 20, 30, 35 or 40 kHz); a rough sizing sketch follows this list
Converter: Converts the electrical signal into a mechanical vibration using piezo electric effect
Booster: Modifies the amplitude of the vibration mechanically. It is also used in standard systems to clamp the stack in the press.
Horn: Takes the shape of the part, also modifies the amplitude mechanically and applies the mechanical vibration to the parts to be welded.
An electronic ultrasonic generator (US: Power supply) delivering a high power electric signal with frequency matching the resonance frequency of the stack.
A controller controlling the movement of the press and the delivery of the ultrasonic energy.
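Because every stack element must resonate at the working frequency, a straight horn is roughly a half wavelength of sound long. The sketch below estimates that length for the standard stack frequencies, assuming a longitudinal sound speed of about 5,100 m/s (a typical figure for the titanium alloys often used for horns; the exact value depends on material and geometry):

```python
SOUND_SPEED = 5100.0  # m/s, assumed longitudinal speed in a titanium horn

def half_wave_horn_length_m(frequency_hz):
    """Length of a simple half-wavelength resonant horn, in metres."""
    return SOUND_SPEED / frequency_hz / 2.0

for f_khz in (15, 20, 30, 35, 40):  # the standard stack frequencies listed above
    length_cm = half_wave_horn_length_m(f_khz * 1000.0) * 100.0
    print(f"{f_khz} kHz: ~{length_cm:.1f} cm")
```

This inverse scaling with frequency is why higher-frequency stacks are physically shorter.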
Applications
The applications of ultrasonic welding are extensive and are found in many industries including electrical and computer, automotive and aerospace, medical, and packaging. Whether two items can be ultrasonically welded is determined by their thickness. If they are too thick, this process will not join them. This is the main obstacle in the welding of metals. However, wires, microcircuit connections, sheet metal, foils, ribbons and meshes are often joined using ultrasonic welding. Ultrasonic welding is a very popular technique for bonding thermoplastics. It is fast and easily automated, with weld times often below one second, and no ventilation system is required to remove heat or exhaust. This type of welding is often used to build assemblies that are too small, too complex, or too delicate for more common welding techniques.
Computer and electrical industries
In the electrical and computer industry ultrasonic welding is often used to join wired connections and to create connections in small, delicate circuits. Junctions of wire harnesses are often joined using ultrasonic welding. Wire harnesses are large groupings of wires used to distribute electrical signals and power. Electric motors, field coils, transformers and capacitors may also be assembled with ultrasonic welding. It is also often preferred in the assembly of storage media such as flash drives and computer disks because of the high volumes required. Ultrasonic welding of computer disks has been found to have cycle times of less than 300 ms.
One of the areas in which ultrasonic welding is most used and where new research and experimentation is centered is microcircuits. This process is ideal for microcircuits since it creates reliable bonds without introducing impurities or thermal distortion into components. Semiconductor devices, transistors and diodes are often connected by thin aluminum and gold wires using ultrasonic welding. It is also used for bonding wiring and ribbons as well as entire chips to microcircuits. An example of where microcircuits are used is in medical sensors used to monitor the human heart in bypass patients.
One difference between ultrasonic welding and traditional welding is the ability of ultrasonic welding to join dissimilar materials. The assembly of battery components is a good example of where this ability is utilized. When creating battery and fuel cell components, thin gauge copper, nickel and aluminium connections, foil layers and metal meshes are often ultrasonically welded together. Multiple layers of foil or mesh can often be applied in a single weld eliminating steps and costs.
Aerospace and automotive industries
For automobiles, ultrasonic welding tends to be used to assemble large plastic and electrical components such as instrument panels, door panels, lamps, air ducts, steering wheels, upholstery and engine components. As plastics have continued to replace other materials in the design and manufacture of automobiles, the assembly and joining of plastic components has increasingly become a critical issue. Some of the advantages of ultrasonic welding are low cycle times, automation, low capital costs, and flexibility. Ultrasonic welding does not damage surface finish because the high-frequency vibrations prevent marks from being generated, which is a crucial consideration for many car manufacturers.
Ultrasonic welding is generally utilized in the aerospace industry when joining thin sheet gauge metals and other lightweight materials. Aluminum is a difficult metal to weld using traditional techniques because of its high thermal conductivity. However, it is one of the easier materials to weld using ultrasonic welding because it is a softer metal and thus a solid-state weld is simple to achieve. Since aluminum is so widely used in the aerospace industry, it follows that ultrasonic welding is an important manufacturing process. With the advent of new composite materials, ultrasonic welding is becoming even more prevalent. It has been used in the bonding of the popular composite material carbon fiber. Numerous studies have been done to find the optimum parameters that will produce quality welds for this material.
Medical industry
In the medical industry ultrasonic welding is often used because it does not introduce contaminants or degradation into the weld and the machines can be specialized for use in clean rooms. The process can also be highly automated, provides strict control over dimensional tolerances and does not interfere with the biocompatibility of parts. Therefore, it increases part quality and decreases production costs. Items such as arterial filters, anesthesia filters, blood filters, IV catheters, dialysis tubes, pipettes, cardiometry reservoirs, blood/gas filters, face masks and IV spike/filters can all be made using ultrasonic welding. Another important application in the medical industry for ultrasonic welding is textiles. Items like hospital gowns, sterile garments, masks, transdermal patches and textiles for clean rooms can be sealed and sewn using ultrasonic welding. This prevents contamination and dust production and reduces the risk of infection.
Packaging industry
Ultrasonic welding is often used in packaging applications. Many common items are either created or packaged using ultrasonic welding. Sealing containers, tubes and blister packs are common applications.
Ultrasonic welding is also applied in the packaging of dangerous materials, such as explosives, fireworks and other reactive chemicals. These items tend to require hermetic sealing, but cannot be subjected to high temperatures. One example is a butane lighter. This container weld must be able to withstand high pressure and stress and must be airtight to contain the butane. Another example is the packaging of ammunition and propellants. These packages must be able to withstand high pressure and stress to protect the consumer from the contents.
The food industry finds ultrasonic welding preferable to traditional joining techniques, because it is fast, sanitary and can produce hermetic seals. Milk and juice containers are examples of products often sealed using ultrasonic welding. The paper parts to be sealed are coated with plastic, generally polypropylene or polyethylene, and then welded together to create an airtight seal. The main obstacle to overcome in this process is the setting of the parameters. For example, if over-welding occurs, then the concentration of plastic in the weld zone may be too low and cause the seal to break. If it is under-welded, the seal is incomplete. Variations in the thicknesses of materials can cause variations in weld quality. Some other food items sealed using ultrasonic welding include candy bar wrappers, frozen food packages and beverage containers.
Experimental
"Sonic agglomeration", a combination of ultrasonic welding and molding, is used to produce compact food ration bars for the US Army's Close Combat Assault Ration project without the use of binders. Dried food is pressed into a mold and welded for an hour, during which food particles become stuck together.
Safety
Hazards of ultrasonic welding include exposure to high temperatures and voltages. This equipment should be operated using the safety guidelines provided by the manufacturer to avoid injury. For instance, operators must never place hands or arms near the welding tip when the machine is activated. Also, operators should be provided with hearing protection and safety glasses. Operators should be informed of government agency regulations for the ultrasonic welding equipment and these regulations should be enforced.
Ultrasonic welding machines require routine maintenance and inspection. Panel doors, housing covers and protective guards may need to be removed for maintenance. This should be done when the power to the equipment is off and only by the trained professional servicing the machine.
The ultrasonic welding frequency can excite sub-harmonic vibrations in large parts near the machine, creating annoying audible noise. This noise can be damped by clamping these large parts at one or more locations. Also, high-powered welders operating at frequencies of 15 kHz and 20 kHz typically emit a potentially damaging high-pitched squeal within the range of human hearing. This radiated sound can be shielded using an acoustic enclosure.
See also
Thermosonic bonding
References
Notes
Bibliography
American Welding Society (1997). Jefferson’s Welding Encyclopedia. American Welding Society.
American Welding Society (2001). Welding Handbook: Welding Science and Technology. American Welding Society.
Ahmed, Nasir (Ed.) (2005). New Developments in Advanced Welding. Boca Raton, Florida: CRC Press LLC.
Grewell, David A.; Benatar, Avraham; Park, Joon B. (Eds.) (2003). Plastics and Composites Welding Handbook. Cincinnati, Ohio: Hanser Gardner Publications, Inc.
Plastics Design Library (1997). Handbook of Plastics Joining: A Practical Guide. Norwich, New York: Plastics Design Library.
Further reading
Tres, Paul A., "Designing Plastic Parts for Assembly", 6th ed., 2006.
Crawford, Lance, "Port Sealing: An Effective Heat Sealing Solution". Plastic Decorating Magazine, January/February 2013 edition (Topeka, KS: Peterson Publications, Inc.), Assembly section, pages 36–39.
Ultrasound
Welding
Packaging machinery
Plastic welding | Ultrasonic welding | [
"Engineering"
] | 2,978 | [
"Packaging machinery",
"Welding",
"Mechanical engineering",
"Industrial machinery"
] |
710,251 | https://en.wikipedia.org/wiki/Wind%20wave | In fluid dynamics, a wind wave, or wind-generated water wave, is a surface wave that occurs on the free surface of bodies of water as a result of the wind blowing over the water's surface. The contact distance in the direction of the wind is known as the fetch. Waves in the oceans can travel thousands of kilometers before reaching land. Wind waves on Earth range in size from small ripples to very large waves, being limited by wind speed, duration, fetch, and water depth.
When directly generated and affected by local wind, a wind wave system is called a wind sea. Wind waves will travel in a great circle route after being generated – curving slightly left in the southern hemisphere and slightly right in the northern hemisphere. After moving out of the area of fetch and no longer being affected by the local wind, wind waves are called swells and can travel thousands of kilometers. A noteworthy example of this is waves generated south of Tasmania during heavy winds that will travel across the Pacific to southern California, producing desirable surfing conditions. Wind waves in the ocean are also called ocean surface waves and are mainly gravity waves, where gravity is the main equilibrium force.
Wind waves have a certain amount of randomness: subsequent waves differ in height, duration, and shape with limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth, propagation, and decay – as well as governing the interdependence between flow quantities such as the water surface movements, flow velocities, and water pressure. The key statistics of wind waves (both seas and swells) in evolving sea states can be predicted with wind wave models.
Although waves are usually considered in the water seas of Earth, the hydrocarbon seas of Titan may also have wind-driven waves. Waves in bodies of water may also be generated by other causes, both at the surface and underwater (such as watercraft, animals, waterfalls, landslides, earthquakes, bubbles, and impact events).
Formation
The great majority of large breakers seen at a beach result from distant winds. Five factors influence the formation of the flow structures in wind waves:
Wind speed or strength relative to wave speed – the wind must be moving faster than the wave crest for energy transfer to the wave.
The uninterrupted distance of open water over which the wind blows without significant change in direction (called the fetch)
Width of the area affected by fetch (at a right angle to the distance)
Wind duration – the time for which the wind has blown over the water.
Water depth
All of these factors work together to determine the size of the water waves and the structure of the flow within them.
The main dimensions associated with wave propagation are:
Wave height (vertical distance from trough to crest)
Wave length (distance from crest to crest in the direction of propagation)
Wave period (time interval between arrival of consecutive crests at a stationary point)
Wave direction or azimuth (predominantly driven by wind direction)
A fully developed sea has the maximum wave size theoretically possible for a wind of specific strength, duration, and fetch. Further exposure to that specific wind could only cause a dissipation of energy due to the breaking of wave tops and formation of "whitecaps". Waves in a given area typically have a range of heights. For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as significant wave height. This figure represents an average height of the highest one-third of the waves in a given time period (usually chosen somewhere in the range from 20 minutes to twelve hours), or in a specific wave or storm system. The significant wave height is also the value a "trained observer" (e.g. from a ship's crew) would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm.
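As a minimal sketch of this statistic, the following Python snippet (illustrative only; the function name and the synthetic Rayleigh-distributed record are assumptions, not measured data) computes the significant wave height as the mean of the highest one-third of individual wave heights:

```python
import numpy as np

def significant_wave_height(heights):
    """Mean of the highest one-third of individual wave heights (H_1/3)."""
    h = np.sort(np.asarray(heights))[::-1]   # tallest first
    top_third = h[: max(1, h.size // 3)]     # highest one-third of the record
    return top_third.mean()

# Synthetic record: individual wave heights in a sea state are roughly
# Rayleigh distributed (an assumption for this demo, not observations).
rng = np.random.default_rng(0)
record = rng.rayleigh(scale=1.0, size=3000)
print(significant_wave_height(record))   # roughly 1.6x the mean height
```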
Wave formation on an initially flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water, which generates waves. It is usually assumed for the purpose of theoretical analysis that:
The water is originally at rest.
The water is not viscous.
The water is irrotational.
There is a random distribution of normal pressure to the water surface from the turbulent wind.
Correlations between air and water motions are neglected.
The second mechanism involves wind shear forces on the water surface. John W. Miles suggested a surface wave generation mechanism that is initiated by turbulent wind shear flows based on the inviscid Orr–Sommerfeld equation in 1957. He found the energy transfer from the wind to the water surface is proportional to the curvature of the velocity profile of the wind at the point where the mean wind speed is equal to the wave speed. Since the wind speed profile is logarithmic to the water surface, the curvature has a negative sign at this point. This relation shows the wind flow transferring its kinetic energy to the water surface at their interface.
Assumptions:
two-dimensional parallel shear flow
incompressible, inviscid water and wind
irrotational water
slope of the displacement of the water surface is small
Generally, these wave formation mechanisms occur together on the water surface and eventually produce fully developed waves.
For example, if we assume a flat sea surface (Beaufort state 0), and a sudden wind flow blows steadily across the sea surface, the physical wave generation process follows the sequence:
Turbulent wind forms random pressure fluctuations at the sea surface. Ripples with wavelengths in the order of a few centimeters are generated by the pressure fluctuations. (The Phillips mechanism)
The winds keep acting on the initially rippled sea surface causing the waves to become larger. As the waves grow, the pressure differences get larger causing the growth rate to increase. Finally, the shear instability expedites the wave growth exponentially. (The Miles mechanism)
The interactions between the waves on the surface generate longer waves; this interaction transfers wave energy from the shorter waves generated by the Miles mechanism to waves with slightly lower frequencies than the frequency at the peak wave magnitude, until finally the waves travel faster than the crosswind speed (Pierson & Moskowitz).
Types
Three different types of wind waves develop over time:
Capillary waves, or ripples, dominated by surface tension effects.
Gravity waves, dominated by gravitational and inertial forces.
Seas, raised locally by the wind.
Swells, which have traveled away from where they were raised by the wind, and have to a greater or lesser extent dispersed.
Ripples appear on smooth water when the wind blows, but will die quickly if the wind stops. The restoring force that allows them to propagate is surface tension. Sea waves are larger-scale, often irregular motions that form under sustained winds. These waves tend to last much longer, even after the wind has died, and the restoring force that allows them to propagate is gravity. As waves propagate away from their area of origin, they naturally separate into groups of common direction and wavelength. The sets of waves formed in this manner are known as swells. The Pacific Ocean is 19,800 km from Indonesia to the coast of Colombia and, based on an average wavelength of 76.5 m, would have ~258,824 swells over that width.
It is sometimes alleged that the seventh wave in a set is always the largest; while this is not the case, the waves in the middle of a given set tend to be larger than those before and after them.
Individual "rogue waves" (also called "freak waves", "monster waves", "killer waves", and "king waves") much higher than the other waves in the sea state can occur. In the case of the Draupner wave, its height was 2.2 times the significant wave height. Such waves are distinct from tides, caused by the Moon and Sun's gravitational pull, tsunamis that are caused by underwater earthquakes or landslides, and waves generated by underwater explosions or the fall of meteorites—all having far longer wavelengths than wind waves.
The largest ever recorded wind waves are not rogue waves, but standard waves in extreme sea states. For example, 29.1 m high waves were recorded on the RRS Discovery in a sea with 18.5 m significant wave height, so the highest wave was only 1.6 times the significant wave height.
The biggest wave recorded by a buoy (as of 2011) was 32.3 m high during the 2007 typhoon Krosa near Taiwan.
Spectrum
Ocean waves can be classified based on: the disturbing force that creates them; the extent to which the disturbing force continues to influence them after formation; the extent to which the restoring force weakens or flattens them; and their wavelength or period. Seismic sea waves have a period of about 20 minutes, and speeds of about 760 km/h. Wind waves (deep-water waves) have a period up to about 20 seconds.
The speed of all ocean waves is controlled by gravity, wavelength, and water depth. Most characteristics of ocean waves depend on the relationship between their wavelength and water depth. Wavelength determines the size of the orbits of water molecules within a wave, but water depth determines the shape of the orbits. The paths of water molecules in a wind wave are circular only when the wave is traveling in deep water. A wave cannot "feel" the bottom when it moves through water deeper than half its wavelength because too little wave energy is contained in the water movement below that depth. Waves moving through water deeper than half their wavelength are known as deep-water waves. On the other hand, the orbits of water molecules in waves moving through shallow water are flattened by the proximity of the sea bottom surface. Waves in water shallower than 1/20 their original wavelength are known as shallow-water waves. Transitional waves travel through water deeper than 1/20 their original wavelength but shallower than half their original wavelength.
In general, the longer the wavelength, the faster the wave energy will move through the water. The relationship between the wavelength, period and velocity of any wave is:
C = L/T
where C is speed (celerity), L is the wavelength, and T is the period (in seconds). Thus the speed of the wave derives from the functional dependence of the wavelength on the period (the dispersion relation).
The speed of a deep-water wave may also be approximated by:
C = √(gL/2π)
where g is the acceleration due to gravity, 9.8 meters per second squared. Because g and π (3.14) are constants, the equation can be reduced to:
C ≈ 1.25√L
when C is measured in meters per second and L in meters. In both formulas the wave speed is proportional to the square root of the wavelength.
The speed of shallow-water waves is described by a different equation that may be written as:
C = √(gd)
where C is speed (in meters per second), g is the acceleration due to gravity, and d is the depth of the water (in meters). The period of a wave remains unchanged regardless of the depth of water through which it is moving. As deep-water waves enter the shallows and feel the bottom, however, their speed is reduced, and their crests "bunch up", so their wavelength shortens.
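The deep- and shallow-water formulas above are both limits of the full linear dispersion relation, C = √((gL/2π)·tanh(2πd/L)). A minimal Python sketch (an illustrative helper assuming linear wave theory, not part of any standard package) evaluates this expression and reproduces both limits:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed(wavelength, depth):
    """Phase speed c = sqrt((g*L / 2*pi) * tanh(2*pi*d / L)) in m/s.

    The tanh factor reproduces both limits quoted above:
      deep water    (d > L/2):  c ~ sqrt(g*L / 2*pi) ~ 1.25*sqrt(L)
      shallow water (d < L/20): c ~ sqrt(g*d)
    """
    k = 2 * np.pi / wavelength            # wavenumber
    return np.sqrt((G / k) * np.tanh(k * depth))

print(wave_speed(100.0, 1000.0))   # deep water: ~12.5 m/s (= 1.25*sqrt(100))
print(wave_speed(100.0, 2.0))      # shallow water: ~4.4 m/s (= sqrt(9.81*2))
```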
Spectral models
Sea state can be described by the sea wave spectrum, or just wave spectrum, S(ω, Θ). It is composed of a wave height spectrum (WHS) S(ω) and a wave direction spectrum (WDS) f(Θ). Many interesting properties about the sea state can be found from the wave spectra.
WHS describes the spectral density of wave height variance ("power") versus wave frequency, with dimension {S(ω)} = {m²·s}.
The relationship between the spectrum S(ω_j) and the wave amplitude A_j for a wave component j is:
(1/2) A_j² = S(ω_j) Δω
Some WHS models are listed below.
The International Towing Tank Conference (ITTC) recommended spectrum model for fully developed sea (ISSC spectrum/modified Pierson-Moskowitz spectrum):
ITTC recommended spectrum model for limited fetch (JONSWAP spectrum)
where
(The latter model has, since its creation, been improved based on the work of Phillips and Kitaigorodskii to better model the wave height spectrum for high wavenumbers.)
As for the WDS, an example model of f(Θ) might be:
f(Θ) = (2/π) cos²(Θ), for −π/2 ≤ Θ ≤ π/2
Thus the sea state is fully determined and can be recreated by summing wave components, where ζ is the wave elevation, each phase ε_j is uniformly distributed between 0 and 2π, and each direction Θ_j is randomly drawn from the directional distribution function f(Θ).
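A minimal sketch of this random-phase reconstruction in Python (the toy spectrum and all parameter values are illustrative assumptions, not a calibrated Pierson-Moskowitz or JONSWAP fit) sums cosine components whose amplitudes are set by the spectrum and whose phases are drawn uniformly:

```python
import numpy as np

def synthesize_elevation(spectrum, omegas, t, rng=np.random.default_rng()):
    """Reconstruct a 1-D sea-surface elevation time series from a wave
    height spectrum S(omega) sampled at angular frequencies `omegas`.

    Each spectral bin contributes a cosine with amplitude
    a_j = sqrt(2 * S(omega_j) * d_omega) and a random phase uniform
    on [0, 2*pi), i.e. the standard random-phase reconstruction.
    """
    d_omega = omegas[1] - omegas[0]
    amps = np.sqrt(2.0 * spectrum(omegas) * d_omega)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=omegas.shape)
    # elevation(t) = sum_j a_j * cos(omega_j * t + phase_j)
    return (amps[:, None] * np.cos(np.outer(omegas, t) + phases[:, None])).sum(axis=0)

# Toy spectrum peaked near 0.7 rad/s (illustrative shape only).
toy_spectrum = lambda w: 0.5 * w * np.exp(-((w - 0.7) / 0.2) ** 2)
omegas = np.linspace(0.2, 2.0, 200)
t = np.linspace(0.0, 600.0, 6000)
eta = synthesize_elevation(toy_spectrum, omegas, t)
```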
Shoaling and refraction
As waves travel from deep to shallow water, their shape changes (wave height increases, speed decreases, and length decreases as wave orbits become asymmetrical). This process is called shoaling.
Wave refraction is the process that occurs when waves interact with the sea bed to slow the velocity of propagation as a function of wavelength and period. As the waves slow down in shoaling water, the crests tend to realign at a decreasing angle to the depth contours. Varying depths along a wave crest cause the crest to travel at different phase speeds, with those parts of the wave in deeper water moving faster than those in shallow water. This process continues while the depth decreases, and reverses if it increases again, but the wave leaving the shoal area may have changed direction considerably. Rays—lines normal to wave crests between which a fixed amount of energy flux is contained—converge on local shallows and shoals. Therefore, the wave energy between rays is concentrated as they converge, with a resulting increase in wave height.
Because these effects are related to a spatial variation in the phase speed, and because the phase speed also changes with the ambient current—due to the Doppler shift—the same effects of refraction and altering wave height also occur due to current variations. In the case of meeting an adverse current the wave steepens, i.e. its wave height increases while the wavelength decreases, similar to the shoaling when the water depth decreases.
Breaking
Some waves undergo a phenomenon called "breaking". A breaking wave is one whose base can no longer support its top, causing it to collapse. A wave breaks when it runs into shallow water, or when two wave systems oppose and combine forces. When the slope, or steepness ratio, of a wave is too great, breaking is inevitable.
Individual waves in deep water break when the wave steepness—the ratio of the wave height H to the wavelength λ—exceeds about 0.17, so for H > 0.17 λ. In shallow water, with the water depth small compared to the wavelength, the individual waves break when their wave height H is larger than 0.8 times the water depth h, that is H > 0.8 h. Waves can also break if the wind grows strong enough to blow the crest off the base of the wave.
In shallow water, the base of the wave is decelerated by drag on the seabed. As a result, the upper parts will propagate at a higher velocity than the base and the leading face of the crest will become steeper and the trailing face flatter. This may be exaggerated to the extent that the leading face forms a barrel profile, with the crest falling forward and down as it extends over the air ahead of the wave.
Three main types of breaking waves are identified by surfers or surf lifesavers. Their varying characteristics make them more or less suitable for surfing and present different dangers.
Spilling, or rolling: these are the safest waves on which to surf. They can be found in most areas with relatively flat shorelines. They are the most common type of shorebreak. The deceleration of the wave base is gradual, and the velocity of the upper parts does not differ much with height. Breaking occurs mainly when the steepness ratio exceeds the stability limit.
Plunging, or dumping: these break suddenly and can "dump" swimmers—pushing them to the bottom with great force. These are the preferred waves for experienced surfers. Strong offshore winds and long wave periods can cause dumpers. They are often found where there is a sudden rise in the seafloor, such as a reef or sandbar. Deceleration of the wave base is sufficient to cause upward acceleration and a significant forward velocity excess of the upper part of the crest. The peak rises and overtakes the forward face, forming a "barrel" or "tube" as it collapses.
Surging: these may never actually break as they approach the water's edge, as the water below them is very deep. They tend to form on steep shorelines. These waves can knock swimmers over and drag them back into deeper water.
When the shoreline is near vertical, waves do not break but are reflected. Most of the energy is retained in the wave as it returns to seaward. Interference patterns are caused by superposition of the incident and reflected waves, and the superposition may cause localized instability when peaks cross, and these peaks may break due to instability. (see also clapotic waves)
Physics of waves
Wind waves are mechanical waves that propagate along the interface between water and air; the restoring force is provided by gravity, and so they are often referred to as surface gravity waves. As the wind blows, pressure and friction perturb the equilibrium of the water surface and transfer energy from the air to the water, forming waves. The initial formation of waves by the wind is described in the theory of Phillips from 1957, and the subsequent growth of the small waves has been modeled by Miles, also in 1957.
In linear plane waves of one wavelength in deep water, parcels near the surface move not plainly up and down but in circular orbits: forward above and backward below (compared to the wave propagation direction). As a result, the surface of the water forms not an exact sine wave, but more a trochoid with the sharper curves upwards—as modeled in trochoidal wave theory. Wind waves are thus a combination of transversal and longitudinal waves.
When waves propagate in shallow water (where the depth is less than half the wavelength), the particle trajectories are compressed into ellipses.
In reality, for finite values of the wave amplitude (height), the particle paths do not form closed orbits; rather, after the passage of each crest, particles are displaced slightly from their previous positions, a phenomenon known as Stokes drift.
As the depth below the free surface increases, the radius of the circular motion decreases. At a depth equal to half the wavelength λ, the orbital movement has decayed to less than 5% of its value at the surface. The phase speed (also called the celerity) of a surface gravity wave is—for pure periodic wave motion of small-amplitude waves—well approximated by
c = √( (gλ/2π) · tanh(2πd/λ) )
where
c = phase speed;
λ = wavelength;
d = water depth;
g = acceleration due to gravity at the Earth's surface.
In deep water, where d ≥ λ/2, the ratio 2πd/λ is large and the hyperbolic tangent approaches 1, so the speed approximates
c_deep ≈ √(gλ/2π)
In SI units, with c_deep in m/s, c_deep ≈ 1.25√λ, when λ is measured in metres.
This expression tells us that waves of different wavelengths travel at different speeds. The fastest waves in a storm are the ones with the longest wavelength. As a result, after a storm, the first waves to arrive on the coast are the long-wavelength swells.
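To make this concrete, the following sketch (illustrative numbers; it assumes deep water and that wave energy travels at the group velocity, half the phase speed, as discussed later in this section) estimates how much earlier long swells from a distant storm arrive:

```python
import numpy as np

G = 9.81  # m/s^2

def swell_arrival_time(wavelength, distance):
    """Travel time of deep-water swell energy over `distance` (m).

    Energy moves at the group velocity, which in deep water is half
    the phase speed: c_g = 0.5 * sqrt(g * L / (2 * pi)).
    """
    c_group = 0.5 * np.sqrt(G * wavelength / (2.0 * np.pi))
    return distance / c_group

# A storm 2000 km away (illustrative): the longest swells arrive first.
for L in (400.0, 200.0, 100.0):
    hours = swell_arrival_time(L, 2.0e6) / 3600.0
    print(f"{L:5.0f} m wavelength -> {hours:5.1f} h")
```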
For intermediate and shallow water, the Boussinesq equations are applicable, combining frequency dispersion and nonlinear effects. And in very shallow water, the shallow water equations can be used.
If the wavelength is very long compared to the water depth, the phase speed (by taking the limit of c when the wavelength approaches infinity) can be approximated by
c_shallow = √(gd)
On the other hand, for very short wavelengths, surface tension plays an important role and the phase speed of these gravity-capillary waves can (in deep water) be approximated by
c_gravity-capillary = √( gλ/2π + 2πS/(ρλ) )
where
S = surface tension of the air-water interface;
ρ = density of the water.
When several wave trains are present, as is always the case in nature, the waves form groups. In deep water, the groups travel at a group velocity which is half of the phase speed. Following a single wave in a group one can see the wave appearing at the back of the group, growing, and finally disappearing at the front of the group.
As the water depth decreases towards the coast, this will have an effect: wave height changes due to wave shoaling and refraction. As the wave height increases, the wave may become unstable when the crest of the wave moves faster than the trough. This causes surf, a breaking of the waves.
The movement of wind waves can be captured by wave energy devices. The energy density (per unit area) of regular sinusoidal waves depends on the water density ρ, gravity acceleration g and the wave height H (which, for regular waves, is equal to twice the amplitude, a):
E = (1/8) ρ g H² = (1/2) ρ g a²
The velocity of propagation of this energy is the group velocity.
Models
Surfers are very interested in the wave forecasts. There are many websites that provide predictions of the surf quality for the upcoming days and weeks. Wind wave models are driven by more general weather models that predict the winds and pressures over the oceans, seas, and lakes.
Wind wave models are also an important part of examining the impact of shore protection and beach nourishment proposals. For many beach areas there is only patchy information about the wave climate, therefore estimating the effect of wind waves is important for managing littoral environments.
A wind-generated wave can be predicted based on two parameters: wind speed at 10 m above sea level and wind duration; the wind must blow over a long period of time for the sea to become fully developed. The significant wave height and peak frequency can then be predicted for a given fetch length.
Seismic signals
Ocean water waves generate seismic waves that are globally visible on seismographs. There are two principal constituents of the ocean wave-generated seismic microseism. The strongest of these is the secondary microseism which is created by ocean floor pressures generated by interfering ocean waves and has a spectrum that is generally between approximately 6–12 s periods, or at approximately half of the period of the responsible interfering waves. The theory for microseism generation by standing waves was provided by Michael Longuet-Higgins in 1950 after in 1941 Pierre Bernard suggested this relation with standing waves on the basis of observations. The weaker primary microseism, also globally visible, is generated by dynamic seafloor pressures of propagating waves above shallower (less than several hundred meters depth) regions of the global ocean. Microseisms were first reported in about 1900, and seismic records provide long-term proxy measurements of seasonal and climate-related large-scale wave intensity in Earth's oceans including those associated with anthropogenic global warming.
See also
References
Scientific
Other
External links
Current global map of peak wave periods
Current global map of significant wave heights
Coastal geography
Physical oceanography
Articles containing video clips
Oceanographical terminology
Surface waves | Wind wave | [
"Physics",
"Chemistry"
] | 4,663 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Surface waves",
"Waves",
"Physical oceanography",
"Fluid dynamics"
] |
711,201 | https://en.wikipedia.org/wiki/Snake%20oil%20%28cryptography%29 | In cryptography, snake oil is any cryptographic method or product considered to be bogus or fraudulent. The name derives from snake oil, one type of patent medicine widely available in 19th century United States.
Distinguishing secure cryptography from insecure cryptography can be difficult from the viewpoint of a user. Many cryptographers, such as Bruce Schneier and Phil Zimmermann, undertake to educate the public in how secure cryptography is done, as well as highlighting the misleading marketing of some cryptographic products.
The Snake Oil FAQ describes itself as "a compilation of common habits of snake oil vendors. It cannot be the sole method of rating a security product, since there can be exceptions to most of these rules. [...] But if you're looking at something that exhibits several warning signs, you're probably dealing with snake oil."
Some examples of snake oil cryptography techniques
This is not an exhaustive list of snake oil signs. A more thorough list is given in the references.
Secret system Some encryption systems will claim to rely on a secret algorithm, technique, or device; this is categorized as security through obscurity. Criticisms of this are twofold. First, a 19th century rule known as Kerckhoffs's principle, later formulated as Shannon's maxim, teaches that "the enemy knows the system" and the secrecy of a cryptosystem algorithm does not provide any advantage. Second, secret methods are not open to public peer review and cryptanalysis, so potential mistakes and insecurities can go unnoticed.
Technobabble Snake oil salespeople may use "technobabble" to sell their product since cryptography is a complicated subject.
"Unbreakable"Claims of a system or cryptographic method being "unbreakable" are always false (or true under some limited set of conditions), and are generally considered a sure sign of snake oil.
"Military grade" There is no accepted standard or criterion for "military grade" ciphers.
One-time pads One-time pads are a popular cryptographic method to invoke in advertising, because it is well known that one-time pads, when implemented correctly, are genuinely unbreakable. The problem comes in implementing one-time pads, which is rarely done correctly. Cryptographic systems that claim to be based on one-time pads are considered suspect, particularly if they do not describe how the one-time pad is implemented, or they describe a flawed implementation.
Unsubstantiated "bit" claims Cryptographic products are often accompanied with claims of using a high number of bits for encryption, apparently referring to the key length used. However key lengths are not directly comparable between symmetric and asymmetric systems. Furthermore, the details of implementation can render the system vulnerable. For example, in 2008 it was revealed that a number of hard drives sold with built-in "128-bit AES encryption" were actually using a simple and easily defeated "XOR" scheme. AES was only used to store the key, which was easy to recover without breaking AES.
References
External links
Beware of Snake Oil — by Phil Zimmermann
Google Search results for "The Doghouse" in Bruce Schneier's Crypto-Gram newsletters — the Doghouse section of the Crypto-Gram newsletter frequently describes various snake oil encryption products, commercial or otherwise.
Cryptography
Pejorative terms related to technology | Snake oil (cryptography) | [
"Mathematics",
"Engineering"
] | 697 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
711,288 | https://en.wikipedia.org/wiki/Viscoelasticity | In materials science and continuum mechanics, viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like water, resist both shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed.
Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material.
Background
In the nineteenth century, physicists such as James Clerk Maxwell, Ludwig Boltzmann, and Lord Kelvin researched and experimented with creep and recovery of glasses, metals, and rubbers. Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η. The inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value (i.e. for a dashpot).
Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material. In this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid. There is also an interesting case where the viscosity decreases with time even though the shear/strain rate remains constant. A material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the thermodynamic theory of polymer elasticity.
Some examples of viscoelastic materials are amorphous polymers, semicrystalline polymers, biopolymers, metals at very high temperatures, and bitumen materials. Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends on both the rate of the change of their length and the force applied.
A viscoelastic material has the following properties:
hysteresis is seen in the stress–strain curve
stress relaxation occurs: step constant strain causes decreasing stress
creep occurs: step constant stress causes increasing strain
its stiffness depends on the strain rate or the stress rate
Elastic versus viscoelastic behavior
Unlike purely elastic substances, a viscoelastic substance has an elastic component and a viscous component. The viscosity of a viscoelastic substance gives the substance a strain rate dependence on time. Purely elastic materials do not dissipate energy (heat) when a load is applied, then removed. However, a viscoelastic substance dissipates energy when a load is applied, then removed. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle.
Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change positions. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging in order to accommodate the stress, and as this occurs, it creates a back stress in the material. When the back stress is the same magnitude as the applied stress, the material no longer creeps. When the original stress is taken away, the accumulated back stresses will cause the polymer to return to its original form. The material creeps, which gives the prefix visco-, and the material fully recovers, which gives the suffix -elasticity.
Linear viscoelasticity and nonlinear viscoelasticity
Linear viscoelasticity is when the function is separable in both creep response and load. All linear viscoelastic models can be represented by a Volterra equation connecting stress and strain:
ε(t) = σ(t)/E_inst,creep + ∫₀ᵗ K(t − t′) σ̇(t′) dt′
or
σ(t) = ε(t)·E_inst,relax + ∫₀ᵗ F(t − t′) ε̇(t′) dt′
where
t is time
σ(t) is stress
ε(t) is strain
E_inst,creep and E_inst,relax are instantaneous elastic moduli for creep and relaxation
K(t) is the creep function
F(t) is the relaxation function
Linear viscoelasticity is usually applicable only for small deformations.
Nonlinear viscoelasticity is when the function is not separable. It usually happens when the deformations are large or if the material changes its properties under deformations. Nonlinear viscoelasticity also elucidates observed phenomena such as normal stresses, shear thinning, and extensional thickening in viscoelastic fluids.
An anelastic material is a special case of a viscoelastic material: an anelastic material will fully recover to its original state on the removal of load.
When distinguishing between elastic, viscous, and viscoelastic behavior, it is helpful to reference the time scale of the measurement relative to the relaxation times of the material being observed, known as the Deborah number (De), where:
De = λ/t
where
λ is the relaxation time of the material
t is time
Dynamic modulus
Viscoelasticity is studied using dynamic mechanical analysis, applying a small oscillatory stress and measuring the resulting strain.
Purely elastic materials have stress and strain in phase, so that the response of one caused by the other is immediate.
In purely viscous materials, strain lags stress by a 90 degree phase.
Viscoelastic materials exhibit behavior somewhere in the middle of these two types of material, exhibiting some lag in strain.
A complex dynamic modulus G can be used to represent the relations between the oscillating stress and strain:
G = G′ + iG″
where i² = −1; G′ is the storage modulus and G″ is the loss modulus:
G′ = (σ₀/ε₀) cos δ
G″ = (σ₀/ε₀) sin δ
where σ₀ and ε₀ are the amplitudes of stress and strain respectively, and δ is the phase shift between them.
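A minimal sketch of these relations (a hypothetical Python helper, not from any rheometer package) converts measured amplitudes and phase lag into storage and loss moduli:

```python
import numpy as np

def dynamic_moduli(stress_amp, strain_amp, delta):
    """Storage and loss moduli from an oscillatory test.

    stress_amp, strain_amp: amplitudes of the stress and strain signals
    delta: phase lag of strain behind stress, in radians
    """
    ratio = stress_amp / strain_amp
    g_storage = ratio * np.cos(delta)   # elastic (in-phase) part, G'
    g_loss = ratio * np.sin(delta)      # viscous (out-of-phase) part, G''
    return g_storage, g_loss

# delta = 0 -> purely elastic; delta = pi/2 -> purely viscous.
print(dynamic_moduli(1.0e5, 0.01, np.deg2rad(30.0)))
```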
Constitutive models of linear viscoelasticity
Viscoelastic materials, such as amorphous polymers, semicrystalline polymers, biopolymers and even the living tissue and cells, can be modeled in order to determine their stress and strain or force and displacement interactions as well as their temporal dependencies. These models, which include the Maxwell model, the Kelvin–Voigt model, the standard linear solid model, and the Burgers model, are used to predict a material's response under different loading conditions.
Viscoelastic behavior has elastic and viscous components modeled as linear combinations of springs and dashpots, respectively. Each model differs in the arrangement of these elements, and all of these viscoelastic models can be equivalently modeled as electrical circuits.
In an equivalent electrical circuit, stress is represented by current, and strain rate by voltage. The elastic modulus of a spring is analogous to the inverse of a circuit's inductance (it stores energy) and the viscosity of a dashpot to a circuit's resistance (it dissipates energy).
The elastic components, as previously mentioned, can be modeled as springs of elastic constant E, given the formula:
σ = E ε
where σ is the stress, E is the elastic modulus of the material, and ε is the strain that occurs under the given stress, similar to Hooke's law.
The viscous components can be modeled as dashpots such that the stress–strain rate relationship can be given as
σ = η · dε/dt
where σ is the stress, η is the viscosity of the material, and dε/dt is the time derivative of strain.
The relationship between stress and strain can be simplified for specific stress or strain rates. For high stress or strain rates/short time periods, the time derivative components of the stress–strain relationship dominate. In these conditions it can be approximated as a rigid rod capable of sustaining high loads without deforming. Hence, the dashpot can be considered to be a "short-circuit".
Conversely, for low stress states/longer time periods, the time derivative components are negligible and the dashpot can be effectively removed from the system – an "open" circuit. As a result, only the spring connected in parallel to the dashpot will contribute to the total strain in the system.
Maxwell model
The Maxwell model can be represented by a purely viscous damper and a purely elastic spring connected in series, as shown in the diagram. The model can be represented by the following equation:
dε_Total/dt = dε_D/dt + dε_S/dt = σ/η + (1/E) · dσ/dt
Under this model, if the material is put under a constant strain, the stresses gradually relax. When a material is put under a constant stress, the strain has two components. First, an elastic component occurs instantaneously, corresponding to the spring, and relaxes immediately upon release of the stress. The second is a viscous component that grows with time as long as the stress is applied. The Maxwell model predicts that stress decays exponentially with time, which is accurate for most polymers. One limitation of this model is that it does not predict creep accurately. The Maxwell model for creep or constant-stress conditions postulates that strain will increase linearly with time. However, polymers for the most part show the strain rate to be decreasing with time.
This model can be applied to soft solids: thermoplastic polymers in the vicinity of their melting temperature, fresh concrete (neglecting its aging), and numerous metals at a temperature close to their melting point.
The equation introduced here, however, lacks a consistent derivation from a more microscopic model and is not observer independent. The upper-convected Maxwell model is its sound formulation in terms of the Cauchy stress tensor and constitutes the simplest tensorial constitutive model for viscoelasticity.
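The scalar Maxwell responses described above have simple closed forms; the sketch below (illustrative parameter values) uses σ(t) = Eε₀·exp(−t/τ) with τ = η/E under constant strain, and the elastic jump plus linearly growing viscous strain under constant stress:

```python
import numpy as np

E, eta = 1.0e6, 1.0e8     # illustrative spring modulus (Pa) and viscosity (Pa*s)
tau = eta / E             # relaxation time of the Maxwell element, here 100 s

t = np.linspace(0.0, 5.0 * tau, 500)

# Constant strain eps0: the stress relaxes exponentially, as the model predicts.
eps0 = 0.01
stress = E * eps0 * np.exp(-t / tau)

# Constant stress sigma0: elastic jump plus linearly growing viscous strain,
# which is exactly the creep limitation discussed above.
sigma0 = 1.0e4
strain = sigma0 / E + sigma0 * t / eta

print(stress[0], stress[-1])   # stress decays by a factor exp(-5) over 5*tau
```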
Kelvin–Voigt model
The Kelvin–Voigt model, also known as the Voigt model, consists of a Newtonian damper and Hookean elastic spring connected in parallel, as shown in the picture. It is used to explain the creep behaviour of polymers.
The constitutive relation is expressed as a linear first-order differential equation:
σ(t) = E ε(t) + η · dε(t)/dt
This model represents a solid undergoing reversible, viscoelastic strain. Upon application of a constant stress, the material deforms at a decreasing rate, asymptotically approaching the steady-state strain. When the stress is released, the material gradually relaxes to its undeformed state. At constant stress (creep), the model is quite realistic as it predicts strain to tend to σ/E as time continues to infinity. Similar to the Maxwell model, the Kelvin–Voigt model also has limitations. The model is extremely good with modelling creep in materials, but with regards to relaxation the model is much less accurate.
This model can be applied to organic polymers, rubber, and wood when the load is not too high.
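A minimal sketch of the Kelvin–Voigt responses (illustrative parameters; the closed forms ε(t) = (σ₀/E)(1 − exp(−t/τ)) for creep and an exponential decay for recovery follow directly from the constitutive relation above):

```python
import numpy as np

E, eta = 2.0e6, 1.0e8     # illustrative parameters (Pa, Pa*s)
tau = eta / E             # retardation time, here 50 s

t = np.linspace(0.0, 5.0 * tau, 500)
sigma0 = 1.0e4

# Creep under constant stress: strain rises at a decreasing rate and
# asymptotically approaches sigma0/E, as the model predicts.
strain_creep = (sigma0 / E) * (1.0 - np.exp(-t / tau))

# Recovery after removing the stress: strain gradually decays toward zero.
strain_recovery = (sigma0 / E) * np.exp(-t / tau)
```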
Standard linear solid model
The standard linear solid model, also known as the Zener model, consists of two springs and a dashpot. It is the simplest model that describes both the creep and stress relaxation behaviors of a viscoelastic material properly. For this model, the governing constitutive relation is:
dε/dt = (E₂/η) · [ (η/E₂)·dσ/dt + σ − E₁ε ] / (E₁ + E₂)
Under a constant stress, the modeled material will instantaneously deform to some strain, which is the instantaneous elastic portion of the strain. After that it will continue to deform and asymptotically approach a steady-state strain, which is the retarded elastic portion of the strain. Although the standard linear solid model is more accurate than the Maxwell and Kelvin–Voigt models in predicting material responses, mathematically it returns inaccurate results for strain under specific loading conditions.
Jeffreys model
The Jeffreys model, like the Zener model, is a three-element model. It consists of two dashpots and a spring.
It was proposed in 1929 by Harold Jeffreys to study Earth's mantle.
Burgers model
The Burgers model consists of either two Maxwell components in parallel or a Kelvin–Voigt component, a spring and a dashpot in series. For this model, the governing constitutive relations are:
This model incorporates viscous flow into the standard linear solid model, giving a linearly increasing asymptote for strain under fixed loading conditions.
Generalized Maxwell model
The generalized Maxwell model, also known as the Wiechert model, is the most general form of the linear model for viscoelasticity. It takes into account that the relaxation does not occur at a single time, but at a distribution of times. Because molecular segments of different lengths contribute to the relaxation, with shorter segments contributing less than longer ones, there is a varying time distribution. The Wiechert model shows this by having as many spring–dashpot Maxwell elements as necessary to accurately represent the distribution. The figure on the right shows the generalised Wiechert model.
Applications: metals and alloys at temperatures lower than one quarter of their absolute melting temperature (expressed in K).
Constitutive models for nonlinear viscoelasticity
Non-linear viscoelastic constitutive equations are needed to quantitatively account for phenomena in fluids like differences in normal stresses, shear thinning, and extensional thickening. Necessarily, the history experienced by the material is needed to account for time-dependent behavior, and is typically included in models as a history kernel K.
Second-order fluid
The second-order fluid is typically considered the simplest nonlinear viscoelastic model, and typically applies in a narrow region of material behavior, at high strain amplitudes and Deborah numbers, between Newtonian fluids and other more complicated nonlinear viscoelastic fluids. The second-order fluid constitutive equation is given by:
where:
is the identity tensor
is the deformation tensor
denote viscosity, and first and second normal stress coefficients, respectively
denotes the upper-convected derivative of the deformation tensor where and is the material time derivative of the deformation tensor.
Upper-convected Maxwell model
The upper-convected Maxwell model incorporates nonlinear time behavior into the viscoelastic Maxwell model, given by:
T + λ T^∇ = 2η₀ D
where T denotes the stress tensor, λ the relaxation time, T^∇ the upper-convected derivative of T, η₀ the viscosity, and D the deformation rate tensor.
Oldroyd-B model
The Oldroyd-B model is an extension of the upper-convected Maxwell model and is interpreted as a solvent filled with elastic bead-and-spring dumbbells.
The model is named after its creator James G. Oldroyd.
The model can be written as:
T + λ₁ T^∇ = 2η₀ (D + λ₂ D^∇)
where:
T is the stress tensor;
λ₁ is the relaxation time;
λ₂ is the retardation time, λ₂ = (η_s/η₀)·λ₁;
T^∇ is the upper convected time derivative of the stress tensor (and D^∇ that of the deformation rate tensor);
v is the fluid velocity;
η₀ is the total viscosity composed of solvent and polymer components, η₀ = η_s + η_p;
D is the deformation rate tensor or rate of strain tensor, D = ½(∇v + (∇v)ᵀ).
Whilst the model gives good approximations of viscoelastic fluids in shear flow, it has an unphysical singularity in extensional flow, where the dumbbells are infinitely stretched. This is, however, specific to idealised flow; in the case of a cross-slot geometry the extensional flow is not ideal, so the stress, although singular, remains integrable, being infinite only in a correspondingly infinitesimally small region.
If the solvent viscosity is zero, the Oldroyd-B becomes the upper convected Maxwell model.
Wagner model
The Wagner model might be considered a simplified practical form of the Bernstein–Kearsley–Zapas model. It was developed by German rheologist Manfred Wagner.
For isothermal conditions the model can be written as:
σ(t) = −p I + ∫₋∞ᵗ M(t − t′) h(I₁, I₂) B(t′) dt′
where:
σ(t) is the Cauchy stress tensor as function of time t,
p is the pressure,
I is the unity tensor,
M is the memory function, usually expressed as a sum of exponential terms for each mode of relaxation: M(x) = Σₖ (gₖ/θₖ)·exp(−x/θₖ), where for each mode of relaxation, gₖ is the relaxation modulus and θₖ is the relaxation time;
h(I₁, I₂) is the strain damping function that depends upon the first and second invariants of the Finger tensor B.
The strain damping function is usually written as:
If the value of the strain damping function is equal to one, then the deformation is small; if it approaches zero, then the deformations are large.
Prony series
In a one-dimensional relaxation test, the material is subjected to a sudden strain that is kept constant over the duration of the test, and the stress is measured over time. The initial stress is due to the elastic response of the material. Then, the stress relaxes over time due to the viscous effects in the material. Typically, either a tensile, compressive, bulk compression, or shear strain is applied. The resulting stress vs. time data can be fitted with a number of equations, called models. Only the notation changes depending on the type of strain applied: tensile-compressive relaxation is denoted E(t), shear is denoted G(t), bulk is denoted K(t). The Prony series for the shear relaxation is
G(t) = G_∞ + Σᵢ Gᵢ exp(−t/τᵢ)
where G_∞ is the long term modulus once the material is totally relaxed and τᵢ are the relaxation times; the higher their values, the longer it takes for the stress to relax. The data is fitted with the equation by using a minimization algorithm that adjusts the parameters (G_∞, Gᵢ, τᵢ) to minimize the error between the predicted and data values.
An alternative form is obtained noting that the elastic modulus is related to the long term modulus by
G(t=0) = G₀ = G_∞ + Σᵢ Gᵢ
Therefore,
G(t) = G₀ − Σᵢ Gᵢ [1 − exp(−t/τᵢ)]
This form is convenient when the elastic shear modulus is obtained from data independent from the relaxation data, and/or for computer implementation, when it is desired to specify the elastic properties separately from the viscous properties, as in Simulia (2010).
A creep experiment is usually easier to perform than a relaxation one, so most data is available as (creep) compliance vs. time. Unfortunately, there is no known closed form for the (creep) compliance in terms of the coefficients of the Prony series. So, if one has creep data, it is not easy to get the coefficients of the (relaxation) Prony series, which are needed, for example, in finite element software. An expedient way to obtain these coefficients is the following. First, fit the creep data with a model that has closed-form solutions in both compliance and relaxation; for example the Maxwell-Kelvin model
(eq. 7.18-7.19) in Barbero (2007) or the Standard Solid Model (eq. 7.20-7.21) in Barbero (2007) (section 7.1.3). Once the parameters of the creep model are known, produce relaxation pseudo-data with the conjugate relaxation model for the same
times of the original data. Finally, fit the pseudo data with the Prony series.
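As a minimal sketch of the fitting step (Python with SciPy; the two-term series, synthetic data, and initial guesses are all illustrative assumptions), the Prony coefficients can be fitted to relaxation data with a standard least-squares routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def prony(t, g_inf, g1, g2, tau1, tau2):
    """Two-term Prony series G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

# Synthetic relaxation "data" with 1% noise (illustrative, not measurements).
t = np.logspace(-2, 3, 60)
noise = 1.0 + 0.01 * np.random.default_rng(1).standard_normal(t.size)
g_data = prony(t, 1.0, 3.0, 2.0, 0.5, 50.0) * noise

p0 = [1.0, 1.0, 1.0, 0.1, 10.0]              # initial guesses for the fit
params, _ = curve_fit(prony, t, g_data, p0=p0, bounds=(0, np.inf))
print(params)                                 # fitted G_inf, G_1, G_2, tau_1, tau_2
```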
Effect of temperature
The secondary bonds of a polymer constantly break and reform due to thermal motion. Application of a stress favors some conformations over others, so the molecules of the polymer will gradually "flow" into the favored conformations over time. Because thermal motion is one factor contributing to the deformation of polymers, viscoelastic properties change with increasing or decreasing temperature. In most cases, the creep modulus, defined as the ratio of applied stress to the time-dependent strain, decreases with increasing temperature. Generally speaking, an increase in temperature correlates to a logarithmic decrease in the time required to impart equal strain under a constant stress. In other words, it takes less work to stretch a viscoelastic material an equal distance at a higher temperature than it does at a lower temperature.
The more detailed effect of temperature on the viscoelastic behavior of polymers can be plotted as shown.
There are mainly five regions (sometimes counted as four, with regions IV and V combined) in typical polymers.
Region I: Glassy state of the polymer is presented in this region. The temperature in this region for a given polymer is too low to endow molecular motion. Hence the motion of the molecules is frozen in this area. The mechanical property is hard and brittle in this region.
Region II: Polymer passes glass transition temperature in this region. Beyond Tg, the thermal energy provided by the environment is enough to unfreeze the motion of molecules. The molecules are allowed to have local motion in this region hence leading to a sharp drop in stiffness compared to Region I.
Region III: Rubbery plateau region. Materials in this region exhibit long-range elasticity driven by entropy. For instance, a rubber band is disordered in the initial state of this region; stretching the rubber band aligns its structure into a more ordered state. Therefore, when released, the rubber band will spontaneously seek a higher-entropy state and hence return to its initial state. This is called entropy-driven elastic shape recovery.
Region IV: The behavior in the rubbery flow region is highly time-dependent. For polymers in this region, a time-temperature superposition is needed to obtain more detailed information before cautiously deciding how to use the materials. For instance, if the material is used for short-interaction-time purposes, it can present as a 'hard' material, while for long-interaction-time purposes it acts as a 'soft' material.
Region V: The viscous polymer flows easily in this region, showing another significant drop in stiffness.
Extreme cold temperatures can cause viscoelastic materials to change to the glass phase and become brittle. For example, exposure of pressure sensitive adhesives to extreme cold (dry ice, freeze spray, etc.) causes them to lose their tack, resulting in debonding.
Viscoelastic creep
When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.
At time t₀, a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails, if it is a viscoelastic liquid. If, on the other hand, it is a viscoelastic solid, it may or may not fail depending on the applied stress versus the material's ultimate resistance. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time t₁, after which the strain immediately decreases (discontinuity) then gradually decreases at times t > t₁ to a residual strain.
Viscoelastic creep data can be presented by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. Below its critical stress, the viscoelastic creep modulus is independent of stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value.
Viscoelastic creep is important when considering long-term structural design. Given loading and temperature conditions, designers can choose materials that best suit component lifetimes.
Measurement
Shear rheometry
Shear rheometers are based on the idea of putting the material to be measured between two plates, one or both of which move in a shear direction to induce stresses and strains in the material. The testing can be done at constant strain rate, stress, or in an oscillatory fashion (a form of dynamic mechanical analysis). Shear rheometers are typically limited by edge effects where the material may leak out from between the two plates and slipping at the material/plate interface.
Extensional rheometry
Extensional rheometers, also known as extensiometers, measure viscoelastic properties by pulling a viscoelastic fluid, typically uniaxially. Because this typically makes use of capillary forces and confines the fluid to a narrow geometry, the technique is often limited to fluids with relatively low viscosity like dilute polymer solutions or some molten polymers. Extensional rheometers are also limited by edge effects at the ends of the extensiometer and pressure differences between inside and outside the capillary.
Despite the apparent limitations mentioned above, extensional rheometry can also be performed on high viscosity fluids. Although this requires the use of different instruments, these techniques and apparatuses allow for the study of the extensional viscoelastic properties of materials such as polymer melts. Three of the most common extensional rheometry instruments developed within the last 50 years are the Meissner-type rheometer, the filament stretching rheometer (FiSER), and the Sentmanat Extensional Rheometer (SER).
The Meissner-type rheometer, developed by Meissner and Hostettler in 1996, uses two sets of counter-rotating rollers to strain a sample uniaxially. This method uses a constant sample length throughout the experiment, and supports the sample in between the rollers via an air cushion to eliminate sample sagging effects. It does suffer from a few issues – for one, the fluid may slip at the belts which leads to lower strain rates than one would expect. Additionally, this equipment is challenging to operate and costly to purchase and maintain.
The FiSER rheometer simply contains fluid in between two plates. During an experiment, the top plate is held steady and a force is applied to the bottom plate, moving it away from the top one. The strain rate is measured by the rate of change of the sample radius at its middle. It is calculated using the following equation:
ε̇ = −(2/R_mid) · (dR_mid/dt)
where R_mid is the mid-radius value and ε̇ is the strain rate. The viscosity of the sample is then calculated using the following equation:
η = F / (π · R_mid² · ε̇)
where η is the sample viscosity, and F is the force applied to the sample to pull it apart.
Much like the Meissner-type rheometer, the SER rheometer uses a set of two rollers to strain a sample at a given rate. It then calculates the sample viscosity using the well-known equation:
σ = η ε̇
where σ is the stress, η is the viscosity and ε̇ is the strain rate. The stress in this case is determined via torque transducers present in the instrument. The small size of this instrument makes it easy to use and eliminates sample sagging between the rollers. A schematic detailing the operation of the SER extensional rheometer can be found on the right.
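A minimal sketch (illustrative data; it assumes the filament-stretching relations quoted above) of turning radius-versus-time measurements into an extensional viscosity:

```python
import numpy as np

def fiser_viscosity(t, radius, force):
    """Extensional viscosity from filament-stretching (FiSER-style) data.

    t, radius, force: arrays of time (s), mid-filament radius (m), and
    tensile force (N). Uses the relations quoted above:
      strain rate = -(2 / R) * dR/dt
      viscosity   = F / (pi * R**2 * strain_rate)
    """
    dR_dt = np.gradient(radius, t)
    strain_rate = -2.0 * dR_dt / radius
    return force / (np.pi * radius**2 * strain_rate)

# Illustrative data: exponentially necking filament at strain rate 1/s,
# generated so that the recovered viscosity should be ~3 Pa*s.
t = np.linspace(0.0, 2.0, 100)
radius = 1.0e-3 * np.exp(-0.5 * t)       # R(t) = R0 * exp(-rate * t / 2)
force = 3.0 * np.pi * radius**2 * 1.0    # F = eta_E * pi * R^2 * rate
print(fiser_viscosity(t, radius, force)[5])   # ~3.0
```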
Other methods
Though there are many instruments that test the mechanical and viscoelastic response of materials, broadband viscoelastic spectroscopy (BVS) and resonant ultrasound spectroscopy (RUS) are more commonly used to test viscoelastic behavior because they can be used above and below ambient temperatures and are more specific to testing viscoelasticity. These two instruments employ a damping mechanism at various frequencies and time ranges with no appeal to time–temperature superposition. Using BVS and RUS to study the mechanical properties of materials is important to understanding how a material exhibiting viscoelasticity will perform.
See also
Bingham plastic
Biomaterial
Biomechanics
Blood viscoelasticity
Constant viscosity elastic fluids
Deformation index
Glass transition
Pressure-sensitive adhesive
Rheology
Rubber elasticity
Silly Putty
Viscoelasticity of bone
Viscoplasticity
Visco-elastic jets
References
Silbey and Alberty (2001): Physical Chemistry, 857. John Wiley & Sons, Inc.
Alan S. Wineman and K. R. Rajagopal (2000): Mechanical Response of Polymers: An Introduction
Allen and Thomas (1999): The Structure of Materials, 51.
Crandal et al. (1999): An Introduction to the Mechanics of Solids 348
J. Lemaitre and J. L. Chaboche (1994) Mechanics of solid materials
Yu. Dimitrienko (2011) Nonlinear continuum mechanics and Large Inelastic Deformations, Springer, 772p
Materials science
Elasticity (physics)
Non-Newtonian fluids
Continuum mechanics
Rubber properties
Hysteresis

https://en.wikipedia.org/wiki/Enalapril

Enalapril, sold under the brand name Vasotec among others, is an ACE inhibitor medication used to treat high blood pressure, diabetic kidney disease, and heart failure. For heart failure, it is generally used with a diuretic, such as furosemide. It is given by mouth or by injection into a vein. Onset of effects is typically within an hour when taken by mouth, and they last for up to a day.
Common side effects include headache, tiredness, feeling lightheaded with standing, and cough. Serious side effects include angioedema and low blood pressure. Use during pregnancy is believed to result in harm to the baby. It is in the angiotensin-converting-enzyme (ACE) inhibitor family of medications.
Enalapril was patented in 1978, and came into medical use in 1984. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 141st most commonly prescribed medication in the United States, with more than 4 million prescriptions. It is available as a generic medicine.
Medical uses
Enalapril is used to treat hypertension, symptomatic heart failure, and asymptomatic left ventricular dysfunction. ACE-inhibitors (including enalapril) have demonstrated the ability to reduce the progression and worsening of existing chronic kidney disease in the presence of proteinuria/microalbuminuria (protein in the urine, a biomarker for chronic kidney disease). This renal protective effect is not seen in the absence of proteinuria/microalbuminuria, including in diabetic populations. The benefit has been particularly demonstrated in patients with hypertension and/or diabetes, and is likely to be seen in other populations (although further studies and subgroup analyses of existing studies are needed). It is widely used in chronic kidney failure. Furthermore, enalapril is an emerging treatment for psychogenic polydipsia. A double-blind, placebo-controlled trial showed that when used for this purpose, enalapril led to decreased water consumption (determined by urine output and osmolality) in 60% of patients.
Side effects
The most common side effects of enalapril include increased serum creatinine (20%), dizziness (2–8%), low blood pressure (1–7%), syncope (2%), and dry cough (1–2%). The most serious common adverse event is angioedema (swelling) (0.68%), which often affects the face and lips, endangering the patient's airway. Angioedema can occur at any point during treatment with enalapril, but is most common after the first few doses. Angioedema and fatality therefrom are reportedly higher among black people. Agranulocytosis has been observed with enalapril.
Some evidence suggests enalapril will cause injury and death to a developing fetus. In pregnancy, enalapril may result in damage to the fetus's kidneys and resulting oligohydramnios (not enough amniotic fluid). Enalapril is secreted in breast milk and is not recommended for use while breastfeeding.
Mechanism of action
Normally, angiotensin I is converted to angiotensin II by an angiotensin-converting enzyme (ACE). Angiotensin II constricts blood vessels, increasing blood pressure. Enalaprilat, the active metabolite of enalapril, inhibits ACE. Inhibition of ACE decreases levels of angiotensin II, leading to less vasoconstriction and decreased blood pressure.
Pharmacokinetics
Pharmacokinetic data of enalapril:
Onset of action: about 1 hour
Peak effect: 4–6 hours
Duration: 12–24 hours
Absorption: ~60%
Metabolism: prodrug, undergoes biotransformation to enalaprilat
Structure activity relationship
Enalapril has an L-proline moiety as part of the molecule, which is responsible for the oral bioavailability of the drug. It is a prodrug, which means that it exerts its function after being metabolized. The –OCH2CH3 ester group is cleaved during metabolism, leaving a carboxylate at that carbon, which then interacts with the Zn2+ site of the ACE enzyme. This structural feature, and the metabolic activation that must occur before the drug can inhibit the enzyme, explain why enalapril has a greater duration of action than captopril, a similar drug used for the same indication. Duration of effect is dose-related; at recommended doses, antihypertensive and haemodynamic effects have been shown to be maintained for at least 24 hours. Enalapril has a slower onset of action than captopril but a greater duration of action. However, unlike captopril, enalapril does not have a thiol moiety.
History
Squibb developed the first ACE inhibitor, captopril, but it had adverse effects such as a metallic taste (which, as it turned out, was due to the sulfhydryl group). Merck developed enalapril as a competing product.
Enalaprilat was developed first, partly to overcome these limitations of captopril. The sulfhydryl moiety was replaced by a carboxylate moiety, but additional modifications were required in its structure-based design to achieve a potency similar to captopril. Enalaprilat, however, had a problem of its own in that it had poor oral availability. This was overcome by the Merck researchers through the esterification of enalaprilat with ethanol to produce enalapril.
Merck introduced enalapril to market in 1981; it became Merck's first billion dollar-selling drug in 1988. The patent expired in 2000, opening the way for generics.
Society and culture
Legal status
In September 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a pediatric use marketing authorization for the medicinal product Aqumeldi, intended for the treatment of heart failure in children from birth to less than 18 years of age. The applicant for this medicinal product is Proveca Pharma Limited. Aqumeldi was approved for medical use in the European Union in November 2023.
References
ACE inhibitors
Carboxylate esters
Enantiopure drugs
Prodrugs
Pyrrolidines
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Ethyl esters
Carboxylic acids

https://en.wikipedia.org/wiki/Reissner%E2%80%93Nordstr%C3%B6m%20metric

In physics and astronomy, the Reissner–Nordström metric is a static solution to the Einstein–Maxwell field equations, which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body of mass M. The analogous solution for a charged, rotating body is given by the Kerr–Newman metric.
The metric was discovered between 1916 and 1921 by Hans Reissner, Hermann Weyl, Gunnar Nordström and George Barker Jeffery independently.
Metric
In spherical coordinates $(t, r, \theta, \varphi)$, the Reissner–Nordström metric (i.e. the line element) is

$$ds^2 = c^2\,d\tau^2 = \left(1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2}\right)c^2\,dt^2 - \left(1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2}\right)^{-1}dr^2 - r^2\,d\theta^2 - r^2\sin^2\theta\,d\varphi^2,$$

where
$c$ is the speed of light,
$\tau$ is the proper time,
$t$ is the time coordinate (measured by a stationary clock at infinity),
$r$ is the radial coordinate,
$(\theta, \varphi)$ are the spherical angles,
$r_s$ is the Schwarzschild radius of the body, given by $r_s = \dfrac{2GM}{c^2}$,
$r_Q$ is a characteristic length scale, given by $r_Q^2 = \dfrac{Q^2 G}{4\pi\varepsilon_0 c^4}$,
$\varepsilon_0$ is the electric constant.
The total mass of the central body and its irreducible mass are related by

$$M = M_{\rm irr} + \frac{Q^2}{16\pi\varepsilon_0 G M_{\rm irr}}.$$

The difference between $M$ and $M_{\rm irr}$ is due to the equivalence of mass and energy, which makes the electric field energy also contribute to the total mass.
In the limit that the charge $Q$ (or equivalently, the length scale $r_Q$) goes to zero, one recovers the Schwarzschild metric. The classical Newtonian theory of gravity may then be recovered in the limit as the ratio $r_s/r$ goes to zero. In the limit that both $r_Q/r$ and $r_s/r$ go to zero, the metric becomes the Minkowski metric for special relativity.
In practice, the ratio $r_s/r$ is often extremely small. For example, the Schwarzschild radius of the Earth is roughly 9 mm (3/8 inch), whereas a satellite in a geosynchronous orbit has an orbital radius that is roughly four billion times larger, at 42,164 km (26,200 mi). Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The ratio only becomes large close to black holes and other ultra-dense objects such as neutron stars.
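As a quick numerical check of these magnitudes, the short sketch below evaluates $r_s$ for the Earth and the ratio $r_s/r$ at geosynchronous orbit; the constant values are standard approximations and the variable names are chosen here for illustration:

```python
# Order-of-magnitude check of r_s / r for the Earth, as discussed above.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

r_s = 2 * G * M_earth / c**2
print(f"Schwarzschild radius of Earth: {r_s * 1000:.2f} mm")  # ~8.9 mm

r_geo = 4.2164e7    # geosynchronous orbital radius, m
print(f"r_s / r at geosynchronous orbit: {r_s / r_geo:.2e}")  # ~2e-10
```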
Charged black holes
Although charged black holes with rQ ≪ rs are similar to the Schwarzschild black hole, they have two horizons: the event horizon and an internal Cauchy horizon. As with the Schwarzschild metric, the event horizons for the spacetime are located where the metric component $g_{rr}$ diverges; that is, where

$$1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2} = 0.$$

This equation has two solutions:

$$r_\pm = \frac{1}{2}\left(r_s \pm \sqrt{r_s^2 - 4 r_Q^2}\right).$$
These concentric event horizons become degenerate for 2rQ = rs, which corresponds to an extremal black hole. Black holes with 2rQ > rs cannot exist in nature because if the charge is greater than the mass there can be no physical event horizon (the term under the square root becomes negative). Objects with a charge greater than their mass can exist in nature, but they can not collapse down to a black hole, and if they could, they would display a naked singularity. Theories with supersymmetry usually guarantee that such "superextremal" black holes cannot exist.
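Since the horizon condition is just a quadratic in r, it is straightforward to evaluate numerically; the sketch below (an illustrative helper, not from the source, in units where the radii are dimensionless) returns both radii and signals the superextremal case:

```python
import math

def horizon_radii(r_s: float, r_q: float):
    """Solve 1 - r_s/r + r_q**2/r**2 = 0 for the two horizon radii.

    Returns (r_plus, r_minus), or None if 2*r_q > r_s, in which case
    there are no real solutions and hence no physical event horizon.
    """
    disc = r_s**2 - 4 * r_q**2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return (0.5 * (r_s + root), 0.5 * (r_s - root))

print(horizon_radii(1.0, 0.3))   # two distinct horizons
print(horizon_radii(1.0, 0.5))   # extremal: both horizons coincide
print(horizon_radii(1.0, 0.6))   # None: naked singularity regime
```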
The electromagnetic potential is

$$A_\alpha = \left(\frac{Q}{4\pi\varepsilon_0 r}, 0, 0, 0\right).$$
If magnetic monopoles are included in the theory, then a generalization to include magnetic charge P is obtained by replacing Q2 by Q2 + P2 in the metric and including the term P cos θ dφ in the electromagnetic potential.
Gravitational time dilation
The gravitational time dilation in the vicinity of the central body is given by

$$\gamma = \sqrt{|g^{tt}|} = \left(1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2}\right)^{-1/2},$$

which relates to the local radial escape velocity of a neutral particle

$$v_{\rm esc} = \frac{\sqrt{\gamma^2 - 1}}{\gamma}\,c.$$
Christoffel symbols
The Christoffel symbols

$$\Gamma^{\alpha}{}_{\beta\gamma} = \frac{1}{2}\, g^{\alpha\delta}\left(\frac{\partial g_{\delta\beta}}{\partial x^{\gamma}} + \frac{\partial g_{\delta\gamma}}{\partial x^{\beta}} - \frac{\partial g_{\beta\gamma}}{\partial x^{\delta}}\right),$$

with the indices $\alpha, \beta, \gamma, \delta \in \{t, r, \theta, \varphi\}$, give the nonvanishing expressions.
Given the Christoffel symbols, one can compute the geodesics of a test-particle.
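The full list of nonvanishing components is lengthy; one way to generate and check them is symbolically. The following sympy sketch (a minimal illustration in geometrized units G = c = 1, with all names chosen here rather than taken from the source) computes the Christoffel symbols directly from the metric:

```python
import sympy as sp

# Coordinates and the metric function of the Reissner-Nordstrom line
# element (geometrized units, so f = 1 - r_s/r + r_q**2/r**2).
t, r, th, ph, r_s, r_q = sp.symbols('t r theta phi r_s r_q', positive=True)
f = 1 - r_s / r + r_q**2 / r**2

x = [t, r, th, ph]
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
g_inv = g.inv()
n = 4

# Christoffel symbols of the second kind:
# Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(g_inv[a, d] * (sp.diff(g[d, c], x[b]) +
                                          sp.diff(g[d, b], x[c]) -
                                          sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

for a in range(n):
    for b in range(n):
        for c in range(b, n):  # symmetric in the lower indices
            if Gamma[a][b][c] != 0:
                print(f"Gamma^{x[a]}_{{{x[b]}{x[c]}}} =", Gamma[a][b][c])
```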
Tetrad form
Instead of working in the holonomic basis, one can perform efficient calculations with a tetrad. Let $e^I$ be a set of one-forms with internal Minkowski index $I$, such that $\eta_{IJ}\, e^I_\mu e^J_\nu = g_{\mu\nu}$. The Reissner metric can be described by the tetrad

$$e^0 = \sqrt{f}\,dt, \qquad e^1 = \frac{dr}{\sqrt{f}}, \qquad e^2 = r\,d\theta, \qquad e^3 = r\sin\theta\,d\varphi,$$

where $f(r) = 1 - \frac{r_s}{r} + \frac{r_Q^2}{r^2}$. The parallel transport of the tetrad is captured by the connection one-forms $\omega^I{}_J$. These have only 24 independent components compared to the 40 components of $\Gamma^\lambda{}_{\mu\nu}$. The connections can be solved for by inspection from Cartan's equation $de^I = e^J \wedge \omega^I{}_J$, where the left hand side is the exterior derivative of the tetrad, and the right hand side is a wedge product.
The Riemann tensor can be constructed as a collection of two-forms by the second Cartan equation

$$R^I{}_J = d\omega^I{}_J + \omega^I{}_K \wedge \omega^K{}_J,$$

which again makes use of the exterior derivative and wedge product. This approach is significantly faster than the traditional computation with $\Gamma^\lambda{}_{\mu\nu}$; note that there are only four nonzero $\omega^I{}_J$ compared with nine nonzero components of $\Gamma^\lambda{}_{\mu\nu}$.
Equations of motion
Because of the spherical symmetry of the metric, the coordinate system can always be aligned in a way that the motion of a test-particle is confined to a plane, so for brevity and without restriction of generality we use θ instead of φ. In dimensionless natural units of G = M = c = K = 1 the motion of an electrically charged particle with the charge q is given by
which yields
All total derivatives are with respect to proper time $\tau$.
Constants of the motion are provided by solutions to the partial differential equation
after substitution of the second derivatives given above. The metric itself is a solution when written as a differential equation
The separable equation
immediately yields the constant relativistic specific angular momentum
a third constant obtained from
is the specific energy (energy per unit rest mass)
Substituting and into yields the radial equation
Multiplying under the integral sign by yields the orbital equation
The total time dilation between the test-particle and an observer at infinity is
The first derivatives and the contravariant components of the local 3-velocity are related by
which gives the initial conditions
The specific orbital energy
and the specific relative angular momentum
of the test-particle are conserved quantities of motion. and are the radial and transverse components of the local velocity-vector. The local velocity is therefore
Alternative formulation of metric
The metric can be expressed in Kerr–Schild form like this:

$$g_{\mu\nu} = \eta_{\mu\nu} + f\,k_\mu k_\nu,$$
$$f = \frac{r_s}{r} - \frac{r_Q^2}{r^2},$$
$$k_\mu = \left(1, \frac{x}{r}, \frac{y}{r}, \frac{z}{r}\right).$$

Notice that k is a unit vector. Here M is the constant mass of the object, Q is the constant charge of the object, and η is the Minkowski tensor.
See also
Black hole electron
Notes
References
External links
Spacetime diagrams including Finkelstein diagram and Penrose diagram, by Andrew J. S. Hamilton
"Particle Moving Around Two Extreme Black Holes" by Enrique Zeleny, The Wolfram Demonstrations Project.
Exact solutions in general relativity
Black holes
Metric tensors

https://en.wikipedia.org/wiki/Cauchy%20horizon

In physics, a Cauchy horizon is a light-like boundary of the domain of validity of a Cauchy problem (a particular boundary value problem of the theory of partial differential equations). One side of the horizon contains closed space-like geodesics and the other side contains closed time-like geodesics. The concept is named after Augustin-Louis Cauchy.
Under the averaged weak energy condition (AWEC), Cauchy horizons are inherently unstable. However, cases of AWEC violation, such as the Casimir effect caused by periodic boundary conditions, do exist, and since the region of spacetime inside the Cauchy horizon has closed timelike curves it is subject to periodic boundary conditions. If the spacetime inside the Cauchy horizon violates AWEC, then the horizon becomes stable and frequency boosting effects would be canceled out by the tendency of the spacetime to act as a divergent lens. Were this conjecture to be shown empirically true, it would provide a counter-example to the strong cosmic censorship conjecture.
In 2018, it was shown that the spacetime behind the Cauchy horizon of a charged, rotating black hole exists, but is not smooth, so the strong cosmic censorship conjecture is false.
The simplest example is the internal horizon of a Reissner–Nordström black hole.
In popular media
In the 2020 film Palm Springs, the character Sarah mentions the Cauchy horizon as she formulates a plan to escape a time loop.
In the pilot episode of the 2021 Amazon original series Solos, the character Leah solves time travel with "the Cauchy horizon", which is central to the episode.
Related
Augustin-Louis Cauchy
Event horizon
References
External links
On Crossing the Cauchy Horizon
General relativity

https://en.wikipedia.org/wiki/Proper%20length

Proper length or rest length is the length of an object in the object's rest frame.
The measurement of lengths is more complicated in the theory of relativity than in classical mechanics. In classical mechanics, lengths are measured based on the assumption that the locations of all points involved are measured simultaneously. But in the theory of relativity, the notion of simultaneity is dependent on the observer.
A different term, proper distance, provides an invariant measure whose value is the same for all observers.
Proper distance is analogous to proper time. The difference is that the proper distance is defined between two spacelike-separated events (or along a spacelike path), while the proper time is defined between two timelike-separated events (or along a timelike path).
Proper length or rest length
The proper length or rest length of an object is the length of the object measured by an observer who is at rest relative to it, by applying standard measuring rods on the object. The measurement of the object's endpoints doesn't have to be simultaneous, since the endpoints are constantly at rest at the same positions in the object's rest frame, so it is independent of Δt. This length is thus given by:

$$L_0 = \Delta x.$$
However, in relatively moving frames the object's endpoints have to be measured simultaneously, since they are constantly changing their position. The resulting length is shorter than the rest length, and is given by the formula for length contraction (with γ being the Lorentz factor):

$$L = \frac{L_0}{\gamma} = L_0\sqrt{1 - \frac{v^2}{c^2}}.$$
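A concrete worked example may help here; the numbers are illustrative, not from the source. A rod of rest length $L_0 = 1\,\mathrm{m}$ moving past an observer at $v = 0.8c$ has

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.64}} = \frac{1}{0.6} \approx 1.67,$$

so the length measured in the observer's frame is $L = L_0/\gamma = 0.6\,\mathrm{m}$.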
In comparison, the invariant proper distance between two arbitrary events happening at the endpoints of the same object is given by:

$$\Delta\sigma = \sqrt{\Delta x^2 - c^2\,\Delta t^2}.$$
So Δσ depends on Δt, whereas (as explained above) the object's rest length L0 can be measured independently of Δt. It follows that Δσ and L0, measured at the endpoints of the same object, only agree with each other when the measurement events were simultaneous in the object's rest frame so that Δt is zero. As explained by Fayngold:
p. 407: "Note that the proper distance between two events is generally not the same as the proper length of an object whose end points happen to be respectively coincident with these events. Consider a solid rod of constant proper length l0. If you are in the rest frame K0 of the rod, and you want to measure its length, you can do it by first marking its endpoints. And it is not necessary that you mark them simultaneously in K0. You can mark one end now (at a moment t1) and the other end later (at a moment t2) in K0, and then quietly measure the distance between the marks. We can even consider such measurement as a possible operational definition of proper length. From the viewpoint of the experimental physics, the requirement that the marks be made simultaneously is redundant for a stationary object with constant shape and size, and can in this case be dropped from such definition. Since the rod is stationary in K0, the distance between the marks is the proper length of the rod regardless of the time lapse between the two markings. On the other hand, it is not the proper distance between the marking events if the marks are not made simultaneously in K0."
Proper distance between two events in flat space
In special relativity, the proper distance between two spacelike-separated events is the distance between the two events, as measured in an inertial frame of reference in which the events are simultaneous. In such a specific frame, the distance is given by

$$\Delta\sigma = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2},$$

where
Δx, Δy, and Δz are differences in the linear, orthogonal, spatial coordinates of the two events.
The definition can be given equivalently with respect to any inertial frame of reference (without requiring the events to be simultaneous in that frame) by

$$\Delta\sigma = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2 - c^2\,\Delta t^2},$$

where
Δt is the difference in the temporal coordinates of the two events, and
c is the speed of light.
The two formulae are equivalent because of the invariance of spacetime intervals, and since Δt = 0 exactly when the events are simultaneous in the given frame.
Two events are spacelike-separated if and only if the above formula gives a real, non-zero value for Δσ.
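A minimal sketch of this classification (the helper name and the sample events are invented for illustration):

```python
import math

C = 2.998e8  # speed of light, m/s

def separation(dt: float, dx: float, dy: float, dz: float) -> str:
    """Classify the interval between two events and report the
    proper distance (spacelike) or proper time (timelike)."""
    interval2 = dx**2 + dy**2 + dz**2 - (C * dt)**2
    if interval2 > 0:
        return f"spacelike, proper distance = {math.sqrt(interval2):.3e} m"
    if interval2 < 0:
        return f"timelike, proper time = {math.sqrt(-interval2) / C:.3e} s"
    return "lightlike (null)"

print(separation(dt=0.0, dx=3.0e5, dy=0.0, dz=0.0))  # simultaneous: spacelike
print(separation(dt=1.0, dx=3.0e5, dy=0.0, dz=0.0))  # long time lapse: timelike
```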
Proper distance along a path
The above formula for the proper distance between two events assumes that the spacetime in which the two events occur is flat. Hence, the above formula cannot in general be used in general relativity, in which curved spacetimes are considered. It is, however, possible to define the proper distance along a path in any spacetime, curved or flat. In a flat spacetime, the proper distance between two events is the proper distance along a straight path between the two events. In a curved spacetime, there may be more than one straight path (geodesic) between two events, so the proper distance along a straight path between two events would not uniquely define the proper distance between the two events.
Along an arbitrary spacelike path P, the proper distance is given in tensor syntax by the line integral

$$L = c \int_P \sqrt{-g_{\mu\nu}\,dx^\mu\,dx^\nu},$$

where
gμν is the metric tensor for the current spacetime and coordinate mapping, and
dxμ is the coordinate separation between neighboring events along the path P.

In the equation above, the metric tensor is assumed to use the +−−− metric signature, and is assumed to be normalized to return a time instead of a distance. The − sign in the equation should be dropped with a metric tensor that instead uses the −+++ metric signature. Also, the $c$ should be dropped with a metric tensor that is normalized to use a distance, or that uses geometrized units.
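As a numerical illustration of such a line integral, the sketch below integrates a purely radial, constant-time path in a Schwarzschild geometry, where the integrand reduces to $dr/\sqrt{1 - r_s/r}$; the example metric, radii, and step count are all assumptions for illustration:

```python
import math

def radial_proper_distance(r1: float, r2: float, r_s: float,
                           n: int = 100_000) -> float:
    """Midpoint-rule integration of ds = dr / sqrt(1 - r_s/r) between
    two radii outside the horizon of a Schwarzschild geometry."""
    h = (r2 - r1) / n
    total = 0.0
    for i in range(n):
        r = r1 + (i + 0.5) * h
        total += h / math.sqrt(1.0 - r_s / r)
    return total

r_s = 1.0
# The proper distance exceeds the coordinate span r2 - r1 = 8.0,
# because the geometry is curved near the horizon.
print(radial_proper_distance(2.0, 10.0, r_s))
```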
See also
Invariant interval
Proper time
Comoving distance
Relativity of simultaneity
References
Theory of relativity

https://en.wikipedia.org/wiki/Cross-linked%20enzyme%20aggregate

In biochemistry, a cross-linked enzyme aggregate is an immobilized enzyme prepared via cross-linking of the physical enzyme aggregates with a difunctional cross-linker. They can be used as stereoselective industrial biocatalysts.
Background
Enzymes are proteins that catalyze (i.e. accelerate) chemical reactions. They are natural catalysts and are ubiquitous in plants, animals and microorganisms, where they catalyze processes that are vital to living organisms. They are intimately involved in numerous biotechnological processes, such as cheese making, beer brewing and winemaking, that date back to the dawn of civilization. Recent advances in biotechnology, particularly in genetic and protein engineering, have provided the basis for the efficient development of enzymes with improved properties for established applications and novel, tailor-made enzymes for completely new applications where enzymes were not previously used.
Today, enzymes are widely applied in many different industries and the number of applications continues to increase. Examples include food (baking, dairy products, starch conversion) and beverage (beer, wine, fruit and vegetable juices) processing, animal feed, textiles, pulp and paper, detergents, biosensors, cosmetics, health care and nutrition, waste water treatment, pharmaceutical and chemical manufacture and, more recently, biofuels such as biodiesel. The main driver for the widespread application of enzymes is their small environmental footprint.
Many traditional chemical conversions used in various industries suffer from inherent drawbacks from both an economic and environmental viewpoint. Non-specific reactions can afford low product yields, copious amounts of waste and impure products. The need for elevated temperatures and pressures leads to high energy consumption and high capital investment costs. Disposal of unwanted by-products may be difficult and/or expensive and hazardous solvents may be required. In stark contrast, enzymatic reactions are performed under mild conditions of temperature and pressure, in water as solvent, and exhibit very high rates and are often highly specific. Moreover, they are produced from renewable raw materials and are biodegradable. In addition, the mild operating conditions of enzymatic processes mean that they can be performed in relatively simple equipment and are easy to control. In short, they reduce the environmental footprint of manufacturing by reducing the consumption of energy and chemicals and concomitant generation of waste.
In the production of fine chemicals, flavors and fragrances, agrochemicals and pharmaceuticals an important benefit of enzymes is the high degree of chemoselectivity, regioselectivity and enantioselectivity which they exhibit. Particularly, their ability to catalyze the formation of products in high enantiopurity, by an exquisite stereochemical control, is of the utmost importance in these industries.
Notwithstanding all these desirable characteristic features of enzymes, their widespread industrial application is often hampered by their lack of long term operational stability and shelf-storage life, as well as by their cumbersome recovery and re-use. These drawbacks can be generally overcome by enzyme immobilization. A major present challenge in industrial biocatalysis is the development of stable, robust and preferably insoluble biocatalysts.
Immobilization
See Immobilized enzyme for more information.
There are several reasons for immobilizing an enzyme. In addition to more convenient handling of the enzyme, it provides for its facile separation from the product, thereby minimizing or eliminating protein contamination of the product. Immobilization also facilitates the efficient recovery and re-use of costly enzymes, in many applications a conditio sine qua non for economic viability, and enables their use in continuous, fixed-bed operation. A further benefit is often enhanced stability, under both storage and operational conditions, e.g. towards denaturation by heat or organic solvents or by autolysis. Enzymes are rather delicate molecules that can easily lose their unique three-dimensional structure, essential for their activity, by denaturation (unfolding). Improved enzyme performance via enhanced stability, over a broad pH and temperature range as well as tolerance towards organic solvents, coupled with repeated re-use is reflected in higher catalyst productivities (kg product/kg enzyme) which, in turn, determine the enzyme costs per kg product.
Basically, three traditional methods of enzyme immobilization can be distinguished: binding to a support (carrier), entrapment (encapsulation) and cross-linking. Support binding can be physical, ionic, or covalent in nature. However, physical bonding is generally too weak to keep the enzyme fixed to the carrier under industrial conditions of high reactant and product concentrations and high ionic strength. The support can be a synthetic resin, a biopolymer or an inorganic polymer such as (mesoporous) silica or a zeolite. Entrapment involves inclusion of an enzyme in a polymer network (gel lattice) such as an organic polymer or a silica sol-gel, or a membrane device such as a hollow fiber or a microcapsule. Entrapment requires the synthesis of the polymeric network in the presence of the enzyme. The third category involves cross-linking of enzyme aggregates or crystals, using a bifunctional reagent, to prepare carrier-free macroparticles.
The use of a carrier inevitably leads to ‘dilution of activity’, owing to the introduction of a large portion of non-catalytic ballast, ranging from 90% to >99%, which results in lower space-time yields and productivities. Moreover, immobilization of an enzyme on a carrier often leads to a substantial loss of activity, especially at high enzyme loadings. Consequently, there is an increasing interest in carrier-free immobilized enzymes, such as cross-linked enzyme crystals (CLECs) and cross-linked enzyme aggregates (CLEAs) that offer the advantages of highly concentrated enzyme activity combined with high stability and low production costs owing to the exclusion of an additional (expensive) carrier.
Cross-Linked Enzyme Aggregates (CLEAs)
The use of cross-linked enzyme crystals (CLECs) as industrial biocatalysts was pioneered by Altus Biologics in the 1990s. CLECs proved to be significantly more stable to denaturation by heat, organic solvents and proteolysis than the corresponding soluble enzyme or lyophilized (freeze-dried) powder. CLECs are robust, highly active immobilized enzymes of controllable particle size, varying from 1 to 100 micrometer. Their operational stability and ease of recycling, coupled with their high catalyst and volumetric productivities, renders them ideally suited for industrial biotransformations.
However, CLECs have an inherent disadvantage: enzyme crystallization is a laborious procedure requiring enzyme of high purity, which translates to prohibitively high costs. The more recently developed cross-linked enzyme aggregates (CLEAs), on the other hand, are produced by simple precipitation of the enzyme from aqueous solution, as physical aggregates of protein molecules, by the addition of salts, or water miscible organic solvents or non-ionic polymers. The physical aggregates are held together by non-covalent bonding without perturbation of their tertiary structure, that is, without denaturation. Subsequent cross-linking of these physical aggregates renders them permanently insoluble while maintaining their pre-organized superstructure, and, hence, their catalytic activity. This discovery led to the development of a new family of immobilized enzymes: cross-linked enzyme aggregates (CLEAs). Since precipitation from an aqueous medium, by addition of ammonium sulfate or polyethylene glycol, is often used to purify enzymes, the CLEA methodology essentially combines purification and immobilization into a single unit operation that does not require a highly pure enzyme. It could be used, for example, for the direct isolation of an enzyme, in a purified and immobilized form suitable for performing biotransformations, from a crude fermentation broth.
CLEAs are very attractive biocatalysts, owing to their facile, inexpensive and effective production method. They can readily be reused and exhibit improved stability and performance. The methodology is applicable to essentially any enzyme, including cofactor dependent oxidoreductases. Application to penicillin acylase used in antibiotic synthesis showed large improvements over other type of biocatalysts.
The potential applications of CLEAs are numerous and include:
Synthesis of pharmaceuticals, flavors and fragrances, agrochemicals, nutraceuticals, fine chemicals, bulk monomers and biofuels.
Animal feed, e.g. phytase for utilization of organically bound phosphate by pigs and poultry.
Food and beverage processing, e.g. lipases in cheese manufacture and laccase in wine clarification.
Cosmetics, e.g. in skin care products
Oils and fats processing, e.g. in biolubricants, bioemulsifiers, bio-derived emollients.
Carbohydrate processing, e.g. laccase in carbohydrate oxidations.
Pulp and paper, e.g. in pulp bleaching.
Detergents, e.g. proteases, amylases and lipases for removal of protein, carbohydrate and fat stains.
Waste water treatment, e.g. for removal of phenols, dyes, and endocrine disrupters.
Biosensors/diagnostics, e.g. glucose oxidase and cholesterol oxidase biosensors.
Delivery of proteins as therapeutic agents or nutritional/digestive supplements e.g. beta-galactosidase for digestive hydrolysis of lactose in dairy products to alleviate the symptoms of lactose intolerance.
References
External links
Description of Methodology
Catalysis
Enzymes
Stereochemistry

https://en.wikipedia.org/wiki/Myeloma%20protein

A myeloma protein is an abnormal antibody (immunoglobulin) or (more often) a fragment thereof, such as an immunoglobulin light chain, that is produced in excess by an abnormal monoclonal proliferation of plasma cells, typically in multiple myeloma or monoclonal gammopathy of undetermined significance. Other terms for such a protein are monoclonal protein, M protein, M component, M spike, spike protein, or paraprotein. This proliferation of the myeloma protein has several deleterious effects on the body, including impaired immune function, abnormally high blood viscosity ("thickness" of the blood), and kidney damage.
History
In 1940, Kurt Apitz, senior pathologist of the Charité (Berlin University Medicine) hospital, introduced the concept and the word paraprotein.
Cause
Myeloma is a malignancy of plasma cells. Plasma cells produce immunoglobulins, which are commonly called antibodies. There are thousands of different antibodies, each consisting of pairs of heavy and light chains. Antibodies are typically grouped into five classes: IgA, IgD, IgE, IgG, and IgM. When someone has myeloma, a malignant clone, a rogue plasma cell, reproduces in an uncontrolled fashion, resulting in overproduction of the specific antibody the original cell was generated to produce. Each type of antibody has a different number of light chain and heavy chain pairs. As a result, there is a characteristic normal distribution of these antibodies in the blood by molecular weight.
When there is a malignant clone, there is usually overproduction of a single antibody, resulting in a "spike" on the normal distribution (sharp peak on the graph), which is called an M spike (or monoclonal spike). People will sometimes develop a condition called MGUS (monoclonal gammopathy of undetermined significance), where there is overproduction of one antibody but the condition is benign (non-cancerous). An explanation of the difference between multiple myeloma and MGUS can be found in the International Myeloma Foundation's Patient Handbook and Concise Review.
Detection of paraproteins in the urine or blood is most often associated with MGUS, where they remain "silent", and multiple myeloma. An excess in the blood is known as paraproteinemia. Paraproteins form a narrow band, or 'spike' in protein electrophoresis as they are all exactly the same protein. Unlike normal immunoglobulin antibodies, paraproteins cannot fight infection.
Serum free light-chain measurement can detect free light chains in the blood. Monoclonal free light chains in the serum or urine are called Bence Jones proteins.
Interpretation upon detection
Blood serum paraprotein levels of more than 30 g/L are diagnostic of smouldering myeloma, an intermediate in a spectrum of step-wise progressive diseases termed plasma cell dyscrasias. An elevated paraprotein level (above 30 g/L) in conjunction with end organ damage (elevated calcium, kidney failure, anemia, or bone lesions) or other biomarkers of malignancy is diagnostic of multiple myeloma, according to the diagnostic criteria of the International Myeloma Working Group, which were updated in 2014. Detection of paraprotein in serum of less than 30 g/L is classified as monoclonal gammopathy of undetermined significance in cases where clonal plasma cells constitute less than 10% on bone marrow biopsy and there is no myeloma-related organ or tissue impairment.
See also
Tuftsin
Monoclonal gammopathy of undetermined significance
Smouldering myeloma
Multiple myeloma
Plasma cell dyscrasia
References
External links
Educational Resource for Paraproteins
Paraprotein typing
Hematology
Immune system
Oncology
Proteins

https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor%20receptor%204

Fibroblast growth factor receptor 4 (FGFR-4) is a protein that in humans is encoded by the FGFR4 gene. FGFR4 has also been designated as CD334 (cluster of differentiation 334).
The protein encoded by this gene is a member of the fibroblast growth factor receptor family, in which the amino acid sequence is highly conserved between members and throughout evolution. FGFR family members differ from one another in their ligand affinities and tissue distribution. A full-length representative protein would consist of an extracellular region, composed of three immunoglobulin-like domains, a single hydrophobic membrane-spanning segment and a cytoplasmic tyrosine kinase domain. The extracellular portion of the protein interacts with fibroblast growth factors, setting in motion a cascade of downstream signals, ultimately influencing mitogenesis and differentiation. The genomic organization of this gene, compared to members 1-3, encompasses 18 exons rather than 19 or 20. Although alternative splicing has been observed, there is no evidence that the C-terminal half of the IgIII domain of this protein varies between three alternate forms, as indicated for members 1-3. This particular family member preferentially binds acidic fibroblast growth factor and, although its specific function is unknown, it is overexpressed in gynecological tumor samples, suggesting a role in breast and ovarian tumorigenesis. In a meta-analysis study, the functional polymorphism Gly388Arg (rs351855) of FGFR4 was observed to be significantly associated with nodal involvement and overall survival in patients with different types of cancer.
Interactions
Fibroblast growth factor receptor 4 has been shown to interact with FGF1.
References
Further reading
External links
Clusters of differentiation
Tyrosine kinase receptors

https://en.wikipedia.org/wiki/Bradykinin%20receptor%20B2
Bradykinin receptor B2 is a G-protein coupled receptor for bradykinin, encoded by the BDKRB2 gene in humans.
Mechanism
The B2 receptor (B2R) is a G protein-coupled receptor, probably coupled to Gq and Gi. A 2022 Nature cryo-EM study of human B2R-Gq complexes by Jinkeng Sheng et al. investigated the proximal activation mechanisms of B2R. Sheng et al. propose that upon B2R binding bradykinin or kallidin to a "bulky orthosteric binding pocket," the phenylalanine F8 or F9 residue of bradykinin or kallidin respectively interacts with a "conserved toggle switch" W283. This hydrophobic interaction facilitates the outward movement of transmembrane domain 6 (TM6) of B2R on the cytoplasmic side of the membrane, as well as outward movement of F279, a key residue within the conserved PIF motif of GPCRs (involving proline, isoleucine and phenylalanine). This rearrangement of the PIF motif disrupts the ionic lock formed by the DRY motif and pushes the NPxxY motif towards the activated state, opening an "intracellular cleft" for insertion of the α5-helix of Gq.
Gq stimulates phospholipase C to increase intracellular free calcium and Gi inhibits adenylate cyclase. Furthermore, the receptor stimulates the mitogen-activated protein kinase pathways. It is ubiquitously and constitutively expressed in healthy tissues.
The B2 receptor forms a complex with angiotensin converting enzyme (ACE), and this is thought to play a role in cross-talk between the renin-angiotensin system (RAS) and the kinin–kallikrein system (KKS). The heptapeptide angiotensin (1-7) also potentiates bradykinin action on B2 receptors.
Kallidin also signals through the B2 receptor. An antagonist for the receptor is Hoe 140 (icatibant).
Function
The 9-amino-acid bradykinin peptide elicits several responses, including vasodilation, edema, smooth muscle spasm and nociceptor stimulation.
Gene
Alternate start codons result in two isoforms of the protein.
See also
Bradykinin receptor
References
Further reading
External links
G protein-coupled receptors

https://en.wikipedia.org/wiki/Advanced%20oxidation%20process

Advanced oxidation processes (AOPs), in a broad sense, are a set of chemical treatment procedures designed to remove organic (and sometimes inorganic) materials in water and wastewater by oxidation through reactions with hydroxyl radicals (·OH). In real-world applications of wastewater treatment, however, this term usually refers more specifically to a subset of such chemical processes that employ ozone (O3), hydrogen peroxide (H2O2) and UV light, or a combination of these processes.
Description
AOPs rely on in-situ production of highly reactive hydroxyl radicals (·OH) or other oxidative species for oxidation of contaminants. These reactive species can be applied in water and can oxidize virtually any compound present in the water matrix, often at a diffusion-controlled reaction speed. Consequently, ·OH reacts unselectively once formed, and contaminants are quickly and efficiently fragmented and converted into small inorganic molecules. Hydroxyl radicals are produced with the help of one or more primary oxidants (e.g. ozone, hydrogen peroxide, oxygen) and/or energy sources (e.g. ultraviolet light) or catalysts (e.g. titanium dioxide). Precise, pre-programmed dosages, sequences and combinations of these reagents are applied in order to obtain a maximum ·OH yield. In general, when applied in properly tuned conditions, AOPs can reduce the concentration of contaminants from several hundred ppm to less than 5 ppb and therefore significantly bring COD and TOC down, which has earned them a reputation as the "water treatment processes of the 21st century".
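As a rough, illustrative calculation of how such treatment targets follow from simple kinetics: if the contaminant reacts with ·OH at a near-diffusion-limited second-order rate while the radical is held at a quasi-steady concentration, the decay is pseudo-first-order. All numbers below are assumed, order-of-magnitude values chosen for illustration, not data from the source:

```python
import math

k_oh = 5.0e9          # M^-1 s^-1, assumed second-order rate constant
oh_ss = 1.0e-12       # M, assumed quasi-steady [.OH]
k_obs = k_oh * oh_ss  # effective first-order rate constant, s^-1

c0_ppm = 200.0        # assumed initial contaminant level
target_ppb = 5.0      # treatment goal mentioned above

# C(t) = C0 * exp(-k_obs * t)  ->  t = ln(C0 / C) / k_obs
t_needed = math.log(c0_ppm * 1000.0 / target_ppb) / k_obs
print(f"k_obs = {k_obs:.1e} 1/s, contact time ~ {t_needed / 60:.1f} min")
```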
The AOP procedure is particularly useful for cleaning biologically toxic or non-degradable materials such as aromatics, pesticides, petroleum constituents, and volatile organic compounds in wastewater. Additionally, AOPs can be used to treat the effluent of secondary treated wastewater, a step which is then called tertiary treatment. The contaminant materials are largely converted into stable inorganic compounds such as water, carbon dioxide and salts, i.e. they undergo mineralization. A goal of wastewater purification by means of AOP procedures is to reduce the chemical contaminants and the toxicity to such an extent that the cleaned wastewater may be reintroduced into receiving streams or, at least, into a conventional sewage treatment.
Although oxidation processes involving ·OH have been in use since the late 19th century (such as Fenton's reagent, which was used as an analytical reagent at that time), the utilization of such oxidative species in water treatment did not receive adequate attention until Glaze et al. suggested the possible generation of ·OH "in sufficient quantity to affect water purification" and defined the term "Advanced Oxidation Processes" for the first time in 1987. AOPs still have not been put into commercial use on a large scale (especially in developing countries), mostly because of the relatively high associated costs. Nevertheless, their high oxidative capability and efficiency make AOPs a popular technique in tertiary treatment, in which the most recalcitrant organic and inorganic contaminants are to be eliminated. The increasing interest in water reuse and more stringent regulations regarding water pollution are currently accelerating the implementation of AOPs at full scale.
There are roughly 500 commercialized AOP installations around the world at present, mostly in Europe and the United States. Other countries, like China, are showing increasing interest in AOPs.
The reaction, using H2O2 for the formation of ·OH, is carried out in an acidic medium (pH 2.5–4.5) and at low temperature (30–50 °C), in a safe and efficient way, using optimized catalyst and hydrogen peroxide formulations.
Chemical principles
Generally speaking, chemistry in AOPs could be essentially divided into three parts:
Formation of ·OH;
Initial attacks on target molecules by ·OH and their breakdown to fragments;
Subsequent attacks by ·OH until ultimate mineralization.
The mechanism of ·OH production (Part 1) highly depends on the sort of AOP technique that is used. For example, ozonation, UV/H2O2, photocatalytic oxidation and Fenton's oxidation rely on different mechanisms of ·OH generation:
UV/H2O2:
H2O2 + UV → 2·OH (homolytic bond cleavage of the O-O bond of H2O2 leads to formation of 2·OH radicals)
UV/HOCl:
HOCl + UV → ·OH + Cl·
Ozone based AOP:
O3 + HO− → HO2− + O2 (reaction between O3 and a hydroxyl ion leads to the formation of H2O2 (in charged form))
O3 + HO2− → HO2· + O3−· (a second O3 molecule reacts with the HO2− to produce the ozonide radical)
O3−· + H+ → HO3· (protonation of the ozonide radical gives HO3·, which goes on to yield ·OH)
HO3· → ·OH + O2
the reaction steps presented here are just a part of the reaction sequence, see reference for more details
Fenton based AOP:
Fe2+ + H2O2 → Fe3+ + HO· + OH− (initiation of Fenton's reagent)
Fe3+ + H2O2 → Fe2+ + HOO· + H+ (regeneration of the Fe2+ catalyst)
H2O2 + HO· → HOO· + H2O (self-scavenging and decomposition of H2O2)
the reaction steps presented here are just a part of the reaction sequence, see reference for more details
Photocatalytic oxidation with TiO2:
TiO2 + UV → e− + h+ (irradiation of the photocatalytic surface leads to an excited electron (e−) and electron gap (h+))
Ti(IV) + H2O ⇌ Ti(IV)-H2O (water adsorbs onto the catalyst surface)
Ti(IV)-H2O + h+ → Ti(IV)-·OH + H+ (the highly reactive electron gap reacts with the adsorbed water)
the reaction steps presented here are just a part of the reaction sequence, see reference for more details
Currently there is no consensus on the detailed mechanisms in Part 3, but researchers have cast light on the processes of initial attack in Part 2. In essence, ·OH is a radical species and should behave like a highly reactive electrophile. Thus the two expected types of initial attack are hydrogen abstraction and addition. The following scheme, adapted from a technical handbook and later refined, describes a possible mechanism of the oxidation of benzene by ·OH.
Scheme 1. Proposed mechanism of the oxidation of benzene by hydroxyl radicals
The first and second steps are electrophilic additions that break the aromatic ring in benzene (A) and form two hydroxyl groups (–OH) in intermediate C. Later, an ·OH abstracts a hydrogen atom from one of the hydroxyl groups, producing a radical species (D) that is prone to undergo rearrangement to form a more stable radical (E). E, on the other hand, is readily attacked by ·OH and eventually forms 2,4-hexadiene-1,6-dione (F).
As long as there are sufficient ·OH radicals, subsequent attacks on compound F will continue until the fragments are all converted into small and stable molecules like H2O and CO2 in the end, but such processes may still be subject to a myriad of possible and partially unknown mechanisms.
Advantages
AOPs hold several advantages in the field of water treatment:
They can effectively eliminate organic compounds in aqueous phase, rather than collecting or transferring pollutants into another phase.
Due to the reactivity of ·OH, it reacts with many aqueous pollutants without discriminating. AOPs are therefore applicable in many, if not all, scenarios where many organic contaminants must be removed at the same time.
Some heavy metals can also be removed in the form of precipitated M(OH)x.
In some AOP designs, disinfection can also be achieved, which makes these AOPs an integrated solution to some water quality problems.
Since the complete reduction product of ·OH is H2O, AOPs theoretically do not introduce any new hazardous substances into the water.
Current shortcomings
AOPs are not perfect and have several drawbacks.
Most prominently, the cost of AOPs is fairly high, since a continuous input of expensive chemical reagents is required to maintain the operation of most AOP systems. As a result of their very nature, AOPs require hydroxyl radicals and other reagents proportional to the quantity of contaminants to be removed.
Some techniques require pre-treatment of wastewater to ensure reliable performance, which could be potentially costly and technically demanding. For instance, the presence of bicarbonate ions (HCO3−) can appreciably reduce the concentration of ·OH owing to scavenging processes that yield H2O and a much less reactive species, ·CO3−. As a result, bicarbonate must be removed from the system, or the AOP is compromised.
It is not cost-effective to use AOPs alone to handle a large amount of wastewater; instead, AOPs should be deployed in the final stage, after primary and secondary treatment have successfully removed a large proportion of contaminants. Ongoing research is also being done to combine AOPs with biological treatment to bring the cost down.
Future
Since AOPs were first defined in 1987, the field has witnessed rapid development both in theory and in application. So far, TiO2/UV systems, H2O2/UV systems, and Fenton, photo-Fenton and electro-Fenton systems have received extensive scrutiny. However, many research needs remain for these existing AOPs.
Recent trends are the development of new, modified AOPs that are efficient and economical. In fact, there have been some studies that offer constructive solutions. For instance, doping TiO2 with non-metallic elements could possibly enhance the photocatalytic activity, and implementation of ultrasonic treatment could promote the production of hydroxyl radicals. Modified AOPs such as fluidized-bed Fenton have also shown great potential in terms of degradation performance and economics.
See also
List of waste-water treatment technologies
Fenton reaction
Electro-oxidation
In situ chemical oxidation
Process engineering
Water purification
References
Further reading
Michael O. D. Roth: Chemical Oxidation: Technology for the Nineties, Vol. 6. In: W. Wesley Eckenfelder and John A. Roth (eds.), Technomic Publishing Co., Lancaster, 1997.
Water treatment
Environmental engineering
Environmental chemistry
Green chemistry

https://en.wikipedia.org/wiki/Liver%20X%20receptor%20alpha

Liver X receptor alpha (LXR-alpha) is a nuclear receptor protein that in humans is encoded by the NR1H3 gene (nuclear receptor subfamily 1, group H, member 3).
Expression
miRNA hsa-miR-613 autoregulates the human LXRα gene by targeting the endogenous LXRα through its specific miRNA response element (613MRE) within the LXRα 3′-untranslated region. LXRα autoregulates its own suppression via induction of SREBP1c, which upregulates miRNA hsa-miR-613.
Function
The liver X receptors, LXRα (this protein) and LXRβ, form a subfamily of the nuclear receptor superfamily and are key regulators of macrophage function, controlling transcriptional programs involved in lipid homeostasis and inflammation. Additionally, they play an important role in the local activation of thyroid hormones via deiodinases. The inducible LXRα is highly expressed in liver, adrenal gland, intestine, adipose tissue, macrophages, lung, and kidney, whereas LXRβ is ubiquitously expressed. Ligand-activated LXRs form obligate heterodimers with retinoid X receptors (RXRs) and regulate expression of target genes containing LXR response elements. Restoration of LXR-alpha expression/function within a psoriatic lesion may help to switch the transition from psoriatic to symptomless skin.
Interactions
Liver X receptor alpha has been shown to interact with EDF1 and small heterodimer partner. LXRα activates the transcription factor SREBP-1c, resulting in lipogenesis.
Link to multiple sclerosis
In 2016, a study found that 70% of individuals in two families with a rare form of rapidly progressing multiple sclerosis had a mutation in NR1H3. However, an analysis from the International Multiple Sclerosis Genetics Consortium using a 13-fold larger sample size could not find any evidence that the mutation in question (p.Arg415Gln) is associated with multiple sclerosis, refuting these findings.
References
Further reading
External links
Intracellular receptors
Transcription factors

https://en.wikipedia.org/wiki/Photo-reactive%20amino%20acid%20analog

Photo-reactive amino acid analogs are artificial analogs of natural amino acids that can be used for crosslinking of protein complexes. Photo-reactive amino acid analogs may be incorporated into proteins and peptides in vivo or in vitro. Photo-reactive amino acid analogs in common use are photoreactive diazirine analogs to leucine and methionine, and para-benzoylphenylalanine. Upon exposure to ultraviolet light, they are activated and covalently bind to interacting proteins that are within a few angstroms of the photo-reactive amino acid analog.
L-Photo-leucine and L-photo-methionine are analogs of the naturally occurring L-leucine and L-methionine amino acids that are endogenously incorporated into the primary sequence of proteins during synthesis using the normal translation machinery. They are then ultraviolet light (UV)-activated to covalently crosslink proteins within protein–protein interaction domains in their native in-vivo environment. The method enables the determination and characterization of both stable and transient protein interactions in cells without the addition of chemical crosslinkers and associated solvents that can adversely affect the cell biology being studied in the experiment.
When used in combination with limiting media that is devoid of leucine and methionine, the photo-activatable derivatives are treated like naturally occurring amino acids by the cellular protein synthesis machinery. As a result, they can be substituted for leucine or methionine in the primary structure of proteins. Photo-leucine and photo-methionine derivatives contain diazirine rings that are activated when exposed to UV light to become reactive intermediates that form covalent bonds with nearby protein side chains and backbones. Naturally interacting proteins within the cell can be instantly trapped by photoactivation of the diazirine-containing proteins in the cultured cells. Crosslinked protein complexes can be detected by decreased mobility on SDS-PAGE followed by Western blotting, size exclusion chromatography, sucrose density gradient sedimentation or mass spectrometry.
References
Biochemistry methods
Biotechnology
Proteomics
Protein–protein interaction assays

https://en.wikipedia.org/wiki/Steel

Steel is an alloy of iron and carbon with improved strength and fracture resistance compared to other forms of iron. Because of its high tensile strength and low cost, steel is one of the most commonly manufactured materials in the world. Steel is used in buildings, as concrete reinforcing rods, in bridges, infrastructure, tools, ships, trains, cars, bicycles, machines, electrical appliances, furniture, and weapons.
Iron is always the main element in steel, but many other elements may be present or added. Stainless steels, which are resistant to corrosion and oxidation, typically contain at least 11% chromium.
Iron is the base metal of steel. Depending on the temperature, it can take two crystalline forms (allotropic forms): body-centred cubic and face-centred cubic. The interaction of the allotropes of iron with the alloying elements, primarily carbon, gives steel and cast iron their range of unique properties. In pure iron, the crystal structure has relatively little resistance to the iron atoms slipping past one another, and so pure iron is quite ductile, or soft and easily formed. In steel, small amounts of carbon, other elements, and inclusions within the iron act as hardening agents that prevent the movement of dislocations.
The carbon in typical steel alloys may contribute up to 2.14% of its weight. Varying the amount of carbon and many other alloying elements, as well as controlling their chemical and physical makeup in the final steel (either as solute elements, or as precipitated phases), impedes the movement of the dislocations that make pure iron ductile, and thus controls and enhances its qualities. These qualities include the hardness, quenching behaviour, need for annealing, tempering behaviour, yield strength, and tensile strength of the resulting steel. The increase in steel's strength compared to pure iron is possible only by reducing iron's ductility.
Steel was produced in bloomery furnaces for thousands of years, but its large-scale, industrial use began only after more efficient production methods were devised in the 17th century, with the introduction of the blast furnace and production of crucible steel. This was followed by the Bessemer process in England in the mid-19th century, and then by the open-hearth furnace. With the invention of the Bessemer process, a new era of mass-produced steel began. Mild steel replaced wrought iron. The German states were the major steel producers in Europe in the 19th century. American steel production was centered in Pittsburgh, Bethlehem, Pennsylvania, and Cleveland until the late 20th century. Currently, world steel production is centered in China, which produced 54% of the world's steel in 2023.
Further refinements in the process, such as basic oxygen steelmaking (BOS), largely replaced earlier methods by further lowering the cost of production and increasing the quality of the final product. Today more than 1.6 billion tons of steel is produced annually. Modern steel is generally identified by various grades defined by assorted standards organizations. The modern steel industry is one of the largest manufacturing industries in the world, but also one of the most energy and greenhouse gas emission intense industries, contributing 8% of global emissions. However, steel is also very reusable: it is one of the world's most-recycled materials, with a recycling rate of over 60% globally.
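A rough sense of scale (the global emissions total used here is an assumption for illustration, not a figure from this article): taking annual global greenhouse gas emissions as roughly 50 Gt CO2e, an 8% share implies about 4 Gt CO2e from steelmaking, and spreading that over roughly 1.8 Gt of annual steel output gives an intensity on the order of two tonnes of CO2e per tonne of steel:

$$ \frac{0.08 \times 50\ \text{Gt CO}_2\text{e}}{1.8\ \text{Gt steel}} \approx 2.2\ \text{t CO}_2\text{e per t steel}. $$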
Definitions and related materials
The noun steel originates from a Proto-Germanic adjective meaning 'made of steel', which is related to a word meaning 'standing firm'.
The carbon content of steel is between 0.02% and 2.14% by weight for plain carbon steel (iron-carbon alloys). Too little carbon content leaves (pure) iron quite soft, ductile, and weak. Carbon contents higher than those of steel make a brittle alloy commonly called pig iron. Alloy steel is steel to which other alloying elements have been intentionally added to modify its characteristics. Common alloying elements include: manganese, nickel, chromium, molybdenum, boron, titanium, vanadium, tungsten, cobalt, and niobium. Additional elements, most frequently considered undesirable, are also important in steel: phosphorus, sulphur, silicon, and traces of oxygen, nitrogen, and copper.
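These boundaries can be made concrete with a minimal sketch, assuming the approximate thresholds quoted in this section (the function name and labels below are invented for illustration, and real classification also depends on other alloying elements):

```python
# Minimal sketch: classify an iron-carbon alloy by carbon content alone,
# using the approximate boundaries quoted above (0.02% and 2.14% by weight).
def classify_iron_carbon_alloy(carbon_wt_pct: float) -> str:
    if carbon_wt_pct < 0.02:
        return "(pure) iron: soft, ductile, and weak"
    elif carbon_wt_pct <= 2.14:
        return "plain carbon steel"
    else:
        return "cast iron / pig iron range: brittle but castable"

for c in (0.005, 0.3, 1.0, 3.5):
    print(f"{c:>5.3f} wt% C -> {classify_iron_carbon_alloy(c)}")
```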
Plain carbon-iron alloys with a higher than 2.1% carbon content are known as cast iron. With modern steelmaking techniques such as powder metal forming, it is possible to make very high-carbon (and other alloy material) steels, but such are not common. Cast iron is not malleable even when hot, but it can be formed by casting as it has a lower melting point than steel and good castability properties. Certain compositions of cast iron, while retaining the economies of melting and casting, can be heat treated after casting to make malleable iron or ductile iron objects. Steel is distinguishable from wrought iron (now largely obsolete), which may contain a small amount of carbon but large amounts of slag.
Material properties
Origins and production
Iron is commonly found in the Earth's crust in the form of an ore, usually an iron oxide, such as magnetite or hematite. Iron is extracted from iron ore by removing the oxygen through its combination with a preferred chemical partner such as carbon which is then lost to the atmosphere as carbon dioxide. This process, known as smelting, was first applied to metals with lower melting points, such as tin, which melts at about 250 °C, and copper, which melts at about 1,100 °C, and the combination, bronze, which has a melting point lower than 1,083 °C. In comparison, cast iron melts at about 1,375 °C. Small quantities of iron were smelted in ancient times, in the solid-state, by heating the ore in a charcoal fire and then welding the clumps together with a hammer and in the process squeezing out the impurities. With care, the carbon content could be controlled by moving it around in the fire. Unlike copper and tin, liquid or solid iron dissolves carbon quite readily.
All of these temperatures could be reached with ancient methods used since the Bronze Age. Since the oxidation rate of iron increases rapidly beyond about 800 °C, it is important that smelting take place in a low-oxygen environment. Smelting, using carbon to reduce iron oxides, results in an alloy (pig iron) that retains too much carbon to be called steel. The excess carbon and other impurities are removed in a subsequent step.
Other materials are often added to the iron/carbon mixture to produce steel with the desired properties. Nickel and manganese in steel add to its tensile strength and make the austenite form of the iron-carbon solution more stable, chromium increases hardness and melting temperature, and vanadium also increases hardness while making it less prone to metal fatigue.
To inhibit corrosion, at least 11% chromium can be added to steel so that a hard oxide forms on the metal surface; this is known as stainless steel. Tungsten slows the formation of cementite, keeping carbon in the iron matrix and allowing martensite to preferentially form at slower quench rates, resulting in high-speed steel. The addition of lead and sulphur decrease grain size, thereby making the steel easier to turn, but also more brittle and prone to corrosion. Such alloys are nevertheless frequently used for components such as nuts, bolts, and washers in applications where toughness and corrosion resistance are not paramount. For the most part, however, p-block elements such as sulphur, nitrogen, phosphorus, and lead are considered contaminants that make steel more brittle and are therefore removed from steel during the melting processing.
Properties
The density of steel varies based on the alloying constituents but usually ranges between 7,750 and 8,050 kg/m³, or 7.75 and 8.05 g/cm³.
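As a worked example using a mid-range value of 7,850 kg/m³ (an assumed, typical figure), a 2 m × 1 m plate of 10 mm thickness has a mass of

$$ m = \rho V = 7850\ \tfrac{\text{kg}}{\text{m}^3} \times (2 \times 1 \times 0.01)\ \text{m}^3 = 157\ \text{kg}. $$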
Even in the narrow range of concentrations of carbon and iron that make steel, several different metallurgical structures, with very different properties, can form. Understanding such properties is essential to making quality steel. At room temperature, the most stable form of pure iron is the body-centred cubic (BCC) structure called alpha iron or α-iron. It is a fairly soft metal that can dissolve only a small concentration of carbon, no more than 0.005% at 0 °C and 0.021 wt% at 723 °C. The inclusion of carbon in alpha iron is called ferrite. At 910 °C, pure iron transforms into a face-centred cubic (FCC) structure, called gamma iron or γ-iron. The inclusion of carbon in gamma iron is called austenite. The more open FCC structure of austenite can dissolve considerably more carbon, as much as 2.1% (38 times that of ferrite) at 1,148 °C, which reflects the upper carbon content of steel, beyond which is cast iron. When carbon moves out of solution with iron, it forms a very hard, but brittle material called cementite (Fe3C).
When steels with exactly 0.8% carbon (known as eutectoid steel) are cooled, the austenitic phase (FCC) of the mixture attempts to revert to the ferrite phase (BCC). The carbon no longer fits within the FCC austenite structure, resulting in an excess of carbon. One way for carbon to leave the austenite is for it to precipitate out of solution as cementite, leaving behind a surrounding phase of BCC iron called ferrite with a small percentage of carbon in solution. The two, cementite and ferrite, precipitate simultaneously, producing a layered structure called pearlite, named for its resemblance to mother of pearl. In a hypereutectoid composition (greater than 0.8% carbon), the carbon will first precipitate out as large inclusions of cementite at the austenite grain boundaries until the percentage of carbon in the grains has decreased to the eutectoid composition (0.8% carbon), at which point the pearlite structure forms. For steels that have less than 0.8% carbon (hypoeutectoid), ferrite will first form within the grains until the remaining composition rises to 0.8% carbon, at which point the pearlite structure will form. No large inclusions of cementite will form at the boundaries in hypoeutectoid steel. The above assumes that the cooling process is very slow, allowing enough time for the carbon to migrate.
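The phase fractions implied by this slow-cooling picture can be estimated with the lever rule, a standard phase-diagram relation not stated explicitly in this article. For a hypothetical hypoeutectoid steel of 0.40 wt% carbon, using the eutectoid composition of 0.8% and a ferrite carbon content of roughly 0.02% quoted above:

$$ f_{\text{pearlite}} = \frac{C_0 - C_\alpha}{C_{\text{eutectoid}} - C_\alpha} = \frac{0.40 - 0.02}{0.80 - 0.02} \approx 0.49, $$

so such a steel transforms on slow cooling into roughly half pearlite and half proeutectoid ferrite.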
As the rate of cooling is increased, the carbon has less time to migrate to form carbide at the grain boundaries; instead, increasingly fine pearlite forms within the grains, so the carbide is more widely dispersed and acts to prevent slip of defects within those grains, hardening the steel. At the very high cooling rates produced by quenching, the carbon has no time to migrate but is locked within the face-centred austenite and forms martensite. Martensite is a highly strained and stressed, supersaturated form of carbon and iron and is exceedingly hard but brittle. Depending on the carbon content, the martensitic phase takes different forms. Below 0.2% carbon, it takes on a ferrite BCC crystal form, but at higher carbon content it takes a body-centred tetragonal (BCT) structure. There is no thermal activation energy for the transformation from austenite to martensite. There is no compositional change, so the atoms generally retain their same neighbours.
Martensite has a lower density (it expands during the cooling) than does austenite, so that the transformation between them results in a change of volume. In this case, expansion occurs. Internal stresses from this expansion generally take the form of compression on the crystals of martensite and tension on the remaining ferrite, with a fair amount of shear on both constituents. If quenching is done improperly, the internal stresses can cause a part to shatter as it cools. At the very least, they cause internal work hardening and other microscopic imperfections. It is common for quench cracks to form when steel is water quenched, although they may not always be visible.
Heat treatment
There are many types of heat treating processes available to steel. The most common are annealing, quenching, and tempering.
Annealing is the process of heating the steel to a sufficiently high temperature to relieve local internal stresses. It does not create a general softening of the product but only locally relieves strains and stresses locked up within the material. Annealing goes through three phases: recovery, recrystallization, and grain growth. The temperature required to anneal a particular steel depends on the type of annealing to be achieved and the alloying constituents.
Quenching involves heating the steel to create the austenite phase and then quenching it in water or oil. This rapid cooling results in a hard but brittle martensitic structure. The steel is then tempered, which is just a specialized type of annealing, to reduce brittleness. In this application the annealing (tempering) process transforms some of the martensite into cementite or spheroidite, and hence reduces the internal stresses and defects. The result is a more ductile and fracture-resistant steel.
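The qualitative outcomes described above can be summarized in a short sketch (illustrative only; the function name and result strings are invented, and real heat-treatment outcomes also depend on alloy content, section size, and cooling medium):

```python
# Illustrative sketch of the heat treatments described above, not a
# metallurgical model. Encodes: slow cooling -> pearlitic structures;
# quenching -> martensite (BCC below ~0.2% C, BCT above); tempering
# converts some martensite to cementite/spheroidite, restoring ductility.
def heat_treat(carbon_wt_pct: float, treatment: str) -> str:
    if treatment == "anneal":
        return "ferrite + pearlite: soft, internal stresses relieved"
    if treatment == "quench":
        form = "BCC" if carbon_wt_pct < 0.2 else "BCT"
        return f"martensite ({form}): very hard but brittle"
    if treatment == "quench+temper":
        return "tempered martensite with some cementite/spheroidite: hard yet tougher"
    raise ValueError(f"unknown treatment: {treatment}")

print(heat_treat(0.45, "quench"))         # martensite (BCT): very hard but brittle
print(heat_treat(0.45, "quench+temper"))  # tempered martensite ... hard yet tougher
```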
Production
When iron is smelted from its ore, it contains more carbon than is desirable. To become steel, it must be reprocessed to reduce the carbon to the correct amount, at which point other elements can be added. In the past, steel facilities would cast the raw steel product into ingots which would be stored until use in further refinement processes that resulted in the finished product. In modern facilities, the initial product is close to the final composition and is continuously cast into long slabs, cut and shaped into bars and extrusions and heat treated to produce a final product. Today, approximately 96% of steel is continuously cast, while only 4% is produced as ingots.
The ingots are then heated in a soaking pit and hot rolled into slabs, billets, or blooms. Slabs are hot or cold rolled into sheet metal or plates. Billets are hot or cold rolled into bars, rods, and wire. Blooms are hot or cold rolled into structural steel, such as I-beams and rails. In modern steel mills these processes often occur in one assembly line, with ore coming in and finished steel products coming out. Sometimes after a steel's final rolling, it is heat treated for strength; however, this is relatively rare.
History
Ancient
Steel was known in antiquity and was produced in bloomeries and crucibles.
The earliest known production of steel is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük) which are nearly 4,000 years old, dating from 1800 BC.
Wootz steel was developed in Southern India and Sri Lanka in the 1st millennium BCE. Metal production sites in Sri Lanka employed wind furnaces driven by the monsoon winds, capable of producing high-carbon steel. Large-scale wootz steel production in India using crucibles occurred by the sixth century BC, the pioneering precursor to modern steel production and metallurgy.
High-carbon steel was produced in Britain at Broxmouth Hillfort from 490 to 375 BC, and ultrahigh-carbon steel was produced in the Netherlands from the 2nd to the 4th centuries AD. The Roman author Horace identifies steel weapons such as the falcata in the Iberian Peninsula, while Noric steel was used by the Roman military.
The Chinese of the Warring States period (403–221 BC) had quench-hardened steel, while the Chinese of the Han dynasty (202 BC – AD 220) created steel by melting together wrought iron with cast iron, thus producing a carbon-intermediate steel by the 1st century AD.
There is evidence that carbon steel was made in Western Tanzania by the ancestors of the Haya people as early as 2,000 years ago by a complex process of "pre-heating" allowing temperatures inside a furnace to reach 1300 to 1400 °C.
Wootz and Damascus
Evidence of the earliest production of high-carbon steel in South Asia is found in Kodumanal in Tamil Nadu, the Golconda area in Andhra Pradesh, and Karnataka, regions of India, as well as in Samanalawewa and Dehigaha Alakanda, regions of Sri Lanka. This came to be known as wootz steel, produced in South India by about the sixth century BC and exported globally. Steel technology existed in the region prior to 326 BC, as it is mentioned in Sangam Tamil, Arabic, and Latin literature as the finest steel in the world, exported to the Roman, Egyptian, Chinese, and Arab worlds of that time, where it was known as Seric iron. A 200 BC Tamil trade guild in Tissamaharama, in the south east of Sri Lanka, brought with them some of the oldest iron and steel artifacts and production processes to the island from the classical period. The Chinese and locals in Anuradhapura, Sri Lanka had also adopted the production methods of creating wootz steel from the Chera dynasty Tamils of South India by the 5th century AD. In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds, capable of producing high-carbon steel. Since the technology was acquired from the Tamils of South India, the origin of steel technology in India can be conservatively estimated at 400–500 BC.
The manufacture of wootz steel and Damascus steel, famous for its durability and ability to hold an edge, may have been taken by the Arabs from Persia, who took it from India. In 327 BC, Alexander the Great was rewarded by the defeated King Porus, not with gold or silver but with 30 pounds of steel. A recent study has speculated that carbon nanotubes were included in its structure, which might explain some of its legendary qualities, though, given the technology of that time, such qualities were produced by chance rather than by design. The furnaces used natural wind, and the iron-bearing soil was heated with wood. The ancient Sinhalese managed to extract a ton of steel for every 2 tons of soil, a remarkable feat at the time. One such furnace was found in Samanalawewa, and archaeologists were able to produce steel as the ancients did.
Crucible steel, formed by slowly heating and cooling pure iron and carbon (typically in the form of charcoal) in a crucible, was produced in Merv by the 9th to 10th century AD. In the 11th century, there is evidence of the production of steel in Song China using two techniques: a "berganesque" method that produced inferior, inhomogeneous steel, and a precursor to the modern Bessemer process that used partial decarburization via repeated forging under a cold blast.
Modern
Since the 17th century, the first step in European steel production has been the smelting of iron ore into pig iron in a blast furnace. Originally employing charcoal, modern methods use coke, which has proven more economical.
Processes starting from bar iron
In these processes, pig iron made from raw iron ore was refined (fined) in a finery forge to produce bar iron, which was then used in steel-making.
The production of steel by the cementation process was described in a treatise published in Prague in 1574 and was in use in Nuremberg from 1601. A similar process for case hardening armour and files was described in a book published in Naples in 1589. The process was introduced to England in about 1614 and used to produce such steel by Sir Basil Brooke at Coalbrookdale during the 1610s.
The raw material for this process was bars of iron. During the 17th century, it was realized that the best steel came from oregrounds iron of a region north of Stockholm, Sweden. This was still the usual raw-material source in the 19th century, almost as long as the process was used.
Crucible steel is steel that has been melted in a crucible rather than having been forged, with the result that it is more homogeneous. Most previous furnaces could not reach high enough temperatures to melt the steel. The early modern crucible steel industry resulted from the invention of Benjamin Huntsman in the 1740s. Blister steel (made as above) was melted in a crucible or in a furnace, and cast (usually) into ingots.
Processes starting from pig iron
The modern era in steelmaking began with the introduction of Henry Bessemer's process in 1855, the raw material for which was pig iron. His method let him produce steel in large quantities cheaply, and thus mild steel came to be used for most purposes for which wrought iron was formerly used. The Gilchrist-Thomas process (or basic Bessemer process) was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus.
Another 19th-century steelmaking process was the Siemens-Martin process, which complemented the Bessemer process. It consisted of co-melting bar iron (or steel scrap) with pig iron.
These methods of steel production were rendered obsolete by the Linz-Donawitz process of basic oxygen steelmaking (BOS), developed in 1952, and other oxygen steel making methods. Basic oxygen steelmaking is superior to previous steelmaking methods because the oxygen pumped into the furnace limited impurities, primarily nitrogen, that previously had entered from the air used, and because, with respect to the open hearth process, the same quantity of steel from a BOS process is manufactured in one-twelfth the time. Today, electric arc furnaces (EAF) are a common method of reprocessing scrap metal to create new steel. They can also be used for converting pig iron to steel, but they use a lot of electrical energy (about 440 kWh per metric ton), and are thus generally only economical when there is a plentiful supply of cheap electricity.
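The dependence on electricity prices is easy to see from the quoted figure of about 440 kWh per metric ton. Assuming, purely for illustration, an industrial tariff of $0.06 per kWh (an assumed price, not from this article):

$$ 440\ \tfrac{\text{kWh}}{\text{t}} \times \$0.06/\text{kWh} \approx \$26\ \text{per tonne of steel}, $$

a cost that scales linearly with the tariff, which is why EAF operation favours regions with cheap power.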
Industry
The steel industry is often considered an indicator of economic progress, because of the critical role played by steel in infrastructural and overall economic development. In 1980, there were more than 500,000 U.S. steelworkers. By 2000, the number of steelworkers had fallen to 224,000.
The economic boom in China and India caused a massive increase in the demand for steel. Between 2000 and 2005, world steel demand increased by 6%. Since 2000, several Indian and Chinese steel firms have expanded to meet demand, such as Tata Steel (which bought Corus Group in 2007), Baosteel Group, and Shagang Group. ArcelorMittal, though, is the world's largest steel producer.
In 2005, the British Geological Survey stated China was the top steel producer with about one-third of the world share; Japan, Russia, and the United States were second, third, and fourth, respectively, according to the survey. The large production capacity of steel also results in significant carbon dioxide emissions inherent to the main production route.
At the end of 2008, the steel industry faced a sharp downturn that led to many cut-backs.
In 2021, it was estimated that around 7% of global greenhouse gas emissions resulted from the steel industry. Reductions in these emissions are expected to come from a shift away from the main coke-based production route, more recycling of steel, and the application of carbon capture and storage technology.
Recycling
Steel is one of the world's most-recycled materials, with a recycling rate of over 60% globally; in the United States alone, the overall recycling rate for 2008 was 83%.
As more steel is produced than is scrapped, the amount of recycled raw materials is about 40% of the total steel produced; in 2016, about 1,628 million tonnes of crude steel were produced globally, with about 630 million tonnes recycled.
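These figures are consistent with the roughly 40% share just mentioned:

$$ \frac{630}{1628} \approx 0.39 \approx 40\%. $$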
Contemporary
Carbon
Modern steels are made with varying combinations of alloy metals to fulfil many purposes. Carbon steel, composed simply of iron and carbon, accounts for 90% of steel production. Low alloy steel is alloyed with other elements, usually molybdenum, manganese, chromium, or nickel, in amounts of up to 10% by weight to improve the hardenability of thick sections. High strength low alloy steel has small additions (usually < 2% by weight) of other elements, typically 1.5% manganese, to provide additional strength for a modest price increase.
Recent corporate average fuel economy (CAFE) regulations have given rise to a new variety of steel known as Advanced High Strength Steel (AHSS). This material is both strong and ductile so that vehicle structures can maintain their current safety levels while using less material. There are several commercially available grades of AHSS, such as dual-phase steel, which is heat treated to contain both a ferritic and martensitic microstructure to produce a formable, high strength steel. Transformation Induced Plasticity (TRIP) steel involves special alloying and heat treatments to stabilize amounts of austenite at room temperature in normally austenite-free low-alloy ferritic steels. By applying strain, the austenite undergoes a phase transition to martensite without the addition of heat. Twinning Induced Plasticity (TWIP) steel uses a specific type of strain to increase the effectiveness of work hardening on the alloy.
Carbon steels are often galvanized, through hot-dip coating or electroplating in zinc, for protection against rust.
Alloy
Stainless steel contains a minimum of 11% chromium, often combined with nickel, to resist corrosion. Some stainless steels, such as the ferritic stainless steels, are magnetic, while others, such as the austenitic, are nonmagnetic. Corrosion-resistant steels are abbreviated as CRES.
Alloy steels are plain-carbon steels in which small amounts of alloying elements like chromium and vanadium have been added. Some more modern steels include tool steels, which are alloyed with large amounts of tungsten and cobalt or other elements to maximize solution hardening. This also allows the use of precipitation hardening and improves the alloy's temperature resistance. Tool steel is generally used in axes, drills, and other devices that need a sharp, long-lasting cutting edge. Other special-purpose alloys include weathering steels such as Cor-ten, which weather by acquiring a stable, rusted surface, and so can be used un-painted. Maraging steel is alloyed with nickel and other elements, but unlike most steel contains little carbon (0.01%). This creates a very strong but still malleable steel.
Eglin steel uses a combination of over a dozen different elements in varying amounts to create a relatively low-cost steel for use in bunker buster weapons. Hadfield steel, named after Robert Hadfield, or manganese steel, contains 12–14% manganese which, when abraded, strain-hardens to form a very hard skin which resists wearing. Uses of this particular alloy include tank tracks, bulldozer blade edges, and cutting blades on the jaws of life.
Standards
Most of the more commonly used steel alloys are categorized into various grades by standards organizations. For example, the Society of Automotive Engineers has a series of grades defining many types of steel. The American Society for Testing and Materials has a separate set of standards, which define alloys such as A36 steel, the most commonly used structural steel in the United States. The Japanese Industrial Standards (JIS) also define a series of steel grades that are used extensively in Japan as well as in developing countries.
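As an illustration of how such grade designations encode composition, the sketch below decodes common four-digit SAE/AISI numbers under the usual convention that the last two digits give the nominal carbon content in hundredths of a percent; the family table is abridged and the function name is invented:

```python
# Sketch: decode a 4-digit SAE/AISI steel grade designation.
# Convention: the first two digits identify the alloy family and the
# last two give nominal carbon content in hundredths of a percent,
# e.g. 1045 -> plain carbon steel with ~0.45% C.
FAMILIES = {
    "10": "plain carbon steel",
    "11": "resulfurized (free-machining) carbon steel",
    "41": "chromium-molybdenum alloy steel",
    "43": "nickel-chromium-molybdenum alloy steel",
}

def decode_sae_grade(grade: str) -> str:
    if len(grade) != 4 or not grade.isdigit():
        raise ValueError("expected a 4-digit grade such as '1045'")
    family = FAMILIES.get(grade[:2], "other/unlisted family")
    carbon = int(grade[2:]) / 100.0  # nominal wt% carbon
    return f"SAE {grade}: {family}, ~{carbon:.2f}% C"

print(decode_sae_grade("1045"))  # SAE 1045: plain carbon steel, ~0.45% C
print(decode_sae_grade("4140"))  # SAE 4140: chromium-molybdenum alloy steel, ~0.40% C
```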
Uses
Iron and steel are used widely in the construction of roads, railways, other infrastructure, appliances, and buildings. Most large modern structures, such as stadiums and skyscrapers, bridges, and airports, are supported by a steel skeleton. Even those with a concrete structure employ steel for reinforcing. It sees widespread use in major appliances and cars. Despite the growth in usage of aluminium, steel is still the main material for car bodies. Steel is used in a variety of other construction materials, such as bolts, nails and screws, and other household products and cooking utensils.
Other common applications include shipbuilding, pipelines, mining, offshore construction, aerospace, white goods (e.g. washing machines), heavy equipment such as bulldozers, office furniture, steel wool, tools, and armour in the form of personal vests or vehicle armour (better known as rolled homogeneous armour in this role).
Historical
Before the introduction of the Bessemer process and other modern production techniques, steel was expensive and was only used where no cheaper alternative existed, particularly for the cutting edge of knives, razors, swords, and other items where a hard, sharp edge was needed. It was also used for springs, including those used in clocks and watches.
With the advent of faster and cheaper production methods, steel has become easier to obtain and much cheaper. It has replaced wrought iron for a multitude of purposes. However, the availability of plastics in the latter part of the 20th century allowed these materials to replace steel in some applications due to their lower fabrication cost and weight. Carbon fibre is replacing steel in some cost-insensitive applications such as sports equipment and high-end automobiles.
Long
As reinforcing bars and mesh in reinforced concrete
Railroad tracks
Structural steel in modern buildings and bridges
Wires
Input to reforging applications
Flat carbon
Major appliances
Magnetic cores
The inside and outside body of automobiles, trains, and ships.
Weathering (COR-TEN)
Intermodal containers
Outdoor sculptures
Architecture
Highliner train cars
Stainless
Cutlery
Rulers
Surgical instruments
Watches
Guns
Rail passenger vehicles
Tablets
Trash cans
Body piercing jewellery
Inexpensive rings
Components of spacecraft and space stations
Low-background
Steel manufactured after World War II became contaminated with radionuclides by nuclear weapons testing. Low-background steel, steel manufactured prior to 1945, is used for certain radiation-sensitive applications such as Geiger counters and radiation shielding.
See also
Bulat steel
Direct reduction
Carbon steel
Damascus steel
Galvanizing
History of the steel industry (1970–present)
Iron in folklore
List of blade materials
Machinability
Noric steel
Pelletizing
Rolling
Rolling mill
Rust Belt
Second Industrial Revolution
Silicon steel
Steel abrasive
Steel mill
Tamahagane, used in Japanese swords
Tinplate
Toledo steel
Wootz steel
References
Bibliography
Further reading
External links
Official website of the World Steel Association (WorldSteel.org)
SteelUniversity.org – online steel education resources, an initiative of World Steel Association
MATDAT Database of Properties of Unalloyed, Low-Alloy and High-Alloy Steels – obtained from published results of materials testing
2nd-millennium BC introductions
Building materials
Roofing materials | Steel | [
"Physics",
"Engineering"
] | 6,279 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |