| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
10,426,512 | https://en.wikipedia.org/wiki/6-Phosphogluconic%20acid | 6-Phosphogluconic acid (with conjugate base 6-phosphogluconate) is a phosphorylated sugar acid which appears in the pentose phosphate pathway and the Entner–Doudoroff pathway.
During the oxidative phase of the pentose phosphate pathway, it is formed from 6-phosphogluconolactone by 6-phosphogluconolactonase, and in turn, it is converted to ribulose 5-phosphate by phosphogluconate dehydrogenase, in an oxidative decarboxylation which also produces NADPH.
In those microorganisms which host the Entner–Doudoroff pathway, 6-phosphogluconic acid may also be acted upon by 6-phosphogluconate dehydratase to produce 2-keto-3-deoxy-6-phosphogluconate.
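For reference, the overall oxidative decarboxylation catalyzed by phosphogluconate dehydrogenase can be sketched as follows; this is a standard textbook stoichiometry supplied for illustration, not an equation quoted from the original article:

```latex
\text{6-phospho-D-gluconate} + \mathrm{NADP^{+}} \longrightarrow \text{D-ribulose 5-phosphate} + \mathrm{CO_{2}} + \mathrm{NADPH}
```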
Organophosphates | 6-Phosphogluconic acid | [
"Chemistry",
"Biology"
] | 215 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
10,426,763 | https://en.wikipedia.org/wiki/6-Phosphogluconolactone | 6-Phosphogluconolactone is an intermediate in the pentose phosphate pathway (PPP).
In the PPP, it is produced from glucose-6-phosphate by glucose-6-phosphate dehydrogenase. It is then converted to 6-phosphogluconic acid by 6-phosphogluconolactonase.
See also
Lactone
References
Organophosphates
Pentose phosphate pathway
Delta-lactones | 6-Phosphogluconolactone | [
"Chemistry",
"Biology"
] | 99 | [
"Carbohydrate metabolism",
"Pentose phosphate pathway",
"Biotechnology stubs",
"Biochemistry stubs",
"Biochemistry"
] |
10,428,127 | https://en.wikipedia.org/wiki/Thiophosphoryl%20chloride | Thiophosphoryl chloride is an inorganic compound with the chemical formula PSCl3. It is a colorless, pungent-smelling liquid that fumes in air. It is synthesized from phosphorus trichloride and used to thiophosphorylate organic compounds, such as to produce insecticides.
Synthesis
Thiophosphoryl chloride can be generated by several reactions starting from phosphorus trichloride. The most common and practical synthesis, and hence the one used in industrial manufacturing, is the direct reaction of phosphorus trichloride with excess sulfur at 180 °C.
Using this method, yields can be very high after purification by distillation. Catalysts facilitate the reaction at lower temperatures, but are not usually necessary.
Alternatively, it is obtained by combining phosphorus pentasulfide and phosphorus pentachloride.
Structure
Thiophosphoryl chloride has tetrahedral molecular geometry and C3v molecular symmetry, with the structure S=PCl3. According to gas electron diffraction, the phosphorus–sulfur bond length is 189 pm and the phosphorus–chlorine bond length is 201 pm, while the Cl–P–Cl bond angle is 102°.
Reactions
PSCl3 is soluble in benzene, carbon tetrachloride, chloroform, and carbon disulfide. However, it reacts rapidly with basic or hydroxylic solvents, such as alcohols and amines, to produce thiophosphates. It also reacts with water and, depending on the reaction conditions, produces either phosphoric acid, hydrogen sulfide, and hydrochloric acid, or dichlorothiophosphoric acid and hydrochloric acid.
An intermediate in this process appears to be tetraphosphorus nonasulfide.
PSCl3 is used to thiophosphorylate organic compounds, i.e., to add the thiophosphoryl group (P=S, with three free valences at the P atom) to them. This conversion is widely applicable to amines and alcohols, as well as aminoalcohols, diols, and diamines. Industrially, PSCl3 is used to produce insecticides such as parathion.
PSCl3 reacts with tertiary amides to generate thioamides.
When treated with methylmagnesium iodide, it gives tetramethyldiphosphine disulfide.
References
Phosphorus halides
Thiophosphoryl compounds
Thiochlorides | Thiophosphoryl chloride | [
"Chemistry"
] | 496 | [
"Functional groups",
"Thiophosphoryl compounds"
] |
10,430,839 | https://en.wikipedia.org/wiki/TCEP | TCEP (tris(2-carboxyethyl)phosphine) is a reducing agent frequently used in biochemistry and molecular biology applications. It is often prepared and used as a hydrochloride salt (TCEP-HCl) with a molecular weight of 286.65 g/mol. It is soluble in water and is available both as a stabilized solution at neutral pH and immobilized onto an agarose support, the latter to facilitate removal of the reducing agent.
Synthesis
TCEP can be prepared by the acid hydrolysis of tris(cyanoethyl)phosphine.
Applications
TCEP is often used as a reducing agent to break disulfide bonds within and between proteins as a preparatory step for gel electrophoresis.
Compared to the other two most common agents used for this purpose (dithiothreitol and β-mercaptoethanol), TCEP has the advantages of being odorless, a more powerful reducing agent, irreversible (in the sense that TCEP does not regenerate; the end product of TCEP-mediated disulfide cleavage is two free thiols/cysteines), more hydrophilic, and more resistant to oxidation in air. It also does not reduce metals used in immobilized metal affinity chromatography.
TCEP is particularly useful when labeling cysteine residues with maleimides. TCEP can keep the cysteines from forming disulfide bonds and, unlike dithiothreitol and β-mercaptoethanol, it does not react as readily with the maleimide. However, TCEP has been reported to react with maleimide under certain conditions.
TCEP is also used in the tissue homogenization process for RNA isolation.
For ultraviolet–visible spectroscopy applications, TCEP is useful when it is important to avoid the interfering absorbance from 250 to 285 nanometers that can occur with dithiothreitol, which slowly absorbs more and more light in this region as various redox reactions occur.
History
Reduction of biomolecules with trialkylphosphines received little attention for decades because historically available phosphines were extremely malodorous and/or insoluble in water. In 1969, TCEP was reported as an odorless and water-soluble trialkylphosphine suitable for biochemical use; however, its potential for biochemical applications was almost totally ignored for decades. In 1991, Burns reported a new, convenient synthetic procedure for TCEP, after which TCEP became more widely available and was marketed as a "new" reducing agent for biochemical use, coming into widespread use throughout the 1990s.
Reactions
TCEP will reduce disulfides to thiols in the presence of water:
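(The scheme below is the standard depiction of phosphine-mediated disulfide reduction, supplied here as a sketch rather than reproduced from the original article; TCEP is consumed by oxidation to its phosphine oxide.)

```latex
\mathrm{TCEP} + R\text{-}S\text{-}S\text{-}R' + \mathrm{H_{2}O} \longrightarrow \mathrm{TCEP\text{=}O} + R\text{-}SH + R'\text{-}SH
```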
Via a similar process it can also reduce sulfoxides and N-oxides. Some other side reactions have also been reported:
Conversion of a cysteine residue into alanine in the presence of TCEP and heat (90 °C).
Slow (but significant; 40% cleavage reported after two weeks of storage at 4 °C) protein backbone cleavage at cysteine residues under mild conditions.
Use in biological research
TCEP is available from various chemical suppliers as the hydrochloride salt. When dissolved in water, TCEP-HCl is acidic. A reported preparation is a 0.5 M TCEP-HCl aqueous stock solution that is pH-adjusted to near-neutral and stored frozen at −20 °C. TCEP is reportedly less stable in phosphate buffers.
See also
2-Mercaptoethanol (BME)
Dithiothreitol (DTT)
Dithiobutylamine (DTBA)
References
Carboxylic acids
Tertiary phosphines
Reducing agents | TCEP | [
"Chemistry"
] | 794 | [
"Carboxylic acids",
"Redox",
"Functional groups",
"Reducing agents"
] |
18,466,294 | https://en.wikipedia.org/wiki/Photo%E2%80%93Dember%20effect | In semiconductor physics, the photo–Dember effect (named after its discoverer Harry Dember) is the formation of a charge dipole in the vicinity of a semiconductor surface after ultra-fast photo-generation of charge carriers.
The dipole forms owing to the difference of mobilities (or diffusion constants) of holes and electrons, which, combined with the symmetry breaking provided by the surface, leads to an effective charge separation in the direction perpendicular to the surface. In an isolated sample, where the macroscopic flow of an electric current is prohibited, the fast carriers (often the electrons) are slowed and the slow carriers (often the holes) are accelerated by an electric field, called the Dember field.
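As a hedged aside, a common textbook expression for the steady-state Dember photovoltage (not taken from this article) writes it in terms of the electron-to-hole mobility ratio b = μe/μh and the conductivities with and without photoexcitation:

```latex
V_{D} = \frac{k_{B}T}{e}\,\frac{b-1}{b+1}\,\ln\frac{\sigma_{\mathrm{illuminated}}}{\sigma_{\mathrm{dark}}}, \qquad b = \mu_{e}/\mu_{h}
```

The prefactor vanishes when electron and hole mobilities are equal (b = 1), consistent with the qualitative picture above.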
One of the main applications of the photo–Dember effect is the generation of terahertz (THz) radiation pulses for terahertz time-domain spectroscopy. The effect is present in most semiconductors, but it is particularly strong in narrow-gap semiconductors (mainly arsenides and antimonides) such as InAs and InSb, owing to their high electron mobility. The photo–Dember terahertz emission should not be confused with surface-field emission, which occurs if the surface energy bands of a semiconductor fall between its valence and conduction bands. This produces a phenomenon known as Fermi level pinning, which in turn causes band bending and consequently the formation of a depletion or accumulation layer close to the surface, which contributes to the acceleration of charge carriers. The two effects can contribute constructively or destructively to the dipole formation, depending on the direction of the band bending.
See also
Photoelectrochemical process
References
Semiconductors
Terahertz technology
Optoelectronics | Photo–Dember effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 354 | [
"Matter",
"Spectrum (physical sciences)",
"Physical quantities",
"Terahertz technology",
"Semiconductors",
"Electromagnetic spectrum",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Electrical resistance and conductance"
] |
18,467,408 | https://en.wikipedia.org/wiki/Chebychev%E2%80%93Gr%C3%BCbler%E2%80%93Kutzbach%20criterion | The Chebychev–Grübler–Kutzbach criterion determines the number of degrees of freedom of a kinematic chain, that is, a coupling of rigid bodies by means of mechanical constraints. These devices are also called linkages.
The Kutzbach criterion is also called the mobility formula, because it computes the number of parameters that define the configuration of a linkage from the number of links and joints and the degree of freedom at each joint.
Interesting and useful linkages have been designed that violate the mobility formula by using special geometric features and dimensions to provide more mobility than predicted by this formula. These devices are called overconstrained mechanisms.
Mobility formula
The mobility formula counts the number of parameters that define the positions of a set of rigid bodies and then reduces this number by the constraints that are imposed by joints connecting these bodies.
Imagine a spherical seagull. A single unconstrained body soaring in 3-space has six degrees of freedom: three translational (say, x, y, z) and three rotational (say, roll, pitch, yaw).
So a system of n unconnected rigid bodies moving in space (a flock of soaring seagulls) has 6n degrees of freedom measured relative to a fixed frame (coordinate system). The fixed frame can be chosen arbitrarily (an observer anywhere on the beach). And the frame can even be local or subjective: from the viewpoint of one of the seagulls, the world moves around it while it stays fixed. So this frame can be included in the count of bodies (the flock of seagulls as seen from chosen gull A, whether A is standing on the beach or flying, looking at the flock from its fixed local viewpoint), and thus mobility is independent of the choice of the link that will form the fixed frame. Then the degree of freedom of this system is M = 6(N − 1), where N is the number of moving bodies plus the fixed body.
Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one-degree-of-freedom joints, f = 1 and therefore c = 6 − 1 = 5.
The result is that the mobility of a system formed from n moving links and j joints, each with freedom fi (i = 1, ..., j), is given by
M = 6n − Σ(6 − fi) = 6(N − 1 − j) + Σfi,
where N = n + 1 and the sums run over the j joints.
Recall that N includes the fixed link.
There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A simple open chain consists of n moving links connected end to end by n joints, with one end connected to a ground link. Thus, in this case N = n + 1, j = n, and the mobility of the chain is M = Σfi.
For a simple closed chain, n moving links are connected end to end by n + 1 joints such that the two ends are connected to the ground link, forming a loop. In this case, we have N = n + 1, j = n + 1, and the mobility of the chain is M = Σfi − 6.
An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one-degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom.
An example of a simple closed chain is the RSSR spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints.
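A minimal computational sketch of the mobility formula follows; the function name and interface are inventions for illustration, but the formula itself is the Kutzbach criterion stated above.

```python
def mobility(num_links, joint_freedoms, space_dof=6):
    """Chebychev-Grubler-Kutzbach mobility M = d(N - 1 - j) + sum(f_i).

    num_links      -- N, the number of links *including* the fixed frame
    joint_freedoms -- one entry f_i per joint
    space_dof      -- d: 6 for spatial systems, 3 for planar/spherical ones
    """
    j = len(joint_freedoms)
    return space_dof * (num_links - 1 - j) + sum(joint_freedoms)

# Serial robot manipulator: 6 moving links + ground, six 1-DOF joints -> M = 6
print(mobility(7, [1, 1, 1, 1, 1, 1]))   # 6

# RSSR spatial four-bar: 3 moving links + ground; joints R, S, S, R -> M = 2
print(mobility(4, [1, 3, 3, 1]))         # 2
```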
Planar and spherical movement
It is common practice to design the linkage system so that the movement of all of the bodies is constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of the links in each system are now three rather than six, and the constraints imposed by joints are now c = 3 − f.
In this case, the mobility formula is given by
M = 3(N − 1 − j) + Σfi,
and the special cases become
M = Σfi for a planar or spherical simple open chain,
M = Σfi − 3 for a planar or spherical simple closed chain.
An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1.
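Under the same assumptions, the sketch given earlier reproduces this planar result by switching the space dimension to three:

```python
# Planar four-bar linkage: four links (incl. ground), four 1-DOF joints -> M = 1
print(mobility(4, [1, 1, 1, 1], space_dof=3))   # 1
```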
See also
Burmester theory
Four-bar linkage
Linkage (mechanical)
Machine (mechanical)
Mechanical system
Overconstrained mechanism
Notes and references
Further reading
External links
Basic kinematics of rigid bodies
Chebychev–Grübler–Kutzbach's criterion - mobility formula calculator
Robot control
Mechanical power transmission | Chebychev–Grübler–Kutzbach criterion | [
"Physics",
"Engineering"
] | 978 | [
"Mechanical power transmission",
"Mechanics",
"Robotics engineering",
"Robot control"
] |
18,468,216 | https://en.wikipedia.org/wiki/Shearing%20%28manufacturing%29 | Shearing, also known as die cutting, is a process that cuts stock without the formation of chips or the use of burning or melting. Strictly speaking, if the cutting blades are straight the process is called shearing; if the cutting blades are curved then they are shearing-type operations. The most commonly sheared materials are in the form of sheet metal or plates. However, rods can also be sheared. Shearing-type operations include blanking, piercing, roll slitting, and trimming. It is used for metal, fabric, paper and plastics.
Principle
A punch (or moving blade) is used to push the workpiece against the die (or fixed blade), which is fixed. The clearance between the two is usually 5 to 40% of the thickness of the material, depending on the material. Clearance is defined as the separation between the blades, measured at the point where the cutting action takes place and perpendicular to the direction of blade movement. It affects the finish of the cut (burr) and the machine's power consumption. The punching action causes the material to experience highly localized shear stresses between the punch and die. The material fails when the punch has moved 15 to 60% of the way through the thickness of the material, because the shear stresses exceed the shear strength of the material; the remainder of the material is torn.
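As an illustration of the clearance rule of thumb above, the sketch below computes a blade gap from stock thickness. The function and its 8% default are illustrative choices; only the 5 to 40% window comes from the text.

```python
def blade_clearance(thickness, fraction=0.08):
    """Return the punch-die clearance for a given stock thickness.

    The article cites a typical clearance of 5-40% of material thickness;
    the 8% default is an arbitrary midrange example, and the right value
    depends on the material being sheared.
    """
    if not 0.05 <= fraction <= 0.40:
        raise ValueError("clearance fraction outside the cited 5-40% window")
    return thickness * fraction

# e.g. 2.0 mm sheet at the 8% example fraction -> 0.16 mm clearance
print(blade_clearance(2.0))
```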
Two distinct zones can be seen on a sheared workpiece: the first is the zone of plastic deformation, and the second is the fracture zone. Because of normal inhomogeneities in materials and inconsistencies in clearance between the punch and die, the shearing action does not occur in a uniform manner. The fracture begins at the weakest point and progresses to the next weakest point until the entire workpiece has been sheared; this is what causes the rough edge. The rough edge can be reduced if the workpiece is clamped from the top with a die cushion. Above a certain pressure, the fracture zone can be completely eliminated. However, the sheared edge of the workpiece will usually experience work hardening and cracking. If there is too much clearance, the workpiece may experience roll-over or heavy burring.
Tool materials
Low-alloy steel is used in low production of materials that range up to 0.64 cm (1/4 in) thick
High-carbon, high-chromium steel is used in high production of materials that also range up to 0.64 cm (1/4 in) in thickness
Shock-resistant steel is used in materials that are 0.64 cm (1/4 in) thick or more
Tolerances and surface finish
When shearing a sheet, the typical tolerance is ±0.1 in, but it is feasible to hold the tolerance to within ±0.005 in. When shearing a bar or angle, the typical tolerance is ±0.06 in, but ±0.03 in is possible. Surface finishes typically fall within the 250 to 1000 microinch range, but can range from 125 to 2000 microinches. A secondary operation is required if a better surface finish is needed.
See also
Alligator shear
Shear (sheet metal)
Stamping (metalworking)
References
Citations
General sources
External links
Shearing Capacity Guide
Cutting machines
Fabrication (metal)
Metalworking cutting tools
Machine tool builders | Shearing (manufacturing) | [
"Physics",
"Technology"
] | 693 | [
"Physical systems",
"Machines",
"Cutting machines"
] |
18,469,959 | https://en.wikipedia.org/wiki/George%20S.%20Hammond | George Simms Hammond (May 22, 1921 – October 5, 2005) was an American scientist and theoretical chemist who developed "Hammond's postulate", and fathered organic photochemistry,–the general theory of the geometric structure of the transition state in an organic chemical reaction. Hammond's research is also known for its influence on the philosophy of science. His research garnered him the Norris Award in 1968, the Priestley Medal in 1976, the National Medal of Science in 1994, and the Othmer Gold Medal in 2003. He served as the executive chairman of the Allied Chemical Corporation from 1979 to 1989.
He was a chemist at the California Institute of Technology, and subsequently headed both the Departments of Chemistry and Chemical Engineering at the university. He conducted research at the University of Oxford and University of Basel as a Guggenheim Fellow and National Science Foundation Fellow, respectively. He served as the foreign secretary of the National Academy of Sciences from 1974 to 1978.
A native of Maine, he was born and raised in Auburn; he attended nearby Bates College in Lewiston, Maine, where he graduated magna cum laude with a B.S. in chemistry in 1943. He completed his doctorate at Harvard University in 1947, under the mentorship of Paul Doughty Bartlett, and a postdoctorate at University of California, Los Angeles with Saul Winstein in 1948.
Early life and education
George Simms Hammond was born on May 22, 1921, in Auburn, Maine. Growing up in Auburn, his family was charged with the operation of the neighborhood dairy farm on Hardscrabble Road. His father died when Hammond was thirteen. The oldest of seven children, he was raised by a single mother, and from an early age he was charged with running the day-to-day operations of the dairy farm with his mother and older siblings. Hammond's parents were college graduates but disliked the local schools in Auburn, so he was homeschooled until the sixth grade. Afterwards, he was educated at various Auburn public schools before graduating in 1938. After graduating he took a gap year to continue operating the dairy farm. After this educational hiatus he applied to and was accepted into Bates College in Lewiston, Maine. He graduated with a Bachelor of Science in chemistry, magna cum laude and Phi Beta Kappa, in January 1943.
Early career
Upon graduating from college, Hammond took a position as a chemist at Rohm and Haas in Philadelphia, Pennsylvania. After some months on the job he quit to pursue graduate studies at Harvard University, where he received a Master of Science (M.S.) and a Doctor of Philosophy (Ph.D.). His thesis, Inhibition of the Polymerization of Allylacetate, was supervised by Paul Doughty Bartlett. Hammond then moved to Los Angeles, California, to study intermolecular compounds at UCLA.
Career in academia
His academic career began in 1948 with a teaching position at Iowa State College, where he served as Assistant Professor of Chemistry. In this capacity he published his eponymous postulate, now widely known as one of the most important publications in physical organic chemistry. He then moved to the University of Oxford and the University of Basel as a Guggenheim Fellow and National Science Foundation Fellow, respectively. In 1958, he moved to the California Institute of Technology as a Professor of Organic Chemistry. Later he was appointed the Arthur Amos Noyes Professor of Chemistry and subsequently went on to lead the Departments of Chemistry and Chemical Engineering. After 14 years of teaching and serving as an academic administrator at Caltech, he moved in 1972 to the University of California, Santa Cruz, where he served as both a professor and the chancellor of the natural sciences.
Life outside of academia
Aside from the academic world, during all these years George Hammond "made many public speeches on controversial themes, both political (e.g., the invasion of Cambodia, delivered in 1971 at a public rally on Caltech's Olive Walk) and scientific (e.g., the future of chemistry)". Many of these controversial speeches affected his career negatively. For example, after his speech at Olive Walk, president Richard Nixon's administration removed his name from nomination for a major NSF post. Nevertheless, he did not back down and continued to criticize the government. Not limiting himself to delivering speeches, he wrote a letter to the editor of a newspaper saying: "A June 30 front-page article describes the potential bonanza in arms sales to new members as the North Atlantic Treaty Organization expands. I was favorably inclined toward expansion because of my naive assumption that bringing most of the nations of Europe and North America together as a cooperating group would decrease the likelihood of war. I cannot believe this will be the case if a prerequisite for entry is that countries buy new armaments from present members. At whom will the guns be aimed? Russia? Then we will probably re-create the cold war." The way this excerpt is written says many things about George Hammond, starting with his passionate character. Hammond fought for everything he believed in. He cared about his nation and was somewhat reckless about the consequences he could suffer for defying the government. A sarcastic side of Hammond can also be perceived in the excerpt: a man of strong character with the ability to recognize when he is wrong.
Later pursuits
He was appointed as the Foreign Secretary of the National Academy of Sciences in 1974 and served for one term retiring in 1978. He also gave notable speeches on political issues such as the invasion of Cambodia, and various topics on Chemistry. The talks he gave sometimes had negative impacts on his life, exemplified by Nixon's withdrawal of his name for major National Science Foundation positions. In 1979 he retired from academia and joined the Allied Chemical Corporation as Executive Chairman, serving for ten years. He retired from this capacity and all others after his tenure concluded.
Scientific career
Hammond's postulate
George Hammond published a hypothesis in physical organic chemistry, now known as Hammond's postulate, which describes the geometric structure of the transition state in an organic chemical reaction.
His 1955 publication asserted:
"If two states, as, for example, a transition state and an unstable intermediate, occur consecutively during a reaction process and have nearly the same energy content, their interconversion will involve only a small reorganization of the molecular structures."
Therefore, the geometric structure of a state can be predicted by comparing its energy to the species neighboring it along the reaction coordinate. For example, in an exothermic reaction the transition state is closer in energy to the reactants than to the products. Therefore, the transition state will be more geometrically similar to the reactants than to the products. In contrast, however, in an endothermic reaction the transition state is closer in energy to the products than to the reactants. So, according to Hammond's postulate the structure of the transition state would resemble the products more than the reactants. This type of comparison is especially useful because most transition states cannot be characterized experimentally.
Hammond's postulate also helps to explain and rationalize the Bell–Evans–Polanyi principle. Namely, this principle describes the experimental observation that the rate of a reaction, and therefore its activation energy, is affected by the enthalpy change of that reaction. Hammond's postulate explains this observation by describing how varying the enthalpy of a reaction would also change the structure of the transition state. In turn, this change in geometric structure would alter the energy of the transition state, and therefore the activation energy and reaction rate as well.
The postulate has also been used to predict the shape of reaction coordinate diagrams. For example, electrophilic aromatic substitutions involves a distinct intermediate and two less well defined states. By measuring the effects of aromatic substituents and applying Hammond's postulate it was concluded that the rate-determining step involves formation of a transition state that should resemble the intermediate complex.
During the 1940s and 1950s, chemists had trouble explaining why even slight changes in the reactants caused significant differences in the rate and product distributions of a reaction. In 1955 George S. Hammond, a young professor at Iowa State University, postulated that transition-state theory could be used to qualitatively explain the observed structure-reactivity relationships. Notably, John E. Leffler of Florida State University proposed a similar idea in 1953. However, Hammond's version has received more attention since its qualitative nature was easier to understand and employ than Leffler's complex mathematical equations. Hammond's postulate is sometimes called the Hammond-Leffler postulate to give credit to both scientists.
Interpreting the postulate
Effectively, the postulate states that the structure of a transition state resembles that of the species nearest to it in free energy. This can be explained with reference to potential energy diagrams:
In case (a), which is an exothermic reaction, the energy of the transition state is closer in energy to that of the reactant than that of the intermediate or the product. Therefore, from the postulate, the structure of the transition state also more closely resembles that of the reactant. In case (b), the energy of the transition state is close to neither the reactant nor the product, making none of them a good structural model for the transition state. Further information would be needed in order to predict the structure or characteristics of the transition state. Case (c) depicts the potential diagram for an endothermic reaction, in which, according to the postulate, the transition state should more closely resemble that of the intermediate or the product.
Another significance of Hammond's postulate is that it permits us to discuss the structure of the transition state in terms of the reactants, intermediates, or products. In the case where the transition state closely resembles the reactants, the transition state is called “early” while a “late” transition state is the one that closely resembles the intermediate or the product.
An example of an "early" transition state is chlorination. Chlorination favors the products because it is an exothermic reaction, which means that the products are lower in energy than the reactants. The transition state itself cannot be observed during an experiment; to understand what is meant by an "early" transition state, the Hammond postulate is applied to a curve representing the kinetics of the reaction. Since the reactants are higher in energy, the transition state appears to occur right after the reaction starts.
An example of a "late" transition state is bromination. Bromination favors the reactants because it is an endothermic reaction, which means that the reactants are lower in energy than the products. Since the transition state is hard to observe, the postulate helps to picture the "late" transition state. Since the products are higher in energy, the transition state appears to occur right before the reaction is complete.
One other useful interpretation of the postulate often found in textbooks of organic chemistry is the following:
Assume that the transition states for reactions involving unstable intermediates can be closely approximated by the intermediates themselves.
This interpretation ignores extremely exothermic and endothermic reactions which are relatively unusual and relates the transition state to the intermediates which are usually the most unstable.
Structure of transition states
SN1 reactions
Hammond's postulate can be used to examine the structure of the transition states of an SN1 reaction. In particular, the dissociation of the leaving group is the first transition state in an SN1 reaction. The stabilities of the carbocations formed by this dissociation are known to follow the trend tertiary > secondary > primary > methyl.
Since the tertiary carbocation is relatively stable and therefore close in energy to the R-X reactant, the tertiary transition state will have a structure fairly similar to the R-X reactant. In terms of the graph of reaction coordinate versus energy, this is shown by the fact that the tertiary transition state is further to the left than the other transition states. In contrast, the energy of a methyl carbocation is very high, so the structure of that transition state is more similar to the intermediate carbocation than to the R-X reactant. Accordingly, the methyl transition state lies very far to the right.
SN2 reactions
Substitution nucleophilic bimolecular (SN2) reactions are concerted reactions in which both the nucleophile and the substrate are involved in the rate-limiting step. Since this reaction is concerted, it occurs in a single step in which bonds are broken while new bonds are formed. Therefore, to interpret this reaction, it is important to look at the transition state, which resembles the concerted rate-limiting step: the nucleophile forms a new bond to the carbon while the bond to the halide leaving group (L) is broken.
E1 reactions
An E1 reaction consists of a unimolecular elimination, where the rate determining step of the mechanism depends on the removal of a single molecular species. This is a two-step mechanism. The more stable the carbocation intermediate is, the faster the reaction will proceed, favoring the products. Stabilization of the carbocation intermediate lowers the activation energy. The reactivity order is (CH3)3C- > (CH3)2CH- > CH3CH2- > CH3-.
Furthermore, studies describe a typical kinetic resolution process that starts out with two enantiomers that are energetically equivalent and, in the end, forms two energy-inequivalent intermediates, referred to as diastereomers. According to Hammond's postulate, the more stable diastereomer is formed faster.
E2 reactions
Elimination bimolecular (E2) reactions are one-step, concerted reactions in which both base and substrate participate in the rate-limiting step. In an E2 mechanism, a base takes a proton near the leaving group, forcing the electrons down to make a double bond and forcing off the leaving group, all in one concerted step. The rate law depends on the first-order concentration of two reactants, making it a second-order (bimolecular) elimination reaction. Factors that affect the rate-determining step are stereochemistry, leaving groups, and base strength.
A theory for E2 reactions proposed by Joseph Bunnett suggests that the lowest pass through the energy barrier between reactants and products is attained by an adjustment between the degrees of Cβ-H and Cα-X rupture at the transition state: the more easily broken bond breaks to a large extent, while the bond requiring more energy breaks only slightly. This conclusion appears to contradict the Hammond postulate, which holds that in the transition state of a bond-breaking step there is little breaking when the bond is easily broken and much breaking when it is difficult to break. Despite these differences, the two postulates are not in conflict, since they are concerned with different sorts of processes. Hammond focuses on reaction steps in which one bond is made or broken, or the breaking of two or more bonds occurs simultaneously; the E2 transition-state theory concerns a process in which bond formation and breaking are not simultaneous.
Kinetics and the Bell-Evans-Polanyi principle
Technically, Hammond's postulate only describes the geometric structure of a chemical reaction. However, it indirectly gives information about the rate, kinetics, and activation energy of reactions. Hence, it provides a theoretical basis for understanding the Bell–Evans–Polanyi principle, which describes the experimental observation that the enthalpy change and rate of similar reactions are usually correlated.
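As a hedged aside, the Bell–Evans–Polanyi principle is commonly written as a linear relation between activation energy and reaction enthalpy; the constants E0 and α are specific to a reaction family and are not given in this article:

```latex
E_{a} = E_{0} + \alpha\,\Delta H, \qquad 0 \le \alpha \le 1
```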
The relationship between Hammond's postulate and the BEP principle can be understood by considering an SN1 reaction. Although two transition states occur during an SN1 reaction (dissociation of the leaving group and then attack by the nucleophile), the dissociation of the leaving group is almost always the rate-determining step. Hence, the activation energy, and therefore the rate of the reaction, will depend only upon the dissociation step.
First, consider the reaction at secondary and tertiary carbons. As the BEP principle notes, experimentally SN1 reactions at tertiary carbons are faster than at secondary carbons. Therefore, by definition, the transition state for tertiary reactions will be at a lower energy than for secondary reactions. However, the BEP principle cannot justify why the energy is lower.
Using Hammond's postulate, the lower energy of the tertiary transition state means that its structure is relatively closer to its reactants R(tertiary)-X than to the carbocation "product" when compared to the secondary case. Thus, the tertiary transition state will be more geometrically similar to the R(tertiary)-X reactants than the secondary transition state is to its R(secondary)-X reactants. Hence, if the tertiary transition state is close in structure to the (low energy) reactants, then it will also be lower in energy because structure determines energy. Likewise, if the secondary transition state is more similar to the (high energy) carbocation "product," then it will be higher in energy.
Applying the postulate
Hammond's postulate is useful for understanding the relationship between the rate of a reaction and the stability of the products. While the rate of a reaction depends only on the activation energy (often represented in organic chemistry as ΔG‡, "delta G double dagger"), the final ratios of products at chemical equilibrium depend only on the standard free-energy change ΔG° ("delta G"). The ratio of the final products at equilibrium corresponds directly with the stability of those products.
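These two dependencies can be made concrete with two standard relations (textbook forms, not taken from the original article): transition-state theory gives the rate constant from the activation free energy, while the equilibrium constant follows from the standard free-energy change:

```latex
k = \frac{k_{B}T}{h}\,e^{-\Delta G^{\ddagger}/RT}, \qquad K_{\mathrm{eq}} = e^{-\Delta G^{\circ}/RT}
```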
Hammond's postulate connects the rate of a reaction process with the structural features of those states that form part of it, by saying that the molecular reorganizations have to be small in those steps that involve two states that are very close in energy. This gave birth to the structural comparison between the starting materials, products, and the possible "stable intermediates" that led to the understanding that the most stable product is not always the one that is favored in a reaction process.
Critical acclaim and question
Hammond's postulate is especially important when looking at the rate-limiting step of a reaction. However, one must be cautious when examining a multistep reaction or one with the possibility of rearrangements during an intermediate stage. In some cases, the final products appear in skewed ratios favoring a more unstable product (called the kinetic product) over the more stable product (the thermodynamic product). In this case one must examine the rate-limiting step and the intermediates. Often, the rate-limiting step is the initial formation of an unstable species such as a carbocation. Then, once the carbocation is formed, subsequent rearrangements can occur. In these kinds of reactions, especially when run at lower temperatures, the reactants simply react before the rearrangements necessary to form a more stable intermediate have time to occur. At higher temperatures, when microscopic reversibility is easier, the more stable thermodynamic product is favored because these intermediates have time to rearrange. Whether run at high or low temperature, the mixture of kinetic and thermodynamic products eventually reaches the same ratio, in favor of the more stable thermodynamic product, when given time to equilibrate.
Personal
Hammond married Marian Reese in 1945 and had five children with her. The couple divorced in 1975, and he remarried soon after, to Eve Menger. He had two children with Eve.
Awards and honors
Norris Award (1968)
Priestley Medal (1976)
Golden Plate Award of the American Academy of Achievement (1976)
National Medal of Science (1994)
Glenn T. Seaborg Medal (1994)
Othmer Gold Medal (2003)
See also
Bema Hapothle
Curtin-Hammett principle
Microscopic reversibility
Bell-Evans-Polanyi principle
References
Further reading
External links
Photographs of George S. Hammond from the UC Santa Cruz Library's Digital Collections
1921 births
2005 deaths
National Medal of Science laureates
20th-century American chemists
Bates College alumni
Harvard University alumni
Chemical kinetics
Physical organic chemistry
University of California, Los Angeles alumni
Iowa State University faculty
California Institute of Technology faculty
University of California, Santa Cruz faculty
American expatriates in the United Kingdom
American expatriates in Switzerland | George S. Hammond | [
"Chemistry"
] | 4,241 | [
"Chemical reaction engineering",
"Chemical kinetics",
"Physical organic chemistry"
] |
3,475,308 | https://en.wikipedia.org/wiki/Nanopillar | Nanopillars are an emerging technology within the field of nanostructures. Nanopillars are pillar-shaped nanostructures approximately 10 nanometers in diameter that can be grouped together in lattice-like arrays. They are a type of metamaterial, which means that nanopillars get their attributes from being grouped into artificially designed structures rather than from their natural properties. Nanopillars set themselves apart from other nanostructures by their unique shape: each nanopillar has a pillar shape at the bottom and a tapered, pointy end on top. This shape, in combination with nanopillars' ability to be grouped together, exhibits many useful properties. Nanopillars have many applications, including efficient solar panels, high-resolution analysis, and antibacterial surfaces.
Applications
Solar panels
Due to their tapered ends, nanopillars are very efficient at capturing light. Solar collector surfaces coated with nanopillars are three times as efficient as nanowire solar cells. Less material is needed to build a solar cell out of nanopillars compared to regular semiconductive materials. They also hold up well during the manufacturing process of solar panels; this durability allows manufacturers to use cheaper materials and less expensive methods to produce solar panels. Researchers are looking into putting dopants into the bottom of the nanopillars to increase the amount of time photons bounce around the pillars, and thus the amount of light captured. As well as capturing light more efficiently, using nanopillars in solar panels allows the panels to be flexible. The flexibility gives manufacturers more options for how their solar panels can be shaped and reduces costs in terms of how delicately the panels have to be handled. Although nanopillars are more efficient and cheaper than standard materials, scientists have not yet been able to mass-produce them, which is a significant drawback to using nanopillars as part of the manufacturing process.
Antibacterial surfaces
Nanopillars also have functions outside of electronics and can imitate nature's defenses. Cicadas' wings are covered in tiny, nanopillar-shaped rods. When a bacterium rests on a cicada's wing, its cell membrane sticks to the nanopillars and the crevices between them, rupturing it. Since the rods on cicadas are about the same size and shape as artificial nanopillars, it is possible for humans to copy this defense. A surface covered with nanopillars would immediately kill off soft-membrane bacteria; bacteria with more rigid membranes are less likely to rupture. If mass-produced and installed widely, nanopillars could reduce much of the risk of transmitting diseases through touching infected surfaces.
Antibacterial mechanism
There are several models proposed to explain the antibacterial mechanism of nanopillars. According to the stretching and mechano-inducing model, for relatively uniform nanotopographies like the nanopillars found on cicada wings, bacteria die due to the rupturing of the bacterial cell wall suspended between two adjacent nanopillars, as opposed to a puncturing mechanism. Nanopillar features such as height, density, and sharpness were found to affect the overall antibacterial properties, although the relative importance of these features is difficult to establish due to several conflicting results in the literature. Alternative proposed antibacterial mechanisms include the potential effects of shear force, negative physiological responses of bacteria, and intrinsic pressure effects from the interaction between bacterial surface proteins and nanopillars.
High resolution molecular analysis
Another use of nanopillars is observing cells. Nanopillars capture light so well that when light hits them, the glow they emit dies down within around 150 nanometers. Because this distance is less than the wavelength of light, it allows researchers to observe small objects without the interference of background light. This is especially useful in cellular analysis. The cells group around a nanopillar because of its small size, recognizing it as an organelle. The nanopillars simply hold the cells in place while the cells are being observed.
Diamond-based quantum sensing
Nanopillars are used in quantum technologies to enhance the photon outcoupling efficiency of fluorescent defects. Nanopillars are especially effective in the context of color centers hosted in diamond. Due to the high refractive index of diamond, most of the photons originating from the fluorescence of, e.g., nitrogen-vacancy (NV) centers are lost to total internal reflection. Nanopillars can enhance the outcoupling efficiency and the directionality of the color-center emission. This allows significant boosts in sensitivity for NV quantum sensing, both in the context of nanoscale nuclear magnetic resonance and quantum magnetometry (e.g., in the form of scanning probe microscopy). Zhu et al. have shown that it is crucial to include an appropriate tapering of the nanopillars to maximize collection efficiency.
History
In 2006, researchers at the University of Nebraska-Lincoln and the Lawrence Livermore National Laboratory developed a cheaper and more efficient way to create nanopillars. They used a combination of nanosphere lithography (a way of organizing the lattice) and reactive ion etching (molding the nanopillars to the right shape) to make large groups of silicon pillars with diameters of less than 500 nm. Then, in 2010, researchers developed a way to manufacture nanopillars with tapered ends. The former design, a pillar with a flat, blunt top, reflected much of the light coming onto the pillars. The tapered tops allow light to enter the forest of nanopillars, and the wider bottom absorbs almost all of the light that hits it. This design captures about 99% of the light, whereas nanorods of uniform thickness captured only 85%. After the introduction of tapered ends, researchers started to find many more applications for nanopillars.
Manufacturing process
Constructing nanopillars is a simple but lengthy procedure that can take hours. The process to create nanopillars starts with anodizing a 2.5 mm thick aluminum foil mold. Anodizing the foil creates pores in the foil a micrometer deep and 60 nanometers wide. The next step is to treat the foil with phosphoric acid which expands the pores to 130 nanometers. The foil is anodized once more making its pores a micrometer deeper. Lastly, a small amount of gold is added to the pores to catalyze the reaction for the growth of the semiconductor material. When the aluminum is scraped away there is a forest of nanopillars left inside a casing of aluminum oxide. Furthermore, pillar and tube structures can also be fabricated by the top-down approach of the combination of deep UV (DUV) lithography and atomic layer deposition (ALD).
References
Nanomaterials | Nanopillar | [
"Materials_science"
] | 1,461 | [
"Nanotechnology",
"Nanomaterials"
] |
3,475,321 | https://en.wikipedia.org/wiki/Motronic | Motronic is the trade name given to a range of digital engine control units developed by Robert Bosch GmbH (commonly known as Bosch) which combined control of fuel injection and ignition in a single unit. By controlling both major systems in a single unit, many aspects of the engine's characteristics (such as power, fuel economy, drivability, and emissions) can be improved.
Motronic 1.x
Motronic M1.x is powered by various i8051 derivatives made by Siemens, usually SAB80C515 or SAB80C535. Code/data is stored in DIL or PLCC EPROM and ranges from 32k to 128k.
1.0
Often known as "Motronic basic", Motronic ML1.x was one of the first digital engine-management systems developed by Bosch. These early Motronic systems integrated the spark timing element with then-existing Jetronic fuel injection technology. It was originally developed and first used in the BMW 7 Series, before being implemented on several Volvo and Porsche engines throughout the 1980s.
The components of the Motronic ML1.x systems for the most part remained unchanged during production, although there are some differences in certain situations. The engine control module (ECM) receives information regarding engine speed, crankshaft angle, coolant temperature and throttle position. An air flow meter also measures the volume of air entering the induction system.
If the engine is naturally aspirated, an air temperature sensor is located in the air flow meter to work out the air mass. However, if the engine is turbocharged, an additional charge air temperature sensor is used to monitor the temperature of the inducted air after it has passed through the turbocharger and intercooler, in order to accurately and dynamically calculate the overall air mass.
Main system characteristics
Fuel delivery, ignition timing, and dwell angle incorporated into the same control unit.
Crank position and engine speed is determined by a pair of sensors reading from the flywheel.
Separate constant idle speed system monitors and regulates base idle speed settings.
5th injector is used to provide extra fuel enrichment during different cold-start conditions. (in some configurations)
Depending on application and version, an oxygen sensor may be fitted (the system was originally designed for leaded fuel).
1.1
Motronic 1.1 was used by BMW from 1987 on engines such as the M20. This version was also used by Volvo from 1982 to 1989 on the turbocharged B23ET, B230ET and B200ET engines.
The systems have the option for a lambda sensor, enabling their use with catalytic converter-equipped vehicles. This feedback system allows the system to analyse exhaust emissions so that fuel and spark can be continually optimised to minimise emissions. Also present is adaptive circuitry, which adjusts for changes in an engine's characteristics over time. Some PSA engines also include a knock sensor for ignition timing adjustment, perhaps this was achieved using an external Knock Control Regulator.
The Motronic units have 2 injection outputs, and the injectors are arranged in 2 "banks" which fire once every two engine revolutions. In an example 4-cylinder engine, one output controls the injectors for cylinders 1 and 3, and the other controls 2 and 4. The system uses a "cylinder ID" sensor mounted to the camshaft to detect which cylinders are approaching the top of their stroke, therefore which injector bank should be fired. During start-up (below 600 rpm), or if there is no signal from the cylinder ID sensor, all injectors are fired simultaneously once per engine revolution. In BMW vehicles, this Motronic version did not have a cylinder ID and as a result, both banks of injectors fired at once.
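The bank-firing strategy just described can be sketched as follows; everything here (names, the boolean inputs) is an illustrative reconstruction rather than Bosch's actual firmware logic, with only the two-bank layout and the 600 rpm threshold taken from the text.

```python
BANK_A = (1, 3)  # injector output 1: cylinders 1 and 3 (example 4-cylinder)
BANK_B = (2, 4)  # injector output 2: cylinders 2 and 4

def injectors_to_fire(rpm, cylinder_id_ok, bank_a_near_tdc):
    """Return which injectors fire on this engine revolution."""
    if rpm < 600 or not cylinder_id_ok:
        # Start-up, or no usable cylinder ID signal: fire all injectors
        # simultaneously once per revolution.
        return BANK_A + BANK_B
    # Normal running: fire the bank whose cylinders approach TDC, so each
    # bank fires once every two engine revolutions.
    return BANK_A if bank_a_near_tdc else BANK_B
```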
1.2
Motronic 1.2 is the same as 1.1, but uses a hot-film MAF in place of the flapper-door style AFM. This version was used by BMW on the S38B36 engine in the E34 M5 and on the M70B50 engine in the 750il from 1988 until 1990.
1.3
Motronic 1.1 was superseded in 1988 by the Motronic 1.3 system, which was also used by PSA on some XU9J-series engines (which previously used Motronic 4.1) and by BMW.
The Motronic 1.1 and 1.3 systems are largely similar, the main improvement being the increased diagnostic capabilities of Motronic 1.3. The 1.3 ECM can store many more detailed fault codes than 1.1, and has a permanent 12-volt feed from the vehicle's battery which allows it to log intermittent faults in memory across several trips. Motronic 1.1 can only advise of a few currently-occurring faults.
1.5
This system was used on some General Motors engines (C20NE, 20NE, C20SE, 20SE, 20SEH, 20SER, C20NEF, C20NEJ, C24NE, C26NE, C30LE, C30NE, C30SE, C30SEJ, C30XEI...). The system is very reliable, and problems encountered are usually caused by poor contact at the plug/socket connections that link the various system sensors to the electronic control unit (ECU). It is the predecessor of the ME Motronic. Also used in the Opel C16SEI engine.
1.5.2
Used from 1991 in the Opel Astra F with the C20NE engine. The major change was the use of a MAF instead of the AFM of Motronic 1.5.
1.5.4
Used from 1994 in the Opel Omega B with the X20SE engine (a modified successor of the C20NE engine). The major changes relative to Motronic 1.5.2 were the use of a DIS ignition system, a knock sensor, and an EGR valve. Also used with the Opel X22XE engine.
M1.5.5
Used in Fiat/Alfa/Lancia and Opel vehicles.
1.7
The key feature of Motronic 1.7 is the elimination of the ignition distributor; instead, each cylinder has its own electronically triggered ignition coil. The Motronic 1.7 family has versions 1.7, 1.7.2, and 1.7.3, all of them used on M42/M43 engines in the BMW 3 Series (E36) up to 1998 and the BMW 5 Series (E34) up to 1995. The 12-cylinder BMW M70 had the Motronic M1.7 and two distributors.
1.8
This system was used by Volvo on the B6304 engine used in the Volvo 960.
Motronic 2.x
Motronic M2.x is powered by various i8051 derivatives made by Siemens, usually SAB80C515 or SAB80C535.
2.1
The ML 2.1 system integrates advanced engine management with two knock sensors, provision for adaptive fuel and timing adjustment, purge canister control, precise sequential fuel control, and diagnostics (pre-OBD-I). Fuel enrichment during cold start is achieved by altering the timing of the main injectors based on engine temperature. The idle speed is also fully controlled by the digital Motronic unit, including fast idle during warm-up. Updated variants ML 2.10.1 through 2.5 add mass air flow (MAF) sensor logic and direct-fire ignition coils per cylinder. Motronic 2.1 is used in the Porsche four-cylinder 16V 944S/S2/968 and the six-cylinder boxer Carrera 964 and 993, as well as Opel/Vauxhall, Fiat, and Alfa Romeo engines.
2.3/2.3.2
The M2.3.2 system was made for Audi's turbo 20V 5-cylinder engines mainly, but a variant was also used on the Audi 32V 3.6L V8 and a few Audi 32V 4.2 V8 engines. The turbo 5 cylinder version was the first time knock and boost control had been introduced in one ECU, though the ECU was really two computers in one package. One side of the ECU controlled the timing and fueling while the other side controlled the boost and knock control. Each side has its own Siemens SAB80C535 processor and its own EPROM for storing operating data. What made this ECU special was the use of two crank sensors and one cam sensor. The ECU used one crank sensor to count the teeth on the starter ring for its RPM signal, and the other read a pin on the back of the flywheel for TDC reference. This ECU was first seen when the 20V turbo 5-cylinder engine (RR Code) was installed into the Audi Quattro. It was then used in the Audi 200 20V turbo until 1991 when the Audi S4 was introduced and the ECU received several upgrades, including migration from a distributor-based ignition to coil on plug sequential ignition and an added overboost function. This ECU ended in 1997 when the last Audi S6 rolled off the assembly line. This ECU was also used in the legendary Audi RS2 Avant.
The V8 version of the ECU was only single processor based while retaining all the same features of the turbo 5-cylinder ECU less the boost control. The 3.6 V8 version had a distributor-based ignition system and was upgraded around the same time to coil on plug as its 20V turbo counterpart in 1992–1993.
2.5
Introduced in 1988 in the Opel Kadett E GSi 16V (C20XE engine), adding sequential fuel injection and knock control.
2.7
Used in the late 1980s and early 1990s on various Ferraris and some Opel/Vauxhall models (C20LET engine).
2.8
Successor of the Motronic 2.5, used from 1992 on the Opel C20XE engine. The major change was the introduction of DIS ignition. It was also used on the Opel V6 engine C25XE (1993, Opel Calibra (also X25XE) and Opel Vectra A). It was modified as M2.8.1 (1994) for the X30XE and X25XE (Opel Omega B), and as M2.8.3 for the X25XE (Opel Vectra B) and X30XE (Opel Sintra).
Motronic 3.x
Motronic M3.x is powered by an i196 microcontroller, with code in flash memory ranging from 128 kB to 256 kB.
3.1
Compared with ML1.3, this system adds knock sensor control, purge canister control and start-up diagnostics. Motronic 3.1 is used in non-VANOS BMW M50B25 engines.
3.3
Motronic 3.3 is used on BMW M60B30/B40 V8s in the 5, 7 and 8 Series.
3.3.1
Motronic 3.3.1 is used in BMW M50B25 engines with VANOS.
3.7
Motronic 3.7 is used in the Alfa Romeo V6 engine in the later 12 valve 3.0L variants, replacing the L-Jetronic.
3.7.1
Motronic 3.7.1 is used in the Alfa Romeo V6 engine in the 24 valve variants.
3.8.1, 3.8.2, 3.8.3, 3.8.4
Motronic M3.8x is used in many Volkswagen/Audi/Skoda vehicles
Motronic 4.x
Motronic M4.x is powered by various i8051 derivatives made by Siemens.
40.0
??
40.1
??
4.1
The Motronic ML4.1 system was used on Opel/Vauxhall eight-valve engines from 1987 to 1990, as well as on Alfa Romeo and some PSA Peugeot Citroën XU9J-series engines.
Fuel enrichment during cold-start is achieved by altering the timing of the main injectors based on engine temperature, no "cold start" injector is required. The idle speed is also fully controlled by the Motronic unit, including fast-idle during warm-up (therefore no thermo-time switch is required).
The ML4.1 system did not include provision for a knock sensor for timing adjustment. The ignition timing and fuel map could be altered to take account of fuels with different octane ratings by connecting a calibrated resistor (taking the form of an "octane coding plug" in the vehicle's wiring loom) to one of the ECU pins, the resistance depending on the octane adjustment required. With no resistor attached the system would default to 98 octane.
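The octane coding plug behaves like a resistor-indexed calibration lookup, sketched below; the resistance values are invented for illustration, and only the 98-octane default comes from the text.

```python
# Hypothetical resistance (ohms) -> octane calibration table.
OCTANE_BY_CODING_RESISTOR = {470: 91, 1000: 95, 2200: 98}

def select_octane_map(resistance_ohms):
    # With no coding plug fitted (open circuit / unrecognized value),
    # the ECU defaults to the 98-octane ignition and fuel map.
    return OCTANE_BY_CODING_RESISTOR.get(resistance_ohms, 98)
```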
There is a single output for the injectors, resulting in all injectors firing simultaneously. The injectors are opened once for every revolution of the engine, injecting half the required fuel each time.
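The halving can be made concrete with a short sketch (illustrative only: the air mass, air-fuel ratio and injector flow rate below are hypothetical values, not ML4.1 calibration data). On a four-stroke engine, an injector that opens once per crankshaft revolution opens twice per complete 720° engine cycle, so each opening must deliver half the fuel for that cycle.

```python
def batch_pulse_width_ms(air_mass_per_cycle_mg: float,
                         afr: float = 14.7,
                         injector_flow_mg_per_ms: float = 5.0) -> float:
    """Pulse width per injector opening for a batch-fired 4-stroke engine.

    The injectors open once per crankshaft revolution, i.e. twice per
    720-degree engine cycle, so each opening delivers half the fuel.
    """
    fuel_per_cycle_mg = air_mass_per_cycle_mg / afr   # stoichiometric fuel need
    fuel_per_opening_mg = fuel_per_cycle_mg / 2.0     # half per revolution
    return fuel_per_opening_mg / injector_flow_mg_per_ms

# Example: 500 mg of air per cylinder per cycle (a moderate-load guess)
print(f"{batch_pulse_width_ms(500.0):.2f} ms per opening")  # -> 3.40 ms
```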
Motronic ML4.1 was used in the Opel engines: 20NE, 20SE, 20SEH, 20SER, C20NE, C30LE, C30NE.
4.3
The Motronic 4.3 was used by Volvo for their five-cylinder turbocharged 850 models from 1993 until 1996.
It was introduced with the launch of the 850 Turbo (also called the 850 T-5 and 850 T-5 Turbo) in October 1993 for model year 1994. Features included OBD-I diagnostics and dual knock sensors, among others. For the 1996 model year, OBD-II diagnostics were introduced on some cars as M4.3 began to be phased out. The last M4.3-equipped cars were made for model year 1997.
4.4
The Motronic 4.4 was used by Volvo from 1996 until 1998.
The M4.4 was based on its predecessor and featured a small number of improvements. Memory capacity was doubled and a few new functions were introduced, such as immobilizer compatibility. OBD-II was standard on all cars fitted with this system, although the necessary protocols were not integrated for all markets.
The system was used for the five- and six-cylinder modular engined cars and was used on turbocharged and naturally aspirated models. Introduced in 1996 for 1997 model year it was first installed on some of the last 850 models like the 2.5 20V and AWD. A coil on plug variant existed for the six cylinder Volvo 960/S90/V90. After the 850 was replaced by the Volvo V70, Volvo S70 and Volvo C70 the system was used until the end of model year 1998.
4.6
The Motronic 4.6 was used in the Nissan Micra K11 from 2000 until 2003.
Motronic 5.X
5.2
Motronic 5.2 was used in the BMW M44B19 engine. Compared to 1.7, Motronic 5.2 has OBD-II capability and uses a hot-wire MAF sensor in place of the flapper-door AFM.
5.2.1
Motronic 5.2.1 was used in Land Rover Discovery Series II and P38 Range Rover vehicles built from late 1999, and only in those equipped with V8 gasoline engines. This variant of the engine management system was adapted for off-road use: unlike the Motronic system in BMW sedans, which uses a chassis accelerometer to differentiate between misfires and rough roads, the Land Rover version used a signal from the ABS control unit to detect rough-road conditions. This version of the system was integrated with the body control module and the anti-theft system.
Short list of ML-Motronic
ML-Motronic appeared in 1979, when BMW equipped the 732i (E23) with the Bosch ML-Motronic. This was essentially an L-Jetronic (now implemented digitally) combined with digital ignition control in the same housing. Data was stored in EPROM.
ML-Motronic and M-Motronic must be kept apart: ML3.2 and M3.2, for example, are two different systems.
Short list of M-Motronic
While ML-Motronic continued and new ML-Motronic versions appeared, Bosch also launched the M-Motronic in many versions. Older versions were improved and further developed even as new ones appeared, so the first number after the "M" does not indicate how new a version is. For example:
M1.5, introduced by Opel in 1988
M1.5.2, 1991
M1.5.4, 1994
M1.5.5, 1997
The M2.3 and M2.3.2 (used by Audi/VW) appeared long before 1997, so the M1.5.5 is much more developed than the M2.3.2.
MP MA ME MED Motronic
MP-Motronic - load is calculated from manifold pressure (a speed-density sketch follows this list)
MA-Motronic - load is calculated from the throttle-body angle
ME-Motronic - drive-by-wire (electronic throttle control) is integrated into the Motronic system
MED-Motronic - direct fuel injection is integrated into the Motronic system
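Since these names describe only how engine load is measured, a minimal speed-density sketch may help illustrate what "load is calculated from manifold pressure" means for an MP-type system. This is a generic textbook model, not Bosch's actual algorithm; the cylinder volume and volumetric efficiency are hypothetical placeholders.

```python
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def cylinder_air_mass_mg(map_kpa: float, intake_temp_k: float,
                         cyl_volume_l: float = 0.5,
                         vol_eff: float = 0.85) -> float:
    """Estimate the trapped air mass per cylinder from manifold pressure.

    Ideal-gas law with a volumetric-efficiency correction:
    m = p * V * VE / (R * T).
    """
    p_pa = map_kpa * 1000.0
    v_m3 = cyl_volume_l / 1000.0
    mass_kg = p_pa * v_m3 * vol_eff / (R_AIR * intake_temp_k)
    return mass_kg * 1e6  # kg -> mg

# Example: 80 kPa manifold pressure, 40 degrees C intake air temperature
print(f"{cylinder_air_mass_mg(80.0, 313.15):.0f} mg of air per cylinder")
```

The estimated air mass is then the "load" value from which fueling and ignition maps are indexed.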
See also
Digifant Engine Management system
Jetronic
References
External links
Bosch.com official website
Motronic 1.1/1.3 information
Fuel injection systems
Embedded systems
Power control
Engine technology
Automotive technology tradenames
Bosch (company) | Motronic | [
"Physics",
"Technology",
"Engineering"
] | 3,662 | [
"Physical quantities",
"Engines",
"Computer engineering",
"Embedded systems",
"Computer systems",
"Engine technology",
"Power (physics)",
"Computer science",
"Power control"
] |
3,476,702 | https://en.wikipedia.org/wiki/Bragg%20peak | The Bragg peak is a pronounced peak on the Bragg curve which plots the energy loss of ionizing radiation during its travel through matter. For protons, α-particles, and other ion beams, the peak occurs immediately before the particles come to rest. It is named after William Henry Bragg, who discovered it in 1903 using alpha particles from radium, and wrote the first empirical formula for ionization energy loss per distance with Richard Kleeman.
When a fast charged particle moves through matter, it ionizes atoms of the material and deposits a dose along its path. A peak occurs because the interaction cross section increases as the charged particle's energy decreases. Energy lost by charged particles is inversely proportional to the square of their velocity, which explains the peak occurring just before the particle comes to a complete stop. In the upper figure, it is the peak for alpha particles of 5.49 MeV moving through air. In the lower figure, it is the narrow peak of the "native" proton beam curve which is produced by a particle accelerator of 250 MeV. The figure also shows the absorption of a beam of energetic photons (X-rays) which is entirely different in nature; the curve is mainly exponential.
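As a rough illustration of why the dose peaks at the end of the range, the sketch below integrates the toy stopping model dE/dx = k/E (the non-relativistic 1/v² dependence, with the logarithmic term of the full Bethe formula ignored). Units are arbitrary; this is a pedagogical model, not a dosimetry calculation.

```python
import numpy as np

def bragg_curve(e0: float, k: float = 1.0, steps: int = 10000):
    """Depth-dose profile for the toy stopping model dE/dx = k/E.

    Closed form: E(x) = sqrt(E0^2 - 2*k*x), so the range is R = E0^2/(2k)
    and the dose per unit depth, k/E(x), blows up as x approaches R:
    that divergence is the (idealized) Bragg peak.
    """
    r = e0 ** 2 / (2.0 * k)                 # particle range in this model
    x = np.linspace(0.0, 0.999 * r, steps)  # stop just short of R
    dose = k / np.sqrt(e0 ** 2 - 2.0 * k * x)
    return x, dose

x, dose = bragg_curve(10.0)
print(f"entrance dose: {dose[0]:.2f}, near end of range: {dose[-1]:.2f}")
# entrance dose: 0.10, near end of range: 3.16
```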
This characteristic of proton beams was first recommended for use in cancer therapy by Robert R. Wilson in his 1946 article, Radiological Use of Fast Protons. Wilson studied how the depth of proton beam penetration could be controlled by the energy of the protons. This phenomenon is exploited in particle therapy of cancer, specifically in proton therapy, to concentrate the effect of light ion beams on the tumor being treated while minimizing the effect on the surrounding healthy tissue.
The blue curve in the figure ("modified proton beam") shows how the originally monoenergetic proton beam with the sharp peak is widened by increasing the range of energies, so that a larger tumor volume can be treated. The plateau created by modifying the proton beam is referred to as the spread out Bragg Peak, or SOBP, which allows the treatment to conform to not only larger tumors, but to more specific 3D shapes. This can be achieved by using variable thickness attenuators like spinning wedges. Momentum cooling in cyclotron-based proton therapy facilities enables a sharper distal fall-off of the Bragg peak and the attainment of high dose rates.
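A spread-out Bragg peak can be illustrated by summing range-shifted copies of a toy pristine curve and solving a small least-squares problem for the beamlet weights that flatten the dose over the target depths. The curve shape and the 8–10 cm target region below are arbitrary choices for illustration, not clinical data.

```python
import numpy as np

def pristine(depth: np.ndarray, range_cm: float) -> np.ndarray:
    """Toy pristine Bragg curve: a gently rising plateau plus a sharp distal peak."""
    plateau = 0.3 + 0.2 * depth / range_cm
    peak = np.exp(-((depth - range_cm) ** 2) / (2 * 0.15 ** 2))
    return np.where(depth <= range_cm + 0.2, plateau + peak, 0.0)

depth = np.linspace(0.0, 12.0, 1200)
ranges = np.linspace(8.0, 10.0, 11)   # beamlet ranges spanning the target, cm
curves = np.stack([pristine(depth, r) for r in ranges])

# Solve for beamlet weights that make the 8-10 cm region approximately uniform.
target = (depth >= 8.0) & (depth <= 10.0)
w, *_ = np.linalg.lstsq(curves[:, target].T, np.ones(target.sum()), rcond=None)
w = np.clip(w, 0.0, None)  # negative weights are unphysical; clip for illustration
sobp = w @ curves          # the weighted sum is the spread-out Bragg peak

flat = sobp[target]
print(f"SOBP plateau ripple: {flat.std() / flat.mean():.1%}")
```

In such constructions the deepest beamlet typically receives the largest weight, since every shallower peak also deposits dose on top of the distal ones.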
See also
Stopping power (particle radiation)
Bremsstrahlung
Linear energy transfer
Proton therapy
References
External links
Ionizing radiation
Experimental particle physics
Radiation therapy | Bragg peak | [
"Physics"
] | 505 | [
"Ionizing radiation",
"Physical phenomena",
"Radiation",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
3,476,955 | https://en.wikipedia.org/wiki/Shear%20%28geology%29 | In geology, shear is the response of a rock to deformation usually by compressive stress and forms particular textures. Shear can be homogeneous or non-homogeneous, and may be pure shear or simple shear. Study of geological shear is related to the study of structural geology, rock microstructure or rock texture and fault mechanics.
The process of shearing occurs within brittle, brittle-ductile, and ductile rocks. Within purely brittle rocks, compressive stress results in fracturing and simple faulting.
Rocks
Rocks typical of shear zones include mylonite, cataclasite, S-tectonite and L-tectonite, pseudotachylite, certain breccias and highly foliated versions of the wall rocks.
Shear zone
A shear zone is a tabular to sheetlike, planar or curviplanar zone composed of rocks that are more highly strained than rocks adjacent to the zone. Typically this is a type of fault, but it may be difficult to place a distinct fault plane into the shear zone. Shear zones may form zones of much more intense foliation, deformation, and folding. En echelon veins or fractures may be observed within shear zones.
Many shear zones host ore deposits as they are a focus for hydrothermal flow through orogenic belts. They may often show some form of retrograde metamorphism from a peak metamorphic assemblage and are commonly metasomatised.
Shear zones can be anything from a few centimetres to several kilometres wide. Often, owing to their structural control and their presence at the edges of tectonic blocks, shear zones are mappable units and form important discontinuities separating terranes. As such, many large and long shear zones are named, just as fault systems are.
When the horizontal displacement of this faulting can be measured in the tens or hundreds of kilometers of length, the fault is referred to as a megashear. Megashears often indicate the edges of ancient tectonic plates.
Mechanisms of shearing
The mechanisms of shearing depend on the pressure and temperature of the rock and on the rate of shear which the rock is subjected to. The response of the rock to these conditions determines how it accommodates the deformation.
Shear zones which occur in more brittle rheological conditions (cooler, less confining pressure) or at high rates of strain, tend to fail by brittle failure; breaking of minerals, which are ground up into a breccia with a milled texture.
Shear zones which occur under brittle-ductile conditions can accommodate much deformation by enacting a series of mechanisms which rely less on fracture of the rock and occur within the minerals and the mineral lattices themselves. Shear zones accommodate compressive stress by movement on foliation planes.
Shearing at ductile conditions may occur by fracturing of minerals and growth of sub-grain boundaries, as well as by lattice glide. This occurs particularly on platy minerals, especially micas.
Mylonites are essentially ductile shear zones.
Microstructures of shear zones
During the initiation of shearing, a penetrative planar foliation is first formed within the rock mass. This manifests as realignment of textural features, growth and realignment of micas and growth of new minerals.
The incipient shear foliation typically forms normal to the direction of principal shortening, and is diagnostic of the direction of shortening. In symmetric shortening, objects flatten on this shear foliation much the same way that a round ball of treacle flattens with gravity.
Within asymmetric shear zones, the behavior of an object undergoing shortening is analogous to the ball of treacle being smeared as it flattens, generally into an ellipse. Within shear zones with pronounced displacements a shear foliation may form at a shallow angle to the gross plane of the shear zone. This foliation ideally manifests as a sinusoidal set of foliations formed at a shallow angle to the main shear foliation, and which curve into the main shear foliation. Such rocks are known as L-S tectonites.
If the rock mass begins to undergo large degrees of lateral movement, the strain ellipse lengthens into a cigar shaped volume. At this point shear foliations begin to break down into a rodding lineation or a stretch lineation. Such rocks are known as L-tectonites.
Ductile shear microstructures
Very distinctive textures form as a consequence of ductile shear. An important group of microstructures observed in ductile shear zones are S-planes, C-planes and C' planes.
S-planes or schistosité planes are generally defined by a planar fabric caused by the alignment of micas or platy minerals. They define the flattened long axis of the strain ellipse.
C-planes or cisaillement planes form parallel to the shear zone boundary. The angle between the C and S planes is always acute, and defines the shear sense. Generally, the lower the C-S angle, the greater the strain (a quantitative form of this rule is quoted after this list).
The C' planes, also known as shear bands and secondary shear fabrics, are commonly observed in strongly foliated mylonites especially phyllonites, and form at an angle of about 20 degrees to the S-plane.
The sense of shear shown by both S-C and S-C' structures matches that of the shear zone in which they are found.
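For reference, the qualitative rule that a lower C-S angle records greater strain matches the standard result for ideal simple shear, in which the S-foliation tracks the long axis of the finite strain ellipse. If θ′ is the angle between the S- and C-planes and γ is the shear strain, the textbook relation (quoted here as background; it is not derived in this article) is

tan 2θ′ = 2/γ

so θ′ starts at 45° for vanishingly small strain and decreases toward zero as γ grows.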
Other microstructures which can give sense of shear include:
sigmoidal veins
mica fish
rotated porphyroclasts
asymmetric boudins (Figure 1)
asymmetric folds
Transpression
Transpression regimes are formed during oblique collision of tectonic plates and during non-orthogonal subduction. Typically a mixture of oblique-slip thrust faults and strike-slip or transform faults is formed. Microstructural evidence of transpressional regimes can include rodding lineations, mylonites, augen-structured gneisses, mica fish, and so on.
A typical example of a transpression regime is the Alpine Fault zone of New Zealand, where the oblique subduction of the Pacific Plate under the Indo-Australian Plate is converted to oblique strike-slip movement. Here, the orogenic belt attains a trapezoidal shape dominated by oblique splay faults, steeply-dipping recumbent nappes and fault-bend folds.
The Alpine Schist of New Zealand is characterised by heavily crenulated and sheared phyllite. It is being uplifted at a rate of 8 to 10 mm per year, and the area is prone to large earthquakes with a south-side-up, oblique westward sense of movement.
Transtension
Transtension regimes are oblique tensional environments. Oblique, normal geologic fault and detachment faults in rift zones are the typical structural manifestations of transtension conditions. Microstructural evidence of transtension includes rodding or stretching lineations, stretched porphyroblasts, mylonites, etc.
See also
Convergent boundary
Crenulation
Fault (geology)
Foliation (geology)
Rock microstructure
Strain partitioning
Sense of shear indicators: dextral and sinistral
References
Diagrams and definitions of shear (Wayback Machine), by University of the West of England, Bristol. Archive copy incomplete, 12/31/2012.
Structural geology
Geological processes
Deformation (mechanics) | Shear (geology) | [
"Materials_science",
"Engineering"
] | 1,524 | [
"Deformation (mechanics)",
"Materials science"
] |
3,479,393 | https://en.wikipedia.org/wiki/Firefly%20%28website%29 | Firefly.com (1995–1999) was a community website featuring collaborative filtering.
History
The Firefly website was created by Firefly Network, Inc. (originally known as Agents, Inc.). The company was founded in March 1995 by a group of engineers from the MIT Media Lab and business people from Harvard Business School, including Pattie Maes (Media Lab professor), Upendra Shardanand, Nick Grouf, Max Metral, David Waxman and Yezdi Lashkari. At the Media Lab, under the supervision of Maes, some of the engineers had built a music recommendation system called HOMR (Helpful Online Music Recommendation Service; preceded by RINGO, an email-based system), which used collaborative filtering to help users navigate the music domain and find other artists and albums they might like. With Matt Bruck and Khinlei Myint-U, the team wrote a business plan, and Agents Inc took second place in the 1995 MIT 10K student business plan competition. Firefly's core technology was based on the work done on HOMR.
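For background, a minimal user-based collaborative filter of the kind pioneered by RINGO and HOMR might look like the sketch below. The ratings, user names and the choice of cosine similarity are hypothetical illustrations; Firefly's production algorithms were more sophisticated and proprietary.

```python
import math

ratings = {  # user -> {artist: rating on a 1-7 scale}; all data hypothetical
    "ann": {"Bowie": 7, "Eno": 6, "Kraftwerk": 5},
    "bob": {"Bowie": 6, "Eno": 7, "Devo": 4},
    "cat": {"Bowie": 2, "Devo": 7, "Kraftwerk": 1},
}

def similarity(u: str, v: str) -> float:
    """Cosine similarity over the artists both users have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][a] * ratings[v][a] for a in common)
    nu = math.sqrt(sum(ratings[u][a] ** 2 for a in common))
    nv = math.sqrt(sum(ratings[v][a] ** 2 for a in common))
    return dot / (nu * nv)

def predict(user: str, artist: str) -> float:
    """Similarity-weighted average of other users' ratings for an unseen artist."""
    pairs = [(similarity(user, v), r[artist])
             for v, r in ratings.items() if v != user and artist in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else 0.0

print(f"ann's predicted rating for Devo: {predict('ann', 'Devo'):.1f}")
```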
The Firefly website was launched in October 1995. It went through several iterations but remained a community throughout. It was initially created as a community for users to navigate and discover new musical artists and albums. Later it was changed to allow users to discover movies, websites, and communities as well.
Firefly technology was adopted by a number of well-known businesses, powering the recommendation engines of barnesandnoble.com, ZDNet, launch.com (later purchased by Yahoo) and MyYahoo.
Since Firefly was amassing large amounts of profile data from its users, privacy became a major concern for the company. It worked with the U.S. federal government to help define consumer privacy protection in the digital age, and was a key contributor to the Open Profiling Standard (OPS), a recommendation to the W3C made along with Netscape and VeriSign that fed into what eventually became known as P3P (the Platform for Privacy Preferences).
In April 1998, Microsoft purchased Firefly, presumably because of their innovations in privacy, and their long-term goal of creating a safe marketplace for consumers' profile data which the consumer controlled. The Firefly team at Microsoft was largely responsible for the first versions of Microsoft Passport.
Microsoft shut down the website in August 1999.
Homepages
The Firefly website had distinctive design and graphics. Early designs featured bright colors and a fun and eclectic look. Later redesigns reflected the company's push towards corporate customers and desire to de-emphasize the Firefly community website.
See also
Collaborative filtering
References
External links
Spanish Firefly - a Suck.com parody
HBS bulletin on Firefly
American social networking websites
Social information processing
Social software
Internet properties established in 1995
Defunct websites | Firefly (website) | [
"Technology"
] | 562 | [
"Mobile content",
"Social software"
] |
3,480,100 | https://en.wikipedia.org/wiki/Lamington%20Road | Lamington Road, officially Dr. Dadasaheb Bhadkamkar Marg, named after Lord Lamington, the Governor of Bombay between 1903 and 1907, is a busy thoroughfare near Grant Road station in South Mumbai. The official name of the road is rarely used. It is often called the "IT shop of Mumbai".
Lamington Road is famous for its wholesale and retail market in electronic goods. Shops on the street sell computer goods, electronic items, television equipment, and wireless equipment at rates much lower than the maximum retail price, as they have a high turnover. They sell not only the latest computer-related items but also outdated electronic parts for radios, such as transistors, capacitors, cables, sound cards, TV tuners and adaptors. Lamington Road is the third-largest grey market in India for electronic goods and peripherals, after Nehru Place in Delhi and Ritchie Street in Chennai.
It was listed as a notorious market by the USTR between 2009 and 2013 for selling counterfeit software, media, and goods.
Gallery
References
Streets in Mumbai
Culture of Mumbai
Electronics districts
Retail markets in Mumbai
Electronics industry in India
Notorious markets
High-technology business districts in India
Information technology places
Information technology in India | Lamington Road | [
"Technology"
] | 243 | [
"Information technology",
"Information technology places"
] |
3,481,221 | https://en.wikipedia.org/wiki/Denitrifying%20bacteria | Denitrifying bacteria are a diverse group of bacteria that encompass many different phyla. This group of bacteria, together with denitrifying fungi and archaea, is capable of performing denitrification as part of the nitrogen cycle. Denitrification is performed by a variety of denitrifying bacteria that are widely distributed in soils and sediments and that use oxidized nitrogen compounds such as nitrate and nitrite in the absence of oxygen as a terminal electron acceptor. They metabolize nitrogenous compounds using various enzymes, including nitrate reductase (NAR), nitrite reductase (NIR), nitric oxide reductase (NOR) and nitrous oxide reductase (NOS), turning nitrogen oxides back to nitrogen gas () or nitrous oxide ().
Diversity of denitrifying bacteria
Denitrifying bacteria show great diversity in biological traits. They have been identified in more than 50 genera and over 125 species, and are estimated to represent 10–15% of the bacterial population in water, soil and sediment.
Denitrifying bacteria include, for example, several species of Pseudomonas, Alcaligenes and Bacillus.
The majority of denitrifying bacteria are facultative aerobic heterotrophs that switch from aerobic respiration to denitrification when oxygen, their preferred terminal electron acceptor (TEA), runs out, forcing the organism to use nitrate as a TEA instead. Because the diversity of denitrifying bacteria is so large, the group thrives in a wide range of habitats, including extreme environments that are highly saline or high in temperature. Aerobic denitrifiers can conduct an aerobic respiratory process in which nitrate is converted gradually to N2 (NO3− → NO2− → NO → N2O → N2), using nitrate reductase (Nar or Nap), nitrite reductase (Nir), nitric oxide reductase (Nor), and nitrous oxide reductase (Nos). Phylogenetic analysis reveals that aerobic denitrifiers mainly belong to the α-, β- and γ-Proteobacteria.
Denitrification mechanism
Denitrifying bacteria use denitrification to generate ATP.
The most common denitrification process is outlined below, with the nitrogen oxides being converted back to gaseous nitrogen:
2 NO3− + 10 e− + 12 H+ → N2 + 6 H2O
The result is one molecule of nitrogen gas and six molecules of water. Denitrifying bacteria are part of the nitrogen cycle, returning nitrogen to the atmosphere. The reaction above is the overall half-reaction of denitrification; it can be further divided into half-reactions, each requiring a specific enzyme. The transformation from nitrate to nitrite is performed by nitrate reductase (Nar):
NO3− + 2 H+ + 2 e− → NO2− + H2O
Nitrite reductase (Nir) then converts nitrite into nitric oxide
2 NO2− + 4 H+ + 2 e− → 2 NO + 2 H2O
Nitric oxide reductase (Nor) then converts nitric oxide into nitrous oxide
2 NO + 2 H+ + 2 e− → N2O + H2O
Nitrous oxide reductase (Nos) terminates the reaction by converting nitrous oxide into dinitrogen
N2O + 2 H+ + 2 e− → N2 + H2O
It is important to note that any of the products produced at any step can be exchanged with the soil environment.
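The bookkeeping behind these equations can be checked mechanically: summing the four enzymatic half-reactions, with the nitrate reductase step taken twice (two nitrate ions are needed per N2), cancels all the intermediates and reproduces the overall reaction. A small sketch:

```python
from collections import Counter

# species -> net coefficient (negative = consumed, positive = produced)
nar = {"NO3-": -1, "H+": -2, "e-": -2, "NO2-": +1, "H2O": +1}
nir = {"NO2-": -2, "H+": -4, "e-": -2, "NO": +2, "H2O": +2}
nor = {"NO": -2, "H+": -2, "e-": -2, "N2O": +1, "H2O": +1}
nos = {"N2O": -1, "H+": -2, "e-": -2, "N2": +1, "H2O": +1}

total = Counter()
for step, times in ((nar, 2), (nir, 1), (nor, 1), (nos, 1)):
    for species, coeff in step.items():
        total[species] += times * coeff

# The intermediates NO2-, NO and N2O cancel; what remains is the overall reaction:
print({s: c for s, c in total.items() if c != 0})
# -> {'NO3-': -2, 'H+': -12, 'e-': -10, 'H2O': 6, 'N2': 1}
```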
Oxidation of methane and denitrification
Anaerobic oxidation of methane coupled to denitrification
Anaerobic denitrification coupled to methane oxidation was first observed in 2008, with the isolation of a bacterial strain capable of carrying out the coupled process on its own. This process uses the excess electrons from methane oxidation to reduce nitrate, effectively removing both fixed nitrogen and methane from aquatic systems, in habitats ranging from sediments to peat bogs to stratified water columns.
The process of anaerobic denitrification may contribute significantly to the global methane and nitrogen cycles, especially in light of the recent influx of both due to anthropogenic changes. Anthropogenic methane is a significant driver of climate change, being many times more potent than carbon dioxide as a greenhouse gas. Removing methane is widely considered beneficial to the environment, although the extent of the role denitrification plays in the global methane flux is not well understood. Anaerobic denitrification has been shown to be capable of removing the excess nitrate caused by fertilizer runoff, even in hypoxic conditions.
Additionally, microorganisms employing this type of metabolism may be used in bioremediation, as shown by a 2006 study of hydrocarbon contamination in the Antarctic, as well as a 2016 study that successfully increased denitrification rates by altering the environment housing the bacteria. Denitrifying bacteria are considered high-quality bioremediators because of their adaptability to a variety of environments and because they leave behind none of the toxic or undesirable by-products left by some other metabolisms.
Role of denitrifying bacteria as a methane sink
Denitrifying bacteria have been found to play a significant role in the oxidation of methane (CH4), in which methane is converted to CO2, water, and energy, in deep freshwater bodies. This is important because methane is the second most significant anthropogenic greenhouse gas, with a global warming potential about 25 times that of carbon dioxide, and freshwaters are a major contributor to global methane emissions.
A study conducted on Europe's Lake Constance found that anaerobic methane oxidation coupled to denitrification, also referred to as nitrate/nitrite-dependent anaerobic methane oxidation (n-damo), is a dominant sink of methane in deep lakes. For a long time, the mitigation of methane emissions was attributed only to aerobic methanotrophic bacteria. However, methane oxidation also takes place in the anoxic (oxygen-depleted) zones of freshwater bodies. In the case of Lake Constance, this is carried out by M. oxyfera-like bacteria, i.e., bacteria similar to Candidatus Methylomirabilis oxyfera, a species that acts as a denitrifying methanotroph.
The results from the study on Lake Constance found that nitrate was depleted in the water at the same depth as methane, which suggests that methane oxidation was coupled to denitrification. It could be inferred that it was M. oxyfera-like bacteria carrying out the methane oxidation because their abundance peaked at the same depth where the methane and nitrate profiles met. This n-damo process is significant because it aids in decreasing methane emissions from deep freshwater bodies and it aids in turning nitrates into nitrogen gas, reducing excess nitrates.
Denitrifying bacteria and the environment
Denitrification effects on limiting plant productivity and producing by-products
The process of denitrification can lower the fertility of soil as nitrogen, a growth-limiting factor, is removed from the soil and lost to the atmosphere. This loss of nitrogen to the atmosphere can eventually be regained via introduced nutrients, as part of the nitrogen cycle. Some nitrogen may also be fixed by species of nitrifying bacteria and the cyanobacteria. Another important environmental issue concerning denitrification is that the process tends to produce large amounts of by-products, such as nitric oxide (NO) and nitrous oxide (N2O). NO is an ozone-depleting species and N2O is a potent greenhouse gas which can contribute to global warming.
Denitrifying bacteria use in wastewater treatment
Denitrifying bacteria are an essential component of wastewater treatment. Wastewater often contains large amounts of nitrogen (in the form of ammonium or nitrate), which can damage human health and ecological processes if left untreated. Many physical, chemical, and biological methods have been used to remove the nitrogenous compounds and purify polluted waters. The process and methods vary, but generally involve converting ammonium to nitrate via nitrification, by ammonium-oxidizing bacteria (AOB, NH4+ → NO2–) and nitrite-oxidizing bacteria (NOB, NO2– → NO3–), and finally to nitrogen gas via denitrification. One example is ammonia-oxidizing bacteria, whose metabolism, in combination with other nitrogen-cycling activities such as nitrite oxidation and denitrification, removes nitrogen from wastewater in activated sludge. Since denitrifying bacteria are heterotrophic, an organic carbon source is supplied to the bacteria in an anoxic basin. With no oxygen available, denitrifying bacteria oxidize the carbon using nitrate as the terminal electron acceptor; the nitrate is converted to nitrogen gas, which bubbles up out of the wastewater.
See also
Nitrifying bacteria
Nitrogen Cycle
References
Bacteria
Nitrogen cycle
Soil biology
Fishkeeping
Aquariums | Denitrifying bacteria | [
"Chemistry",
"Biology"
] | 1,964 | [
"Prokaryotes",
"Nitrogen cycle",
"Soil biology",
"Bacteria",
"Microorganisms",
"Metabolism"
] |
758,604 | https://en.wikipedia.org/wiki/Electron-beam%20welding | Electron-beam welding (EBW) is a fusion welding process in which a beam of high-velocity electrons is applied to two materials to be joined. The workpieces melt and flow together as the kinetic energy of the electrons is transformed into heat upon impact. EBW is often performed under vacuum conditions to prevent dissipation of the electron beam.
History
Electron-beam welding was developed by the German physicist Karl-Heinz Steigerwald in 1949, who was at the time working on various electron-beam applications. Steigerwald conceived and developed the first practical electron-beam welding machine, which began operation in 1958. The American inventor James T. Russell has also been credited with designing and building the first electron-beam welder.
Physics
Electrons are elementary particles possessing a mass m = 9.1 × 10⁻³¹ kg and a negative electrical charge e = 1.6 × 10⁻¹⁹ C. They exist either bound to an atomic nucleus, as conduction electrons in the atomic lattice of metals, or as free electrons in vacuum.
Free electrons in vacuum can be accelerated, with their paths controlled by electric and magnetic fields. In this way beams of electrons carrying high kinetic energy can be formed. Upon collision with atoms in solids their kinetic energy transforms into heat. EBW provides excellent welding conditions because it involves:
Strong electric fields, which can accelerate electrons to high speed that carry high power, equal to the product of beam current and accelerating voltage. By increasing the beam current and the accelerating voltage, the beam power can be increased to practically any desired value.
Magnetic lenses can shape the beam into a narrow cone and focus it to a small diameter. This allows for a high power density on the surface to be welded. Values of power density in the crossover (focus) of the beam can be as high as 10⁴–10⁶ W/mm².
Penetration depths can be on the order of hundredths of a millimeter. This provides a high volumetric power density, which can reach values on the order of 10⁵–10⁷ W/mm³. The temperature in this volume can increase extremely rapidly, at rates of 10⁸–10¹⁰ K/s.
Beam effectiveness depends on many factors. The most important are the physical properties of the materials to be welded, especially the ease with which they melt or vaporize under low-pressure conditions. EBW can be so intense that material boils away, which must be taken into account. At lower values of surface power density (in the range of about 10³ W/mm²) the loss of material by evaporation is negligible for most metals, which is favorable for welding. At higher power densities, the material affected by the beam can evaporate quickly, and the process switches from welding to machining.
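To attach numbers to these orders of magnitude, the sketch below computes the relativistic electron speed for a given accelerating voltage and the surface power density of an example beam. The 150 kV, 10 mA and 0.2 mm spot-size values are illustrative choices, not figures from the text.

```python
import math

C = 2.998e8      # speed of light, m/s
E0_KEV = 511.0   # electron rest energy, keV

def electron_speed(voltage_kv: float) -> float:
    """Relativistic speed (m/s) of an electron accelerated through voltage_kv."""
    gamma = 1.0 + voltage_kv / E0_KEV
    return C * math.sqrt(1.0 - 1.0 / gamma ** 2)

def power_density(voltage_kv: float, current_ma: float, spot_mm: float) -> float:
    """Beam power divided by the focal-spot area, in W/mm^2."""
    power_w = voltage_kv * current_ma           # kV * mA = W
    area_mm2 = math.pi * (spot_mm / 2.0) ** 2
    return power_w / area_mm2

print(f"150 kV electron speed: {electron_speed(150.0) / C:.2f} c")     # ~0.63 c
print(f"power density: {power_density(150.0, 10.0, 0.2):.0f} W/mm^2")  # ~5e4
```

A 1.5 kW beam focused to a 0.2 mm spot already reaches about 5 × 10⁴ W/mm², within the quoted 10⁴–10⁶ W/mm² range.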
Beam formation
Cathode
Conduction electrons (those not bound to the nucleus of atoms) move in the crystal lattice of metals with velocities distributed according to a Gaussian (normal) distribution that depends on temperature. They cannot leave the metal unless their kinetic energy (in eV) is higher than the potential barrier at the metal surface. The number of electrons fulfilling this condition increases exponentially with increasing metal temperature, following Richardson's law.
As a source of electrons for electron-beam welders, the material must fulfill certain requirements:
To achieve high power density, the emission current density [A/mm2], hence the working temperature, should be as high as possible,
To keep evaporation in vacuum low, the material must have a low enough vapour pressure at the working temperature.
The emitter must be mechanically stable, not chemically sensitive to gases present in the vacuum atmosphere (like oxygen and water vapour), easily available, etc.
These and other conditions limit the choice of emitter material to metals with high melting points, in practice to only tantalum and tungsten. Tungsten cathodes allow emission current densities of about 100 mA/mm², but only a small portion of the emitted electrons takes part in beam formation, depending on the electric field produced by the anode and control-electrode voltages. The most frequently used cathode is made of a tungsten strip about 0.05 mm thick. The appropriate width of the strip depends on the highest required emission current; for the lower range of beam power, up to about 2 kW, a width of w = 0.5 mm is appropriate.
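The exponential temperature dependence mentioned above is the Richardson–Dushman law, J = A·T²·exp(−W/kT). Evaluating it for tungsten with commonly quoted constants (A ≈ 1.2 × 10⁶ A·m⁻²·K⁻² and a work function W ≈ 4.5 eV; the operating temperatures below are assumed values) gives current densities of the same order as the ~100 mA/mm² cited:

```python
import math

A_RICHARDSON = 1.2e6    # A m^-2 K^-2, approximate theoretical constant
K_BOLTZMANN = 8.617e-5  # eV/K

def emission_current_density(temp_k: float, work_function_ev: float) -> float:
    """Richardson-Dushman thermionic emission current density, in A/mm^2."""
    j_si = A_RICHARDSON * temp_k ** 2 * math.exp(
        -work_function_ev / (K_BOLTZMANN * temp_k))
    return j_si * 1e-6  # A/m^2 -> A/mm^2

for t in (2600.0, 2800.0, 3000.0):  # tungsten near its working temperature
    j = emission_current_density(t, 4.5)
    print(f"{t:.0f} K: {1e3 * j:.0f} mA/mm^2")  # ~15, ~75, ~300 mA/mm^2
```

The steep rise between 2600 K and 3000 K shows why the cathode temperature must be tightly controlled.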
Acceleration
Electrons emitted from the cathode have low energy, only a few eV. To give them the required speed, they are accelerated by an electric field applied between the emitter and the anode. The accelerating field must also direct the electrons to form a narrow converging "bundle" around the axis. This can be achieved by an electric field in the proximity of the cathode which has both a radial and an axial component, forcing the electrons toward the axis. Due to this effect, the electron beam converges to a minimum diameter in a plane close to the anode.
For practical applications the power of the electron beam must be controllable. This is accomplished by another electric field, produced by a control (Wehnelt) electrode charged negatively with respect to the cathode.
At least this part of the electron gun must be evacuated to high vacuum, to prevent "burning" of the cathode and the emergence of electrical discharges.
Focusing
After leaving the anode, the divergent electron beam does not have a power density sufficient for welding metals and has to be focused. This can be accomplished by a magnetic field produced by electric current in a cylindrical coil.
The focusing effect of a rotationally symmetrical magnetic field on the trajectory of electrons is the result of the complicated influence of a magnetic field on a moving electron. This effect is a force proportional to the induction B of the field and electron velocity v. The vector product of the radial component of induction Br and axial component of velocity va is a force perpendicular to those vectors, causing the electron to move around the axis. An additional effect of this motion in the same magnetic field is another force F oriented radially to the axis, which is responsible for the focusing effect of the magnetic lens. The resulting trajectory of electrons in the magnetic lens is a curve similar to a helix. In this context variations of focal length (exciting current) cause a slight rotation of the beam cross-section.
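For a weak, rotationally symmetric magnetic lens, paraxial electron optics gives standard (non-relativistic) expressions for the focal length and the image rotation, quoted here as background:

1/f = (e / 8mV) ∫ Bz²(z) dz    and    θrot = √(e / 8mV) ∫ Bz(z) dz

where V is the accelerating voltage and Bz the axial field of the coil. Both quantities depend on the lens excitation current, which is why refocusing the beam also slightly rotates its cross-section, as noted above.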
Beam deflection system
The beam spot must be precisely positioned with respect to the joint to be welded. This is commonly accomplished mechanically by moving the workpiece with respect to the electron gun, but sometimes it is preferable to deflect the beam instead. A system of four coils positioned symmetrically around the gun axis behind the focusing lens, producing a magnetic field perpendicular to the gun axis, is typically used for this purpose.
Penetration
Electron penetration
When electrons from the beam impact the surface of a solid, some of them are reflected (backscattered), while others penetrate the surface, where they collide with the solid. In non-elastic collisions they lose their kinetic energy. Electrons can "travel" only a small distance below the surface before they transform their kinetic energy into heat. This distance is proportional to their initial energy and inversely proportional to the density of the solid. Under typical conditions the "travel distance" is on the order of hundredths of a millimeter.
Beam penetration
By increasing the number of electrons (the beam current), the power of the beam can be increased to any desired value. By focusing the beam onto a small diameter, planar power density values as high as 10⁴ up to 10⁷ W/mm² can be reached. Because electrons transfer their energy into heat in a thin layer of the solid, the power density in this volume can be very high, reaching values on the order of 10⁵–10⁷ W/mm³. Consequently, the temperature in this volume increases rapidly, by 10⁸–10⁹ K/s.
Results
The results of the beam application depend on several factors:
Beam power – The power of the beam [W] is the product of the accelerating voltage [kV] and beam current [mA], which are easily measured and must be precisely controlled. The power is controlled by the beam current at constant voltage, usually the highest accessible.
Power density (beam focusing) – The power density at the spot of incidence depends on factors like the size of the cathode electron source, the optical quality of the accelerating electric lens and the focusing magnetic lens, alignment of the beam, the value of the accelerating voltage, and the focal length. All these factors (except the focal length) are a function of the design.
Welding speed – The welding equipment enables adjustment of the relative speed of motion of the workpiece with respect to the beam within wide limits, e.g., between 2 and 50 mm/s.
Material properties – Depending on conditions, the extent of evaporation may vary from negligible to complete. At values of surface power density of around 10³ W/mm² the loss of material by evaporation is negligible for most metals, which is favorable for welding.
Geometry (shape and dimensions) of the joint
The final effect depends on the particular combination of these parameters.
Action of the beam at low power density or over a short interval results in melting a thin surface layer.
A defocused beam does not penetrate, and the material at low welding speeds is heated only by conduction of the heat from the surface, producing a hemispherical melted zone.
High power density and low speed produces a deeper and slightly conical melt zone.
A high power density, focused beam penetrates deeper in proportion to total power.
Welding process
Weldability
For welding thin-walled parts, appropriate welding aids are generally needed. Their construction must provide perfect contact of the parts and prevent movement during welding. Usually they have to be designed individually for a given workpiece.
Not all materials can be welded by an electron beam in a vacuum. This technology cannot be applied to materials with high vapour pressure at the melting temperature, which affects zinc, cadmium, magnesium, and practically all non-metals.
Another limitation may be the change of material properties induced by the welding process, such as a high speed of cooling.
Joining dissimilar materials
Some pairs of metal components cannot be welded in the usual sense, i.e., by melting part of both in the vicinity of the joint, if the materials have very different properties. It is nevertheless still possible to realize joints meeting high demands for mechanical compactness that are perfectly vacuum-tight. The principal approach is to melt only the material with the lower melting point, while the other remains solid. The advantage of electron-beam welding is its ability to localize heating to a precise point and to control exactly the energy needed for the process. A high vacuum contributes substantially to a positive result. A general rule for the construction of joints made this way is that the part with the lower melting point should be directly accessible to the beam.
Local vacuum
Local vacuum systems allow workpieces to be welded without being enclosed within a work chamber. Instead, a vacuum is established by sealing the chamber to one section of the workpiece, welding that section, and then moving the chamber or the workpiece (continuously or in discrete steps) to additional sections, repeating the process until the weld is complete.
Challenges
If the material melted by the beam shrinks during cooling after solidification, cracking, deformation and changes of shape may occur.
The butt weld of two plates may result in bending of the weldment because more material has been melted at the head than at the root of the weld, although this effect is not as substantial as in arc welding.
Cracks may appear in the weld. If both parts are rigid, weld shrinkage can produce high stress which may crack a brittle material (even if only after remelting by welding).
Equipment
Many welder types have been designed, differing in construction, working-space volume, workpiece manipulators, and beam power. Electron-beam generators (electron guns) designed for welding applications can supply beams with power ranging from a few watts up to about one hundred kilowatts. "Micro-welds" of tiny components can be realized, as well as deep welds up to 300 mm or more.
The major EBW components are:
Electron gun (beam generator)
Vacuum chamber
Workpiece manipulator (positioning mechanism)
Power supply
Control and monitoring electronics
Electron gun
Emitter
The electron gun generates, accelerates, and focuses the beam. Free electrons are gained by thermo-emission from a hot metal strap (or wire).
Accelerator
They are then accelerated and formed into a narrow beam by an electric field produced by three electrodes: the electron-emitting strap, the cathode connected to the negative pole of the high (accelerating) voltage power supply, and the anode. The third (Wehnelt, or control) electrode is charged negatively with respect to the cathode. Its negative potential controls the portion of emitted electrons entering the accelerating field, i.e., the electron-beam current. After passing the anode opening, the electrons move with constant speed in a slightly divergent cone.
Focuser
For technological applications the divergent beam has to be focused, which is realized by the magnetic field of a coil, the magnetic focusing lens.
The beam must be oriented to the optical axes of the accelerating electrical lens and the magnetic focusing lens. This can be done by applying a magnetic field of some specific radial direction and strength perpendicular to the optical axis before the focusing lens. This is usually realized by a simple correction system consisting of two pairs of coils. Adjusting the currents in these coils produces the correct field.
Deflector
After passing the focusing lens, the beam can be applied to welding, either directly or after deflection by a deflection system. The deflection system consists of two pairs of coils, one each for the x and y directions, which can be used for static or dynamic deflection. Static deflection is useful for exact positioning of the beam. Dynamic deflection is realized by supplying the deflection coils with computer-controlled currents. The beam can then be redirected to meet the needs of applications beyond welding, such as surface hardening, annealing, exact beam positioning, imaging, and engraving. Fine deflection resolution can be achieved.
Working chamber
Welding typically takes place in a working vacuum chamber in a high or low vacuum environment, although welders can also operate without a chamber.
Working chamber volumes range from a few liters up to hundreds of cubic meters.
Workpiece manipulator
Electron-beam welding can never be "hand-manipulated", even if not realized in vacuum, because of the presence of strong X-radiation. The relative motion of the beam and the workpiece is most often achieved by rotating or moving the workpiece or the beam.
Power supply
Electron-beam equipment must be provided with an appropriate power supply for the gun. Technical challenges and equipment costs are an increasing function of the operating (accelerating) voltage.
The high-voltage equipment must also supply a low-voltage heating current for the cathode and a negative bias voltage for the control electrode.
The electron gun needs low-voltage supplies for the correction system, the focusing lens, and the deflection system.
Control and monitoring
Electronics control the workpiece manipulator, monitor the welding process, and adjust the various voltages needed for a specific application.
Applications
Reactor pressure vessels
Such systems have been applied to welding reactor pressure vessels for small modular reactors, with enormous savings in time and cost over arc welding. Arc welding of pressure vessels requires many separate weld passes, with additional processing for each pass, whereas very thick sections can be electron-beam welded in a single pass. Shrinkage is minimal (although heat treatment is advisable), the welds avoid oxide or nitride contamination, the material retains its strength better, the welds have fewer flaws and voids, and less non-destructive examination (NDE) is required.
Wind turbine
An offshore wind turbine can require many arc-on hours of welding. Local-vacuum EBW can replace this at far lower cost and in less time, with improved quality.
See also
Electron-beam technology
References
External links
Schulze, Klaus-Rainer. "Electron Beam Technologies". DVS Media, Düsseldorf, 2012.
What is Electron Beam Welding?
Electron beam welding of thin-walled parts
Weldability of various materials
Leptons-Technologies Weldability of metals
Electron beams in manufacturing
Welding
de:Schweißen#Elektronenstrahlschweißen | Electron-beam welding | [
"Engineering"
] | 3,472 | [
"Welding",
"Mechanical engineering"
] |
758,727 | https://en.wikipedia.org/wiki/Charged%20particle%20beam | A charged particle beam is a spatially localized group of electrically charged particles that have approximately the same position, kinetic energy (resulting in the same velocity), and direction. The kinetic energies of the particles are much larger than the energies of particles at ambient temperature. The high energy and directionality of charged particle beams make them useful for many applications in particle physics (see Particle beam#Applications and Electron-beam technology).
Such beams can be split into two main classes:
unbunched beams (coasting beams or DC beams), which have no longitudinal substructure in the direction of beam motion.
bunched beams, in which the particles are distributed into pulses (bunches) of particles. Bunched beams are most common in modern facilities, since the most modern particle accelerators require bunched beams for acceleration.
Assuming a normal distribution of particle positions and impulses, a charged particle beam (or a bunch of the beam) is characterized by
the species of particle, e.g. electrons, protons, or atomic nuclei
the mean energy of the particles, often expressed in electronvolts (typically keV to GeV)
the (average) particle current, often expressed in amperes
the particle beam size, often using the so-called β-function
the beam emittance, a measure of the area occupied by the beam in one of several phase spaces.
These parameters can be expressed in various ways. For example, the current and beam size can be combined into the current density, and the current and energy (or beam voltage V) can be combined into the perveance K = I·V^(−3/2).
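As a quick worked example (the beam values are chosen purely for illustration), the perveance of a 1 A beam at 10 kV is 10⁻⁶ A/V^(3/2); a larger perveance means space-charge forces matter more for beam transport.

```python
def perveance(current_a: float, voltage_v: float) -> float:
    """Beam perveance K = I / V^(3/2), in A/V^1.5.

    Larger K means stronger space-charge effects during transport.
    """
    return current_a / voltage_v ** 1.5

print(f"K = {perveance(1.0, 10e3):.2e} A/V^1.5")  # K = 1.00e-06 A/V^1.5
```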
The charged particle beams that can be manipulated in particle accelerators can be subdivided into electron beams, ion beams and proton beams.
Common types
Electron beam, or cathode ray, such as in a scanning electron microscope or in accelerators such as the Large Electron–Positron Collider or synchrotron light sources.
Proton beam, such as the beams used in proton therapy, at colliders such as the Tevatron and the Large Hadron Collider, or for proton beam writing in lithography.
Ion beams, such as at the Relativistic Heavy Ion Collider or the Facility for Rare Isotope Beams.
References
Accelerator physics
Experimental particle physics | Charged particle beam | [
"Physics"
] | 470 | [
"Applied and interdisciplinary physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Accelerator physics"
] |
759,071 | https://en.wikipedia.org/wiki/Noctilucent%20cloud | Noctilucent clouds (NLCs), or night shining clouds, are tenuous cloud-like phenomena in the upper atmosphere of Earth. When viewed from space, they are called polar mesospheric clouds (PMCs), detectable as a diffuse scattering layer of water ice crystals near the summer polar mesopause. They consist of ice crystals and from the ground are only visible during astronomical twilight. Noctilucent roughly means "night shining" in Latin. They are most often observed during the summer months from latitudes between ±50° and ±70°. Too faint to be seen in daylight, they are visible only when the observer and the lower layers of the atmosphere are in Earth's shadow, but while these very high clouds are still in sunlight. Recent studies suggest that increased atmospheric methane emissions produce additional water vapor through chemical reactions once the methane molecules reach the mesosphere – creating, or reinforcing existing, noctilucent clouds.
General
No confirmed record of their observation exists before 1885, although they may have been observed a few decades earlier by Thomas Romney Robinson in Armagh.
Formation
Noctilucent clouds are composed of tiny crystals of water ice up to 100 nm in diameter and exist at heights of about 76 to 85 km, higher than any other clouds in Earth's atmosphere. Clouds in the Earth's lower atmosphere form when water collects on particles, but mesospheric clouds may form directly from water vapour in addition to forming on dust particles.
Data from the Aeronomy of Ice in the Mesosphere satellite suggests that noctilucent clouds require water vapour, dust, and very cold temperatures to form. The sources of both the dust and the water vapour in the upper atmosphere are not known with certainty. The dust is believed to come from micrometeors, although particulates from volcanoes and dust from the troposphere are also possibilities. The moisture could be lifted through gaps in the tropopause, as well as forming from the reaction of methane with hydroxyl radicals in the stratosphere.
The exhaust from Space Shuttles, in use between 1981 and 2011, which was almost entirely water vapour after the detachment of the Solid Rocket Boosters, was found to generate minuscule individual clouds. About half of the vapour was released into the thermosphere. In August 2014, a SpaceX Falcon 9 also caused noctilucent clouds over Orlando, Florida, after a launch.
The exhaust can be transported to the Arctic region in little over a day, although the exact mechanism of this very high-speed transfer is unknown. As the water migrates northward, it falls from the thermosphere into the colder mesosphere, which occupies the region of the atmosphere just below. Although this mechanism is the cause of individual noctilucent clouds, it is not thought to be a major contributor to the phenomenon as a whole.
As the mesosphere contains very little moisture, approximately one hundred millionth that of air from the Sahara, and is extremely thin, the ice crystals can form only at very low temperatures, below about −120 °C. This means that noctilucent clouds form predominantly during summer when, counterintuitively, the mesosphere is coldest as a result of seasonally varying vertical winds, which lead to cold summertime conditions in the upper mesosphere (upwelling, resulting in adiabatic cooling) and wintertime heating (downwelling, resulting in adiabatic heating). Therefore, they cannot be observed (even if they are present) inside the polar circles, because at this season the Sun is never low enough under the horizon at these latitudes. Noctilucent clouds form mostly near the polar regions, because the mesosphere is coldest there. Clouds in the southern hemisphere are slightly higher than those in the northern hemisphere.
Ultraviolet radiation from the Sun breaks water molecules apart, reducing the amount of water available to form noctilucent clouds. The radiation is known to vary cyclically with the solar cycle and satellites have been tracking the decrease in brightness of the clouds with the increase of ultraviolet radiation for the last two solar cycles. It has been found that changes in the clouds follow changes in the intensity of ultraviolet rays by about a year, but the reason for this long lag is not yet known.
Noctilucent clouds are known to exhibit high radar reflectivity, in a frequency range of 50 MHz to 1.3 GHz. This behaviour is not well understood but a possible explanation is that the ice grains become coated with a thin metal film composed of sodium and iron, which makes the cloud far more reflective to radar, although this explanation remains controversial. Sodium and iron atoms are stripped from incoming micrometeors and settle into a layer just above the altitude of noctilucent clouds, and measurements have shown that these elements are severely depleted when the clouds are present. Other experiments have demonstrated that, at the extremely low temperatures of a noctilucent cloud, sodium vapour can rapidly be deposited onto an ice surface.
Discovery and investigation
Noctilucent clouds are first known to have been observed in 1885, two years after the 1883 eruption of Krakatoa. It remains unclear whether their appearance had anything to do with the volcanic eruption or whether their discovery was due to more people observing the spectacular sunsets caused by the volcanic debris in the atmosphere. Studies have shown that noctilucent clouds are not caused solely by volcanic activity, although dust and water vapour could be injected into the upper atmosphere by eruptions and contribute to their formation. Scientists at the time assumed the clouds were another manifestation of volcanic ash, but after the ash had settled out of the atmosphere, the noctilucent clouds persisted. Finally, the theory that the clouds were composed of volcanic dust was disproved by Malzev in 1926. In the years following their discovery, the clouds were studied extensively by Otto Jesse of Germany, who was the first to photograph them, in 1887, and seems to have been the one to coin the term "noctilucent cloud". His notes provide evidence that noctilucent clouds first appeared in 1885. He had been doing detailed observations of the unusual sunsets caused by the Krakatoa eruption the previous year and firmly believed that, if the clouds had been visible then, he would undoubtedly have noticed them. Systematic photographic observations of the clouds were organized in 1887 by Jesse, Foerster, and Stolze and, after that year, continuous observations were carried out at the Berlin Observatory.
In the decades after Otto Jesse's death in 1901, there were few new insights into the nature of noctilucent clouds. Wegener's conjecture, that they were composed of water ice, was later shown to be correct. Study was limited to ground-based observations and scientists had very little knowledge of the mesosphere until the 1960s, when direct rocket measurements began. These showed for the first time that the clouds' occurrence coincided with very low temperatures in the mesosphere.
Noctilucent clouds were first detected from space by an instrument on the OGO-6 satellite in 1972. The OGO-6 observations of a bright scattering layer over the polar caps were identified as poleward extensions of these clouds. A later satellite, the Solar Mesosphere Explorer, mapped the distribution of the clouds between 1981 and 1986 with its ultraviolet spectrometer. The clouds were detected with a lidar in 1995 at Utah State University, even when they were not visible to the naked eye. The first physical confirmation that water ice is indeed the primary component of noctilucent clouds came from the HALOE instrument on the Upper Atmosphere Research Satellite in 2001.
In 2001, the Swedish Odin satellite performed spectral analyses on the clouds, and produced daily global maps that revealed large patterns in their distribution.
The AIM (Aeronomy of Ice in the Mesosphere) satellite was launched on 25 April 2007. It was the first satellite dedicated to studying noctilucent clouds, and made its first observations a month later (25 May). Images taken by the satellite show shapes in the clouds that are similar to shapes in tropospheric clouds, hinting at similarities in their dynamics.
In the previous year, scientists with the Mars Express mission had announced their discovery of carbon dioxide–crystal clouds on Mars that extended to above the planet's surface. These are the highest clouds discovered over the surface of a rocky planet. Like noctilucent clouds on Earth, they can be observed only when the Sun is below the horizon.
Research published in the journal Geophysical Research Letters in June 2009 suggests that noctilucent clouds observed following the Tunguska Event of 1908 are evidence that the impact was caused by a comet.
The United States Naval Research Laboratory (NRL) and the United States Department of Defense Space Test Program (STP) conducted the Charged Aerosol Release Experiment (CARE) on September 19, 2009, using exhaust particles from a Black Brant XII suborbital sounding rocket launched from NASA's Wallops Flight Facility to create an artificial noctilucent cloud. The cloud was to be observed over a period of weeks or months by ground instruments and the Spatial Heterodyne IMager for MEsospheric Radicals (SHIMMER) instrument on the NRL/STP STPSat-1 spacecraft. The rocket's exhaust plume was observed and reported to news organizations in the United States from New Jersey to Massachusetts.
A 2018 experiment briefly created noctilucent clouds over Alaska, allowing ground-based measurements and experiments aimed at verifying computer simulations of the phenomenon. A suborbital NASA rocket was launched on 26 January 2018 by University of Alaska professor Richard Collins. It carried water-filled canisters, which were released at about 85 km above the Earth. Since the naturally-occurring clouds only appear in summer, this experiment was conducted in mid-winter to ensure that its results would not be confused with a natural event.
Description from satellites
PMCs have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus. Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds. Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus. Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.
Satellite observations allow the very coldest parts of the polar mesosphere to be observed, all the way to the geographic pole. In the early 1970s, visible airglow photometers first scanned the atmospheric horizon throughout the summer polar mesopause region. This experiment, which flew on the OGO-6 satellite, was the first to trace noctilucent-like cloud layers across the polar cap. The very bright scattering layer was seen in full daylight conditions, and was identified as the poleward extension of noctilucent clouds. In the early 1980s, the layer was observed again from a satellite, the Solar Mesosphere Explorer (SME). On board this satellite was an ultraviolet spectrometer, which mapped the distributions of clouds over the period 1981 to 1986. The experiment measured the altitude profile of scattering from clouds at two spectral channels, primarily 265 nm and 296 nm.
Polar mesospheric clouds generally increase in brightness and occurrence frequency with increasing latitude, from about 60° to the highest latitudes observed (85°). So far, no apparent dependence on longitude has been found, nor is there any evidence of a dependence on auroral activity.
On 8 July 2018, NASA launched a giant balloon from Esrange, Sweden, which traveled through the stratosphere across the Arctic to Western Nunavut, Canada, in five days. The balloon carried cameras that captured six million high-resolution images, filling 120 terabytes of data storage, with the aim of studying PMCs affected by atmospheric gravity waves, which result from air being pushed up by mountain ranges all the way to the mesosphere. These images should aid the study of turbulence in the atmosphere and, consequently, better weather forecasting.
NASA uses the AIM satellite to study these noctilucent clouds, which always occur during the summer season near the poles. However, tomographic analyses of AIM satellite data indicate that there is a spatial negative correlation between albedo and wave-induced altitude.
Observation
Noctilucent clouds are generally colourless or pale blue, although occasionally other colours including red and green have been observed. The characteristic blue colour comes from absorption by ozone in the path of the sunlight illuminating the noctilucent cloud. They can appear as featureless bands, but frequently show distinctive patterns such as streaks, wave-like undulations, and whirls. They are considered a "beautiful natural phenomenon". Noctilucent clouds may be confused with cirrus clouds, but appear sharper under magnification. Those caused by rocket exhausts tend to show colours other than silver or blue, because of iridescence caused by the uniform size of the water droplets produced.
Noctilucent clouds may be seen at latitudes of 50° to 65°. They seldom occur at lower latitudes (although there have been sightings as far south as Paris, Utah, Italy, Turkey and Spain), and closer to the poles it does not get dark enough for the clouds to become visible. They occur during summer, from mid-May to mid-August in the northern hemisphere and between mid-November and mid-February in the southern hemisphere. They are very faint and tenuous, and may be observed only in twilight around sunrise and sunset when the clouds of the lower atmosphere are in shadow, but the noctilucent cloud is illuminated by the Sun. They are best seen when the Sun is between 6° and 16° below the horizon. Although noctilucent clouds occur in both hemispheres, they have been observed thousands of times in the northern hemisphere, but fewer than 100 times in the southern. Southern hemisphere noctilucent clouds are fainter and occur less frequently; additionally the southern hemisphere has a lower population and less land area from which to make observations.
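The lower edge of that 6°–16° window can be motivated with simple shadow geometry: a cloud at altitude h directly above an observer stays sunlit until the Sun is depressed by roughly arccos(R/(R + h)), where R is the Earth's radius. The following is a minimal Python sketch of this relation, ignoring atmospheric refraction and screening by the lower atmosphere; the altitude values are only representative.

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def max_sunlit_depression_deg(cloud_alt_km: float) -> float:
    """Solar depression angle (degrees) up to which a cloud directly
    overhead at the given altitude is still lit by direct sunlight,
    ignoring refraction and screening by the lower atmosphere."""
    return math.degrees(math.acos(R_EARTH_KM / (R_EARTH_KM + cloud_alt_km)))

print(f"noctilucent cloud at 83 km: {max_sunlit_depression_deg(83):.1f} deg")  # ~9.2
print(f"cirrus cloud at 12 km:      {max_sunlit_depression_deg(12):.1f} deg")  # ~3.5
```

This is why noctilucent clouds stay illuminated deep into twilight, while tropospheric clouds have long since fallen into the Earth's shadow.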
These clouds may be studied from the ground, from space, and directly by sounding rocket. Also, some noctilucent clouds are made of smaller crystals, 30 nm or less, which are invisible to observers on the ground because they do not scatter enough light.
Forms
The clouds may show a large variety of different patterns and forms. An identification scheme was developed by Fogle in 1970 that classified five different forms. These classifications have since been modified and subdivided.
Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.
Type II bands are long streaks that often occur in roughly parallel groups, usually more widely spaced than the bands or elements seen with cirrocumulus clouds.
Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.
Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.
See also
Aeronomy
Aeronomy of Ice in the Mesosphere
Cloud iridescence
Iridescent cloud
Polar stratospheric cloud
Space jellyfish
Twilight phenomenon
Citations
General and cited references
External links
NLC time-lapse movies
AIM satellite mission
BBC News Article – Mission to Target Highest Clouds
Noctilucent Cloud Observers' Homepage
Solar Occultation for Ice Experiment (SOFIE)
Southern Noctilucent Clouds observed at Punta Arenas, Chile
BBC Article – Spacecraft Chases Highest Clouds
CNN Article – Rocket launch prompts calls of strange lights in sky
BBC News – Audio slideshow: Noctilucent clouds
Time-lapse videos playlist of noctilucent clouds observed in Samara, Russia
Cloud types
Atmospheric optical phenomena | Noctilucent cloud | [
"Physics"
] | 3,317 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
759,264 | https://en.wikipedia.org/wiki/Amount%20of%20substance | In chemistry, the amount of substance (symbol n) in a given sample of matter is defined as a ratio (n = N/N_A) between the number of elementary entities (N) and the Avogadro constant (N_A). The entities are usually molecules, atoms, ions, or ion pairs of a specified kind. The particular substance sampled may be specified using a subscript, e.g., the amount of sodium chloride (NaCl) would be denoted as n(NaCl). The unit of amount of substance in the International System of Units is the mole (symbol: mol), a base unit. Since 2019, the value of the Avogadro constant is defined to be exactly 6.02214076×10²³ mol⁻¹. Sometimes, the amount of substance is referred to as the chemical amount or, informally, as the "number of moles" in a given sample of matter.
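As a worked illustration of the defining ratio n = N/N_A, the following minimal Python sketch converts a particle count into an amount of substance; the count used is arbitrary.

```python
N_A = 6.02214076e23  # Avogadro constant in mol^-1 (exact since 2019)

def amount_of_substance(n_entities: float) -> float:
    """Amount of substance n = N / N_A, in moles."""
    return n_entities / N_A

# Even a quintillion (1e18) molecules is a very small chemical amount:
print(amount_of_substance(1e18))  # ~1.66e-06 mol
```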
Usage
Historically, the mole was defined as the amount of substance in 12 grams of the carbon-12 isotope. As a consequence, the mass of one mole of a chemical compound, in grams, is numerically equal (for all practical purposes) to the mass of one molecule or formula unit of the compound, in daltons, and the molar mass of an isotope in grams per mole is approximately equal to the mass number (historically exact for carbon-12 with a molar mass of 12 g/mol). For example, a molecule of water has a mass of about 18.015 daltons on average, whereas a mole of water (which contains about 6.02214076×10²³ water molecules) has a total mass of about 18.015 grams.
In chemistry, because of the law of multiple proportions, it is often much more convenient to work with amounts of substances (that is, number of moles or of molecules) than with masses (grams) or volumes (liters). For example, the chemical fact "1 molecule of oxygen (O₂) will react with 2 molecules of hydrogen (H₂) to make 2 molecules of water (H₂O)" can also be stated as "1 mole of O₂ will react with 2 moles of H₂ to form 2 moles of water". The same chemical fact, expressed in terms of masses, would be "32 g (1 mole) of oxygen will react with approximately 4.0304 g (2 moles of H₂) of hydrogen to make approximately 36.0304 g (2 moles) of water" (and the numbers would depend on the isotopic composition of the reagents). In terms of volume, the numbers would depend on the pressure and temperature of the reagents and products. For the same reasons, the concentrations of reagents and products in solution are often specified in moles per liter, rather than grams per liter.
The amount of substance is also a convenient concept in thermodynamics. For example, the pressure of a certain quantity of a noble gas in a container of a given volume, at a given temperature, is directly related to the number of molecules in the gas (through the ideal gas law), not to its mass.
This technical sense of the term "amount of substance" should not be confused with the general sense of "amount" in the English language. The latter may refer to other measurements such as mass or volume, rather than the number of particles. There are proposals to replace "amount of substance" with more easily-distinguishable terms, such as enplethy and stoichiometric amount.
The IUPAC recommends that "amount of substance" should be used instead of "number of moles", just as the quantity mass should not be called "number of kilograms".
Nature of the particles
To avoid ambiguity, the nature of the particles should be specified in any measurement of the amount of substance: thus, a sample of 1 mol of molecules of oxygen (O₂) has a mass of about 32 grams, whereas a sample of 1 mol of atoms of oxygen (O) has a mass of about 16 grams.
Derived quantities
Molar quantities (per mole)
The quotient of some extensive physical quantity of a homogeneous sample by its amount of substance is an intensive property of the substance, usually named by the prefix "molar" or the suffix "per mole".
For example, the quotient of the mass of a sample by its amount of substance is its molar mass, for which the SI unit kilogram per mole or gram per mole may be used. This is about 18.015 g/mol for water, and 55.845 g/mol for iron. Similarly for volume, one gets the molar volume, which is about 18.069 millilitres per mole for liquid water and 7.092 mL/mol for iron at room temperature. From the heat capacity, one gets the molar heat capacity, which is about 75.385 J/(K⋅mol) for water and about 25.10 J/(K⋅mol) for iron.
Molar mass
The molar mass (M) of a substance is the ratio of the mass (m) of a sample of that substance to its amount of substance (n): M = m/n. The amount of substance is given as the number of moles in the sample. For most practical purposes, the numerical value of the molar mass in grams per mole is the same as that of the mean mass of one molecule or formula unit of the substance in daltons, as the mole was historically defined such that the molar mass constant was exactly 1 g/mol. Thus, given the molecular mass or formula mass in daltons, the same number in grams gives an amount very close to one mole of the substance. For example, the average molecular mass of water is about 18.015 Da and the molar mass of water is about 18.015 g/mol. This allows for accurate determination of the amount in moles of a substance by measuring its mass and dividing by the molar mass of the compound: n = m/M. For example, 100 g of water is about 5.551 mol of water. Other methods of determining the amount of substance include the use of the molar volume or the measurement of electric charge.
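A minimal sketch of the n = m/M relation, using the approximate molar masses quoted in this article:

```python
MOLAR_MASS_G_PER_MOL = {"H2O": 18.015, "NaCl": 58.443, "Fe": 55.845}

def moles_from_mass(mass_g: float, species: str) -> float:
    """n = m / M, with M taken from the approximate table above."""
    return mass_g / MOLAR_MASS_G_PER_MOL[species]

print(round(moles_from_mass(100.0, "H2O"), 3))  # 5.551 mol, as in the text
```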
The molar mass of a substance depends not only on its molecular formula, but also on the distribution of isotopes of each chemical element present in it. For example, the molar mass of calcium-40 is about 39.963 g/mol, whereas the molar mass of calcium-42 is about 41.959 g/mol, and of calcium with the normal isotopic mix is about 40.078 g/mol.
Amount (molar) concentration (moles per liter)
Another important derived quantity is the molar concentration (c) (also called amount of substance concentration, amount concentration, or substance concentration, especially in clinical chemistry), defined as the amount in moles (n) of a specific substance (solute in a solution or component of a mixture), divided by the volume (V) of the solution or mixture: c = n/V.
The standard SI unit of this quantity is mol/m3, although more practical units are commonly used, such as mole per liter (mol/L, equivalent to mol/dm3). For example, the amount concentration of sodium chloride in ocean water is typically about 0.599 mol/L.
The denominator is the volume of the solution, not of the solvent. Thus, for example, one liter of standard vodka contains about 0.40 L of ethanol (315 g, 6.85 mol) and 0.60 L of water. The amount concentration of ethanol is therefore (6.85 mol of ethanol)/(1 L of vodka) = 6.85 mol/L, not (6.85 mol of ethanol)/(0.60 L of water), which would be 11.4 mol/L.
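The vodka figures can be checked numerically. The sketch below assumes an ethanol density of about 789 g/L and a molar mass of about 46.07 g/mol, both round reference values:

```python
ETHANOL_DENSITY_G_PER_L = 789.0  # approximate
ETHANOL_MOLAR_MASS = 46.07       # g/mol, approximate

def molar_concentration(n_mol: float, v_solution_l: float) -> float:
    """c = n / V, in mol/L; V is the volume of the whole solution."""
    return n_mol / v_solution_l

n_ethanol = 0.40 * ETHANOL_DENSITY_G_PER_L / ETHANOL_MOLAR_MASS  # ~6.85 mol
print(round(molar_concentration(n_ethanol, 1.0), 2))  # 6.85 mol/L, not 11.4
```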
In chemistry, it is customary to read the unit "mol/L" as molar, and denote it by the symbol "M" (both following the numeric value). Thus, for example, each liter of a "0.5 molar" or "0.5 M" solution of urea () in water contains 0.5 moles of that molecule. By extension, the amount concentration is also commonly called the molarity of the substance of interest in the solution. However, as of May 2007, these terms and symbols are not condoned by IUPAC.
This quantity should not be confused with the mass concentration, which is the mass of the substance of interest divided by the volume of the solution (about 35 g/L for sodium chloride in ocean water).
Amount (molar) fraction (moles per mole)
Confusingly, the amount (molar) concentration should also be distinguished from the molar fraction (also called mole fraction or amount fraction) of a substance in a mixture (such as a solution), which is the number of moles of the compound in one sample of the mixture, divided by the total number of moles of all components. For example, if 20 g of NaCl is dissolved in 100 g of water, the amounts of the two substances in the solution will be (20 g)/(58.443 g/mol) = 0.34221 mol and (100 g)/(18.015 g/mol) = 5.5509 mol, respectively; and the molar fraction of NaCl will be 0.34221 mol/(0.34221 mol + 5.5509 mol) ≈ 0.05807.
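The same NaCl-in-water figures, computed directly in a short sketch:

```python
n_nacl = 20.0 / 58.443  # mol of NaCl in the sample
n_h2o = 100.0 / 18.015  # mol of water

# Mole fraction: moles of one component over total moles of all components.
x_nacl = n_nacl / (n_nacl + n_h2o)
print(round(x_nacl, 5))  # 0.05807
```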
In a mixture of gases, the partial pressure of each component is proportional to its molar fraction.
History
The alchemists, and especially the early metallurgists, probably had some notion of amount of substance, but there are no surviving records of any generalization of the idea beyond a set of recipes. In 1758, Mikhail Lomonosov questioned the idea that mass was the only measure of the quantity of matter, but he did so only in relation to his theories on gravitation. The development of the concept of amount of substance was coincidental with, and vital to, the birth of modern chemistry.
1777: Wenzel publishes Lessons on Affinity, in which he demonstrates that the proportions of the "base component" and the "acid component" (cation and anion in modern terminology) remain the same during reactions between two neutral salts.
1789: Lavoisier publishes Treatise of Elementary Chemistry, introducing the concept of a chemical element and clarifying the Law of conservation of mass for chemical reactions.
1792: Richter publishes the first volume of Stoichiometry or the Art of Measuring the Chemical Elements (publication of subsequent volumes continues until 1802). The term "stoichiometry" is used for the first time. The first tables of equivalent weights are published for acid–base reactions. Richter also notes that, for a given acid, the equivalent mass of the acid is proportional to the mass of oxygen in the base.
1794: Proust's Law of definite proportions generalizes the concept of equivalent weights to all types of chemical reaction, not simply acid–base reactions.
1805: Dalton publishes his first paper on modern atomic theory, including a "Table of the relative weights of the ultimate particles of gaseous and other bodies".
The concept of atoms raised the question of their weight. While many were skeptical about the reality of atoms, chemists quickly found atomic weights to be an invaluable tool in expressing stoichiometric relationships.
1808: Publication of Dalton's A New System of Chemical Philosophy, containing the first table of atomic weights (based on H = 1).
1809: Gay-Lussac's Law of combining volumes, stating an integer relationship between the volumes of reactants and products in the chemical reactions of gases.
1811: Avogadro hypothesizes that equal volumes of different gases (at same temperature and pressure) contain equal numbers of particles, now known as Avogadro's law.
1813/1814: Berzelius publishes the first of several tables of atomic weights based on the scale of m(O) = 100.
1815: Prout publishes his hypothesis that all atomic weights are integer multiples of the atomic weight of hydrogen. The hypothesis is later abandoned given the observed atomic weight of chlorine (approx. 35.5 relative to hydrogen).
1819: Dulong–Petit law relating the atomic weight of a solid element to its specific heat capacity.
1819: Mitscherlich's work on crystal isomorphism allows many chemical formulae to be clarified, resolving several ambiguities in the calculation of atomic weights.
1834: Clapeyron states the ideal gas law.
The ideal gas law was the first discovered of many relationships between the number of atoms or molecules in a system and the system's other physical properties, apart from its mass. However, this was not sufficient to convince all scientists of the existence of atoms and molecules; many considered it simply a useful tool for calculation.
1834: Faraday states his Laws of electrolysis, in particular that "the chemical decomposing action of a current is constant for a constant quantity of electricity".
1856: Krönig derives the ideal gas law from kinetic theory. Clausius publishes an independent derivation the following year.
1860: The Karlsruhe Congress debates the relation between "physical molecules", "chemical molecules" and atoms, without reaching consensus.
1865: Loschmidt makes the first estimate of the size of gas molecules and hence of number of molecules in a given volume of gas, now known as the Loschmidt constant.
1886: van't Hoff demonstrates the similarities in behaviour between dilute solutions and ideal gases.
1886: Eugen Goldstein observes discrete particle rays in gas discharges, laying the foundation of mass spectrometry, a tool subsequently used to establish the masses of atoms and molecules.
1887: Arrhenius describes the dissociation of electrolyte in solution, resolving one of the problems in the study of colligative properties.
1893: First recorded use of the term mole to describe a unit of amount of substance by Ostwald in a university textbook.
1897: First recorded use of the term mole in English.
By the turn of the twentieth century, the concept of atomic and molecular entities was generally accepted, but many questions remained, not least the size of atoms and their number in a given sample. The concurrent development of mass spectrometry, starting in 1886, supported the concept of atomic and molecular mass and provided a tool of direct relative measurement.
1905: Einstein's paper on Brownian motion dispels any last doubts on the physical reality of atoms, and opens the way for an accurate determination of their mass.
1909: Perrin coins the name Avogadro constant and estimates its value.
1913: Discovery of isotopes of non-radioactive elements by Soddy and Thomson.
1914: Richards receives the Nobel Prize in Chemistry for "his determinations of the atomic weight of a large number of elements".
1920: Aston proposes the whole number rule, an updated version of Prout's hypothesis.
1921: Soddy receives the Nobel Prize in Chemistry "for his work on the chemistry of radioactive substances and investigations into isotopes".
1922: Aston receives the Nobel Prize in Chemistry "for his discovery of isotopes in a large number of non-radioactive elements, and for his whole-number rule".
1926: Perrin receives the Nobel Prize in Physics, in part for his work in measuring the Avogadro constant.
1959/1960: Unified atomic mass unit scale based on m(¹²C) = 12 u adopted by IUPAP and IUPAC.
1968: The mole is recommended for inclusion in the International System of Units (SI) by the International Committee for Weights and Measures (CIPM).
1972: The mole is approved as the SI base unit of amount of substance.
2019: The mole is redefined in the SI as "the amount of substance of a system that contains 6.02214076×10²³ specified elementary entities".
See also
International System of Quantities
Quantity of matter
References
SI base quantities | Amount of substance | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,241 | [
"Scalar physical quantities",
"Chemical reaction engineering",
"Stoichiometry",
"Physical quantities",
"SI base quantities",
"Chemical quantities",
"Quantity",
"Amount of substance",
"nan",
"Wikipedia categories named after physical quantities"
] |
759,298 | https://en.wikipedia.org/wiki/Linear%20density | Linear density is the measure of a quantity of any characteristic value per unit of length. Linear mass density (titer in textile engineering, the amount of mass per unit length) and linear charge density (the amount of electric charge per unit length) are two common examples used in science and engineering.
The term linear density or linear mass density is most often used when describing the characteristics of one-dimensional objects, although linear density can also be used to describe the density of a three-dimensional quantity along one particular dimension. Just as density is most often used to mean mass density, the term linear density likewise often refers to linear mass density. However, this is only one example of a linear density, as any quantity can be measured in terms of its value along one dimension.
Linear mass density
Consider a long, thin rod of mass M and length L. To calculate the average linear mass density, λm, of this one dimensional object, we can simply divide the total mass, M, by the total length, L:

λm = M/L

If we describe the rod as having a varying mass (one that varies as a function of position along the length of the rod, l), we can write:

m = m(l)

Each infinitesimal unit of mass, dm, is equal to the product of its linear mass density, λm, and the infinitesimal unit of length, dl:

dm = λm dl

The linear mass density can then be understood as the derivative of the mass function with respect to the one dimension of the rod (the position along its length, l):

λm = dm/dl
The SI unit of linear mass density is the kilogram per meter (kg/m).
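As a numerical illustration of these definitions, the sketch below takes a hypothetical rod whose density grows linearly along its length, λ(l) = a + b·l, and compares a midpoint-rule integral of λ dl with the closed-form mass; all parameter values are invented:

```python
a, b, L = 0.5, 0.2, 2.0  # kg/m, kg/m^2, m -- made-up values
n = 100_000
dl = L / n

# m = integral of lambda(l) dl from 0 to L, approximated by the midpoint rule:
mass_numeric = sum((a + b * (i + 0.5) * dl) * dl for i in range(n))
mass_exact = a * L + b * L**2 / 2

print(round(mass_numeric, 6), mass_exact)  # both 1.4 kg
```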
Linear density of fibers and yarns can be measured by many methods. The simplest one is to measure a length of material and weigh it. However, this requires a large sample and masks the variability of linear density along the thread, and is difficult to apply if the fibers are crimped or otherwise cannot lie flat relaxed. If the density of the material is known and the fibers are measured individually and have a simple shape, a more accurate method is direct imaging of the fiber with a scanning electron microscope to measure the diameter, followed by calculation of the linear density. Finally, linear density can be measured directly with a vibroscope. The sample is tensioned between two hard points, mechanical vibration is induced and the fundamental frequency is measured.
Linear charge density
Consider a long, thin wire of charge Q and length L. To calculate the average linear charge density, λq, of this one dimensional object, we can simply divide the total charge, Q, by the total length, L:

λq = Q/L

If we describe the wire as having a varying charge (one that varies as a function of position along the length of the wire, l), we can write:

q = q(l)

Each infinitesimal unit of charge, dq, is equal to the product of its linear charge density, λq, and the infinitesimal unit of length, dl:

dq = λq dl

The linear charge density can then be understood as the derivative of the charge function with respect to the one dimension of the wire (the position along its length, l):

λq = dq/dl

Notice that these steps were exactly the same ones we took before to find λm.
The SI unit of linear charge density is the coulomb per meter (C/m).
Other applications
In drawing or printing, the term linear density also refers to how densely or heavily a line is drawn.
The most famous abstraction of linear density is the probability density function of a single random variable.
Units
Common units include:
kilogram per meter (using SI base units)
ounce (mass) per foot
ounce (mass) per inch
pound (mass) per yard: used in the North American railway industry for the linear density of rails
pound (mass) per foot
pound (mass) per inch
tex, a unit of measure for the linear density of fibers, defined as the mass in grams per 1,000 meters
denier, a unit of measure for the linear density of fibers, defined as the mass in grams per 9,000 meters
decitex (dtex), a unit for the linear density of fibers, defined as the mass in grams per 10,000 meters
See also
Density
Area density
Columnar density
Paper density
Linear number density
References
Density
Length | Linear density | [
"Physics",
"Mathematics"
] | 826 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Mass",
"Size",
"Density",
"Length",
"Wikipedia categories named after physical quantities",
"Matter"
] |
759,360 | https://en.wikipedia.org/wiki/Area%20density | The area density (also known as areal density, surface density, superficial density, areic density, mass thickness, column density, or density thickness) of a two-dimensional object is calculated as the mass per unit area. The SI derived unit is the "kilogram per square metre" (kg·m⁻²).
In the paper and fabric industries, it is called grammage and is expressed in grams per square meter (g/m2); for paper in particular, it may be expressed as pounds per ream of standard sizes ("basis ream").
A related area number density can be defined by replacing mass by number of particles or other countable quantity, with resulting units of m⁻².
Formulation
Area density can be calculated as:

ρA = m/A

or

ρA = ρ·t

where ρA is the average area density, m is the total mass of the object, A is the total area of the object, ρ is the average density, and t is the average thickness of the object.
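A small sketch of both formulas, using office paper as the example; the thickness and bulk density are assumed round values:

```python
def area_density(mass_kg: float, area_m2: float) -> float:
    """rho_A = m / A, in kg/m^2."""
    return mass_kg / area_m2

def area_density_from_thickness(density_kg_m3: float, thickness_m: float) -> float:
    """rho_A = rho * t: average bulk density times average thickness."""
    return density_kg_m3 * thickness_m

# 80 g/m^2 paper about 100 micrometres thick implies ~800 kg/m^3 bulk density:
print(round(area_density_from_thickness(800.0, 100e-6) * 1000, 1))  # 80.0 g/m^2
print(round(area_density(0.005, 0.0625) * 1000, 1))  # a ~5 g A4-size sheet: 80.0 g/m^2
```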
Column density
A special type of area density is called column density (also columnar mass density or simply column density), denoted ρA or σ. It is the mass of substance per unit area integrated along a path; it is obtained by integrating the volumetric density ρ over a column:

σ = ∫ ρ ds

In general the integration path can be slant or oblique incidence (as in, for example, line of sight propagation in atmospheric physics). A common special case is a vertical path, from the bottom to the top of the medium:

σ = ∫ ρ dz

where z denotes the vertical coordinate (e.g., height or depth).
Columnar density σ is closely related to the vertically averaged volumetric density ρ̄ as

ρ̄ = σ/Δz

where Δz = ∫ dz; ρ̄, σ, and Δz have units of, for example, grams per cubic metre, grams per square metre, and metres, respectively.
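For instance, for an idealized exponential atmosphere with ρ(z) = ρ0·e^(−z/H), the vertical column density integrates to σ = ρ0·H. The sketch below checks this numerically under assumed round values (sea-level density ρ0 ≈ 1.225 kg/m³, scale height H ≈ 8,500 m):

```python
import math

RHO0 = 1.225  # sea-level air density, kg/m^3 (assumed)
H = 8500.0    # scale height, m (assumed)

sigma_exact = RHO0 * H  # closed form: integral of rho0*exp(-z/H) dz over [0, inf)

# Midpoint-rule check, integrating up to 100 km in 10 m steps:
dz = 10.0
sigma_numeric = sum(RHO0 * math.exp(-(i + 0.5) * dz / H) * dz for i in range(10_000))

print(round(sigma_exact, 1), round(sigma_numeric, 1))  # both ~10412 kg/m^2
```

Multiplying that column density by g ≈ 9.81 m/s² gives roughly 102 kPa, close to standard sea-level pressure, a useful sanity check on the assumed values.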
Usage
Atmospheric physics
It is a quantity commonly retrieved by remote sensing instruments, for instance the Total Ozone Mapping Spectrometer (TOMS) which retrieves ozone columns around the globe. Columns are also returned by the differential optical absorption spectroscopy (DOAS) method and are a common retrieval product from nadir-looking microwave radiometers.
A closely related concept is that of ice or liquid water path, which specifies the volume per unit area or depth instead of mass per unit area; thus the two are related through the density of the condensed water:

P = σ/ρ0

where ρ0 is the density of liquid water (or of ice, for ice water path).
Another closely related concept is optical depth.
Astronomy
In astronomy, the column density is generally used to indicate the number of atoms or molecules per square centimetre (cm⁻²) along the line of sight in a particular direction, as derived from observations of e.g. the 21-cm hydrogen line or from observations of a certain molecular species. Also the interstellar extinction can be related to the column density of H or H₂.
The concept of area density can be useful when analysing accretion disks. In the case of a disk seen face-on, area density for a given area of the disk is defined as column density: that is, either as the mass of substance per unit area integrated along the vertical path that goes through the disk (line-of-sight), from the bottom to the top of the medium:

σ = ∫ ρ dz

where z denotes the vertical coordinate (e.g., height or depth), or as the number or count of a substance—rather than the mass—per unit area integrated along a path (column number density):

N = ∫ n dz
Data storage media
Areal density is used to quantify and compare different types of media used in data storage devices such as hard disk drives, optical disc drives and tape drives. The current unit of measure is typically gigabits per square inch.
Paper
The area density is often used to describe the thickness of paper; e.g., 80 g/m2 is very common.
Fabric
Fabric "weight" is often specified as mass per unit area, grams per square meter (gsm) or ounces per square yard. It is also sometimes specified in ounces per yard in a standard width for the particular cloth. One gram per square meter equals 0.0295 ounces per square yard; one ounce per square yard equals 33.9 grams per square meter.
Other
It is also an important quantity for the absorption of radiation.
When studying bodies falling through air, area density is important because resistance depends on area, and gravitational force is dependent on mass.
Bone density is often expressed in grams per square centimeter (g·cm−2) as measured by x-ray absorptiometry, as a proxy for the actual density.
The body mass index is expressed in units of kilograms per square meter, though the area figure is nominal, being the square of the height.
The total electron content in the ionosphere is a quantity of type columnar number density.
Snow water equivalent is a quantity of type columnar mass density.
See also
Areic quantity
Linear density
Paper density
References
Atmospheric physics
Mass density
Classical mechanics
Area-specific quantities | Area density | [
"Physics",
"Mathematics"
] | 977 | [
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Area-specific quantities",
"Quantity",
"Mass",
"Intensive quantities",
"Classical mechanics",
"Volume-specific quantities",
"Atmospheric physics",
"Mechanics",
"Density",
"Mass density",
"Matter"
] |
759,422 | https://en.wikipedia.org/wiki/Data%20modeling | Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of broader Model-driven engineering (MDE) concept.
Overview
Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.
Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:
to assist business analysts, programmers, testers, manual writers, IT package selectors, engineers, managers, related organizations and clients to understand and use an agreed upon semi-formal model that encompasses the concepts of the organization and how they relate to one another
to manage data as a resource
to integrate information systems
to design databases/data warehouses (aka data repositories)
Data modeling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) determined two types of data modeling:
Strategic data modeling: This is part of the creation of an information systems strategy, which defines an overall vision and architecture for information systems. Information technology engineering is a methodology that embraces this approach.
Data modeling during systems analysis: In systems analysis logical data models are created as part of the development of new databases.
Data modeling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modeling because a data model is eventually implemented in a database.
Topics
Data models
Data models provide a framework for data to be used within information systems by providing specific definitions and formats. If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor.
Some common problems found in data models are:
Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces. So, business rules need to be implemented in a flexible way that does not result in complicated dependencies, rather the data model should be flexible enough so that changes in the business can be implemented within the data model in a relatively quick and efficient way.
Entity types are often not identified, or are identified incorrectly. This can lead to replication of data, data structure and functionality, together with the attendant costs of that duplication in development and maintenance. Therefore, data definitions should be made as explicit and easy to understand as possible to minimize misinterpretation and duplication.
Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25 and 70% of the cost of current systems. Required interfaces should be considered inherently while designing a data model, as a data model on its own would not be usable without interfaces within different systems.
Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data have not been standardised. To obtain optimal value from an implemented data model, it is very important to define standards that will ensure that data models will both meet business needs and be consistent.
Conceptual, logical and physical schemas
In 1975 ANSI described three kinds of data-model instance:
Conceptual schema: describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationships assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial "language" with a scope that is limited by the scope of the model. Simply described, a conceptual schema is the first step in organizing the data requirements.
Logical schema: describes the structure of some domain of information. This consists of descriptions of (for example) tables, columns, object-oriented classes, and XML tags. The logical schema and conceptual schema are sometimes implemented as one and the same.
Physical schema: describes the physical means used to store data. This is concerned with partitions, CPUs, tablespaces, and the like.
According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema. The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model.
Data modeling process
In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.
The process of designing a database involves producing the previously described three types of schemas – conceptual, logical, and physical. The database design documented in these schemas is converted through a Data Definition Language, which can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term "database design" can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term "database design" could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.
In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system by system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework, requiring only minimal modifications for the purposes of different systems within the organization.
Modeling methodologies
Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997) only two modeling methodologies stand out, top-down and bottom-up:
Bottom-up models or View Integration models are often the result of a reengineering effort. They usually start with existing data structures forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization.
Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template.
Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. In many environments the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don't make a distinction between logical and physical data models.
Entity–relationship diagrams
There are several notations for data modeling. The actual model is frequently called "entity–relationship model", because it depicts data in terms of the entities and relationships described in the data. An entity–relationship model (ERM) is an abstract conceptual representation of structured data. Entity–relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.
These models are being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain universe of discourse i.e. area of interest.
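As a concrete illustration, here is a minimal sketch of how a small entity–relationship model, a hypothetical Customer placing Orders, might move from a logical description to one possible physical schema; every name in it is invented for the example:

```python
import sqlite3
from dataclasses import dataclass

# Logical view: two entity types and a one-to-many relationship between them.
@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # reference realising the Customer-places-Order relationship
    total: float

# One possible physical schema the logical model could be translated into:
DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       NUMERIC
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)  # the physical schema becomes an actual database
```

The same conceptual statement ("customers place orders") could map to many different physical schemas; separating the levels lets the storage details change without disturbing the conceptual description.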
Several techniques have been developed for the design of data models. While these methodologies guide data modelers in their work, two different people using the same methodology will often come up with very different results. Most notable are:
Bachman diagrams
Barker's notation
Chen's notation
Data Vault Modeling
Extended Backus–Naur form
IDEF1X
Object-relational mapping
Object-Role Modeling and Fully Communication Oriented Information Modeling
Relational Model
Relational Model/Tasmania
Generic data modeling
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type.
The definition of generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class) and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related.
Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardization of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.
Semantic data modeling
The logical data structure of a DBMS, whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. That is, unless the semantic data model is deliberately implemented in the database, a choice which may slightly impact performance but generally vastly improves productivity.
Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure the real world, in terms of resources, ideas, events, etc., is symbolically defined by its description within physical data stores. A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.
The purpose of semantic data modeling is to create a structural model of a piece of the real world, called "universe of discourse". For this, three fundamental structural relations are considered:
Classification/instantiation: Objects with some structural similarity are described as instances of classes
Aggregation/decomposition: Composed objects are obtained joining its parts
Generalization/specialization: Distinct classes with some common properties are reconsidered in a more generic class with the common attributes
A semantic data model can be used to serve many purposes, such as:
Planning of data resources
Building of shareable databases
Evaluation of vendor software
Integration of existing databases
The overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the artificial intelligence field. The idea is to provide high level modeling primitives as integral part of a data model in order to facilitate the representation of real world situations.
See also
Architectural pattern
Comparison of data modeling tools
Data (computer science)
Data dictionary
Document modeling
Enterprise data modelling
Entity Data Model
Information management
Information model
Building information modeling
Metadata modeling
Three-schema approach
Zachman Framework
References
Further reading
J.H. ter Bekke (1991). Semantic Data Modeling in Relational Environments
John Vincent Carlis, Joseph D. Maguire (2001). Mastering Data Modeling: A User-driven Approach.
Alan Chmura, J. Mark Heumann (2005). Logical Data Modeling: What it is and how to Do it.
Martin E. Modell (1992). Data Analysis, Data Modeling, and Classification.
M. Papazoglou, Stefano Spaccapietra, Zahir Tari (2000). Advances in Object-oriented Data Modeling.
G. Lawrence Sanders (1995). Data Modeling
Graeme C. Simsion, Graham C. Witt (2005). Data Modeling Essentials'''
Matthew West (2011) Developing High Quality Data Models''
External links
Agile/Evolutionary Data Modeling
Data modeling articles
Database Modelling in UML
Data Modeling 101
Semantic data modeling
System Development, Methodologies and Modeling Notes on by Tony Drewry
Request For Proposal - Information Management Metamodel (IMM) of the Object Management Group
Data Modeling is NOT just for DBMS's Part 1 Chris Bradley
Data Modeling is NOT just for DBMS's Part 2 Chris Bradley | Data modeling | [
"Engineering"
] | 3,057 | [
"Data modeling",
"Data engineering"
] |
759,690 | https://en.wikipedia.org/wiki/Slip%20ring | A slip ring is an electromechanical device that allows the transmission of power and electrical signals from a stationary to a rotating structure. A slip ring can be used in any electromechanical system that requires rotation while transmitting power or signals. It can improve mechanical performance, simplify system operation and eliminate damage-prone wires dangling from movable joints.
Also called rotary electrical interfaces, rotating electrical connectors, collectors, swivels, or electrical rotary joints, these rings are commonly found in slip ring motors, electrical generators for alternating current (AC) systems and alternators and in packaging machinery, cable reels, and wind turbines. They can be used on any rotating object to transfer power, control circuits, or analog or digital signals including data such as those found on aerodrome beacons, rotating tanks, power shovels, radio telescopes, telemetry systems, heliostats or ferris wheels.
A slip ring (in electrical engineering terms) is a method of making an electrical connection through a rotating assembly. Formally, it is an electric transmission device that allows energy flow between two electrical rotating parts, such as in a motor.
Composition
Typically, a slip ring consists of a stationary graphite or metal contact (brush) which rubs on the outside diameter of a rotating metal ring. As the metal ring turns, the electric current or signal is conducted through the stationary brush to the metal ring making the connection. Additional ring/brush assemblies are stacked along the rotating axis if more than one electrical circuit is needed. Either the brushes or the rings are stationary and the other component rotates.
This simple design has been used for decades as a rudimentary method of passing current into a rotating device.
Alternative names and uses
Some other names used for slip ring are collector ring, rotary electrical contact and electrical slip ring. Some people also use the term commutator; however, commutators are somewhat different and are specialized for use on DC motors and generators. While commutators are segmented, slip rings are continuous, and the terms are not interchangeable. Rotary transformers are often used instead of slip rings in high-speed or low-friction environments.
A slip ring can be used within a rotary union to function concurrently with the device, commonly referred to as a rotary joint. Slip rings do the same for electrical power and signal that rotary unions do for fluid media. They are often integrated into rotary unions to send power and data to and from rotating machinery in conjunction with the media that the rotary union provides.
History
The basic principle of slip rings can be traced back to the late 19th century when they were initially used in early electrical experiments and the development of electrical generators and motors. With the advent of the Industrial Revolution and the increasing demand for electrical power, the technology behind slip rings started to evolve. They became essential components in large-scale electrical machinery, such as turbines and generators, allowing for the transfer of power and signals in machines where a part of the machinery needed to rotate continuously.
Types
Slip rings are made in various types and sizes; one device made for theatrical stage lighting, for example, had 100 conductors. The slip ring allows for unlimited rotations of the connected object, whereas a slack cable can only be twisted a few times before it will bind up and restrict rotation.
Mercury-wetted slip rings
Mercury-wetted slip rings, noted for their low resistance and stable connection, use a different principle, which replaces the sliding brush contact with a pool of liquid metal molecularly bonded to the contacts. During rotation, the liquid metal maintains the electrical connection between the stationary and rotating contacts. However, the use of mercury poses safety concerns if it is not properly handled, as it is a toxic substance. The slip ring device is also limited by temperature, as mercury solidifies at approximately −40 °C.
Pancake slip rings
A pancake slip ring has the conductors arranged on a flat disc as concentric rings centered on the rotating shaft. This configuration has greater weight and volume for the same circuits, greater capacitance and crosstalk, greater brush wear and more readily collects wear debris on its vertical axis. However, a pancake offers reduced axial length for the number of circuits, and so may be appropriate in some applications.
Wireless slip rings
Wireless slip rings do not rely on the typical friction-based metal and carbon brush contact methods that have been employed by slip rings since their invention, such as those explored above. Instead, they transfer both power and data wirelessly via a magnetic field, which is created by the coils that are placed in the rotating receiver, and the stationary transmitter. Wireless slip rings are considered an upgrade from — or alternative to — traditional slip rings, as their lack of standard mechanical rotating parts means they are typically more resilient in harsh operating environments and require less maintenance. However, the amount of power that can be transmitted between coils is limited; typically a traditional contact-type slip ring can transmit orders of magnitude more power in the same volume.
References
External links
Video depicting the interior of a slip ring in motion
Electrical power connectors
Electric motors | Slip ring | [
"Technology",
"Engineering"
] | 1,019 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
759,697 | https://en.wikipedia.org/wiki/DC%20motor | A DC motor is an electrical motor that uses direct current (DC) to produce mechanical force. The most common types rely on magnetic forces produced by currents in the coils. Nearly all types of DC motors have some internal mechanism, either electromechanical or electronic, to periodically change the direction of current in part of the motor.
DC motors were the first form of motors widely used, as they could be powered from existing direct-current lighting power distribution systems. A DC motor's speed can be controlled over a wide range, using either a variable supply voltage or by changing the strength of current in its field windings. Small DC motors are used in tools, toys, and appliances. The universal motor, a lightweight brushed motor used for portable power tools and appliances can operate on direct current and alternating current. Larger DC motors are currently used in propulsion of electric vehicles, elevator and hoists, and in drives for steel rolling mills. The advent of power electronics has made replacement of DC motors with AC motors possible in many applications.
Electromagnetic motors
A coil of wire with a current running through it generates an electromagnetic field aligned with the center of the coil. The direction and magnitude of the magnetic field produced by the coil can be changed with the direction and magnitude of the current flowing through it.
A simple DC motor has a stationary set of magnets in the stator and an armature with one or more windings of insulated wire wrapped around a soft iron core that concentrates the magnetic field. The windings usually have multiple turns around the core, and in large motors there can be several parallel current paths. The ends of the wire winding are connected to a commutator. The commutator allows each armature coil to be energized in turn and connects the rotating coils with the external power supply through brushes. (Brushless DC motors have electronics that switch the DC current to each coil on and off and have no brushes.)
The total amount of current sent to the coil, the coil's size, and what it is wrapped around determine the strength of the electromagnetic field created.
The sequence of turning a particular coil on or off dictates what direction the effective electromagnetic fields are pointed. By turning on and off coils in sequence, a rotating magnetic field can be created. These rotating magnetic fields interact with the magnetic fields of the magnets (permanent or electromagnets) in the stationary part of the motor (stator) to create a torque on the armature which causes it to rotate. In some DC motor designs, the stator fields use electromagnets to create their magnetic fields which allows greater control over the motor.
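The torque on a single armature coil in a uniform stator field follows the standard current-loop relation τ = N·B·I·A·sin θ. A small sketch with invented, merely illustrative numbers:

```python
import math

def coil_torque(n_turns: int, b_tesla: float, current_a: float,
                area_m2: float, angle_rad: float) -> float:
    """Torque on a planar current loop in a uniform magnetic field:
    tau = N * B * I * A * sin(theta)."""
    return n_turns * b_tesla * current_a * area_m2 * math.sin(angle_rad)

# 200 turns, 0.1 T field, 2 A, a 3 cm x 5 cm coil, at the position of
# maximum torque (field perpendicular to the coil's normal):
print(round(coil_torque(200, 0.1, 2.0, 0.03 * 0.05, math.pi / 2), 3))  # 0.06 N*m
```

The commutator's role is to reverse the coil current as θ passes through zero, so that the torque keeps the same sign throughout the rotation.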
At high power levels, DC motors are almost always cooled using forced air.
Different numbers of stator and armature fields, as well as how they are connected, provide different inherent speed and torque regulation characteristics. The speed of a DC motor can be controlled by changing the voltage applied to the armature. Variable resistance in the armature circuit or field circuit allows speed control. Modern DC motors are often controlled by power electronics systems which adjust the voltage by "chopping" the DC current into on and off cycles which have an effective lower voltage.
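Both control ideas can be put into one back-of-the-envelope sketch using the usual first-order DC motor model V = k_e·ω + I·R; every parameter value below is invented for illustration:

```python
def chopped_voltage(v_supply: float, duty_cycle: float) -> float:
    """Average armature voltage produced by PWM 'chopping'."""
    return duty_cycle * v_supply

def steady_state_speed(v_avg: float, i_a: float, r_a: float, k_e: float) -> float:
    """From V = k_e*omega + I*R: omega = (V - I*R) / k_e, in rad/s."""
    return (v_avg - i_a * r_a) / k_e

v = chopped_voltage(24.0, 0.5)  # a 24 V supply chopped at 50% duty -> 12 V average
print(round(steady_state_speed(v, 1.0, 0.8, 0.05), 1))  # ~224.0 rad/s
```

Halving the duty cycle roughly halves the no-load speed, which is the behaviour the "chopping" description above implies.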
Since the series-wound DC motor develops its highest torque at low speed, it is often used in traction applications such as electric locomotives and trams. The introduction of DC motors to run machinery, together with an electrical grid system to supply them, starting in the 1870s, helped drive the Second Industrial Revolution. DC motors can operate directly from rechargeable batteries, providing the motive power for the first electric vehicles and today's hybrid cars and electric cars, as well as driving a host of cordless tools. Today DC motors are still found in applications as small as toys and disk drives, or in sizes large enough to operate steel rolling mills and paper machines. Large DC motors with separately excited fields were generally used with winder drives for mine hoists, for high torque as well as smooth speed control using thyristor drives. These are now being replaced with large AC motors with variable frequency drives.
If external mechanical power is applied to a DC motor it acts as a DC generator, a dynamo. This feature is used to slow down hybrid and electric cars while recharging their batteries, or to return electricity to the grid when a streetcar or electric train slows down; on hybrid and electric cars this process is called regenerative braking. Diesel-electric locomotives also use their DC motors as generators to slow down, but dissipate the energy in resistor stacks. Newer designs add large battery packs to recapture some of this energy.
Commutation
Brushed
The brushed DC electric motor generates torque directly from DC power supplied to the motor by using internal commutation, stationary magnets (permanent or electromagnets), and rotating electromagnets.
Advantages of a brushed DC motor include low initial cost, high reliability, and simple control of motor speed. Disadvantages are high maintenance and a short lifespan in high-intensity uses. Maintenance involves regularly replacing the carbon brushes and springs which carry the electric current, as well as cleaning or replacing the commutator. These components are necessary for transferring electrical power from outside the motor to the spinning wire windings of the rotor inside the motor.
Brushes are usually made of graphite or carbon, sometimes with added dispersed copper to improve conductivity. In use, the soft brush material wears to fit the diameter of the commutator, and continues to wear. A brush holder has a spring to maintain pressure on the brush as it shortens. For brushes intended to carry more than an ampere or two, a flying lead will be molded into the brush and connected to the motor terminals. Very small brushes may rely on sliding contact with a metal brush holder to carry current into the brush, or may rely on a contact spring pressing on the end of the brush. The brushes in very small, short-lived motors, such as are used in toys, may be made of a folded strip of metal that contacts the commutator.
Brushless
Typical brushless DC motors use one or more permanent magnets in the rotor and electromagnets on the motor housing for the stator. A motor controller converts DC to AC. This design is mechanically simpler than that of brushed motors because it eliminates the complication of transferring power from outside the motor to the spinning rotor. The motor controller can sense the rotor's position via Hall effect sensors or similar devices and can precisely control the timing, phase, etc., of the current in the stator coils to optimize torque, conserve power, regulate speed, and even apply some braking. Advantages of brushless motors include long life span, little or no maintenance, and high efficiency. Disadvantages include high initial cost and more complicated motor speed controllers. Such brushless motors are sometimes referred to as "synchronous motors", although they have no external power supply to be synchronized with, as would be the case with normal AC synchronous motors.
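The position-based switching can be pictured as a six-step (trapezoidal) commutation table. The sketch below is a minimal illustration; the particular Hall-state-to-phase mapping is an assumed convention that a real controller would have to match to its motor's sensor alignment.

```python
# Map each valid 3-bit Hall-sensor reading to the pair of phases to energize.
# This is one common convention, assumed here for illustration only.
COMMUTATION = {
    0b001: ("A", "B"), 0b011: ("A", "C"), 0b010: ("B", "C"),
    0b110: ("B", "A"), 0b100: ("C", "A"), 0b101: ("C", "B"),
}

def drive(hall_state):
    """Return which phase to drive high and which low for a Hall reading."""
    high, low = COMMUTATION[hall_state]
    return f"phase {high} high, phase {low} low"

# Stepping through the six states in order rotates the stator field
# by one electrical revolution.
for state in (0b001, 0b011, 0b010, 0b110, 0b100, 0b101):
    print(f"{state:03b} -> {drive(state)}")
```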
Uncommutated
Other types of DC motors require no commutation.
Homopolar motor – A homopolar motor has a magnetic field along the axis of rotation and an electric current that at some point is not parallel to the magnetic field. The name homopolar refers to the absence of polarity change. Homopolar motors necessarily have a single-turn coil, which limits them to very low voltages. This has restricted the practical application of this type of motor.
Ball bearing motor – A ball bearing motor is an unusual electric motor that consists of two ball bearing-type bearings, with the inner races mounted on a common conductive shaft, and the outer races connected to a high current, low voltage power supply. An alternative construction fits the outer races inside a metal tube, while the inner races are mounted on a shaft with a non-conductive section (e.g. two sleeves on an insulating rod). This method has the advantage that the tube will act as a flywheel. The direction of rotation is determined by the initial spin which is usually required to get it going.
Permanent magnet stators
A permanent magnet (PM) motor does not have a field winding on the stator frame, instead relying on PMs to provide the magnetic field against which the rotor field interacts to produce torque. Compensation windings in series with the armature may be used on large motors to improve commutation under load. Because this field is fixed, it cannot be adjusted for speed control. PM fields (stators) are convenient in miniature motors to eliminate the power consumption of the field winding. Most larger DC motors are of the "dynamo" type, which have stator windings. Historically, PMs could not be made to retain high flux if they were disassembled; field windings were more practical to obtain the needed amount of flux. However, large PMs are costly, as well as dangerous and difficult to assemble; this favors wound fields for large machines.
To minimize overall weight and size, miniature PM motors may use high energy magnets made with neodymium or other strategic elements; most such are neodymium-iron-boron alloy. With their higher flux density, electric machines with high-energy PMs are at least competitive with all optimally designed singly fed synchronous and induction electric machines. Miniature motors resemble the structure in the illustration, except that they have at least three rotor poles (to ensure starting, regardless of rotor position) and their outer housing is a steel tube that magnetically links the exteriors of the curved field magnets.
Wound stators
There are three types of electrical connections between the stator and rotor possible for DC electric motors: series, shunt/parallel and compound (various blends of series and shunt/parallel) and each has unique speed/torque characteristics appropriate for different loading torque profiles/signatures.
Series connection
A series DC motor connects the armature and field windings in series with a common DC power source. The motor speed varies as a non-linear function of load torque and armature current; current is common to both the stator and rotor, yielding current-squared (I²) behavior. A series motor has very high starting torque and is commonly used for starting high inertia loads, such as trains, elevators or hoists. This speed/torque characteristic is useful in applications such as dragline excavators, where the digging tool moves rapidly when unloaded but slowly when carrying a heavy load.
A series motor should never be started with no load. With no mechanical load, the current is low, so the field flux produced by the field winding is weak, and the armature must turn faster to produce sufficient counter-electromotive force (counter-EMF) to balance the supply voltage. The motor can be damaged by overspeed; this is called a runaway condition.
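The runaway behavior can be seen in a toy model of a series motor in which the field flux is proportional to the armature current, so torque scales as c·I² and back-EMF as c·I·ω; solving the voltage balance V = I·R + c·I·ω for decreasing load current shows the speed diverging. The constants below are illustrative assumptions only.

```python
# Toy series-motor model: flux ~ I, so torque ~ c*I**2 and back-EMF ~ c*I*omega.
# As the load (and hence the current) drops, the speed grows without bound.
V, R, c = 120.0, 1.0, 0.05   # supply volts, winding ohms, motor constant (assumed)

for current in (40.0, 20.0, 10.0, 5.0, 1.0):
    omega = (V - current * R) / (c * current)   # from V = I*R + c*I*omega
    torque = c * current ** 2
    print(f"I = {current:5.1f} A   torque ~ {torque:7.1f}   speed = {omega:8.1f} rad/s")
```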
Series motors called universal motors can be used on alternating current. Since the armature voltage and the field direction reverse at the same time, torque continues to be produced in the same direction. However, they run at lower speed and with lower torque on an AC supply than on DC, due to the reactance voltage drop that is present with AC but not with DC. Since the speed is not related to the line frequency, universal motors can develop higher-than-synchronous speeds, making them lighter than induction motors of the same rated mechanical output. This is a valuable characteristic for hand-held power tools. Universal motors operating on commercial utility supplies are usually of small capacity, not more than about 1 kW output. However, much larger universal motors were used for electric locomotives, fed by special low-frequency traction power networks to avoid problems with commutation under heavy and varying loads.
Shunt connection
A shunt DC motor connects the armature and field windings in parallel or shunt with a common DC power source. This type of motor has good speed regulation even as the load varies, but does not have the starting torque of a series DC motor. It is typically used for industrial, adjustable speed applications, such as machine tools, winding/unwinding machines and tensioners.
Compound connection
A compound DC motor connects the armature and field windings in a shunt and a series combination to give it characteristics of both a shunt and a series DC motor. This motor is used when both a high starting torque and good speed regulation are needed. The motor can be connected in two arrangements: cumulatively or differentially. Cumulative compound motors connect the series field to aid the shunt field, which provides higher starting torque but less speed regulation. Differential compound DC motors have good speed regulation and are typically operated at constant speed.
See also
Cogging torque
Ward Leonard control
Torque and speed of a DC motor
Armature Controlled DC Motor
References
External links
Make a working model of dc motor at sci-toys.com
How to select a DC motor at MICROMO (archived page)
DC motor model in Simulink at File Exchange - MATLAB Central
Electric motors | DC motor | [
"Technology",
"Engineering"
] | 2,655 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
760,027 | https://en.wikipedia.org/wiki/Frost%20line | The frost line—also known as frost depth or freezing depth—is most commonly the depth to which the groundwater in soil is expected to freeze. The frost depth depends on the climatic conditions of an area, the heat transfer properties of the soil and adjacent materials, and on nearby heat sources. For example, snow cover and asphalt insulate the ground, and homes can heat the ground (see also heat island). The line varies by latitude; it is deeper closer to the poles. The maximum frost depth observed in the contiguous United States ranges from zero to about eight feet (2.4 m). Below that depth, the temperature varies, but is always above freezing (0 °C).
Alternatively, in Arctic and Antarctic locations the freezing depth is so deep that it becomes year-round permafrost, and the term "thaw depth" is used instead. Finally, in tropical regions, frost line may refer to the vertical geographic elevation below which frost does not occur.
Frost front refers to the varying position of the frost line during seasonal periods of freezing and thawing.
Building codes
Building codes take frost depth into account because of frost heaving, which can damage buildings by moving their foundations. For this reason, foundations are generally built below the frost depth. Water and sewage pipes are typically buried below the frost line to prevent them from freezing. Alternatively, pipes may be insulated or heated using heat tape or similar products to allow for shallower depths. Due to additional cost, this method is typically only used where deeper trenching is not an option due to utility conflicts, shallow bedrock, or other conditions that make deeper excavation infeasible.
There are many ways to predict frost depth, including factors related to air temperature, soil temperature, and soil properties.
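One classical estimate of this kind is the Stefan equation, which predicts frost penetration from the surface freezing index and two soil properties. The sketch below is a minimal illustration; the conductivity and latent-heat values are assumed for demonstration, not design values.

```python
import math

def stefan_frost_depth(freezing_index_degC_days, k_frozen=1.7, latent_heat=1.5e8):
    """Stefan-equation estimate of maximum frost penetration depth (m).

    freezing_index_degC_days: surface freezing index (degree-Celsius-days below 0 degC)
    k_frozen: thermal conductivity of the frozen soil, W/(m*K) (assumed value)
    latent_heat: volumetric latent heat of the soil moisture, J/m^3 (assumed value)
    """
    freezing_index = freezing_index_degC_days * 86_400   # degC*days -> degC*seconds
    return math.sqrt(2.0 * k_frozen * freezing_index / latent_heat)

print(stefan_frost_depth(400.0))   # ~0.9 m for an assumed 400 degC-day winter
```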
Sample frost lines for various locations
United States
Ohio (2018)
Columbus, Ohio: 32 inches (0.8 m)
Minnesota (2007):
Northern counties:
Southern counties:
Washington (2023)
Edmonds, WA: 18 inches (0.46 m)
Spokane, WA: 24 inches (0.61 m)
Canada: estimates can be made using a table based on freezing-index degree-days.
Ontario: Map of frost penetration depths for Southern Ontario.
Ottawa:
Windsor:
References
Glaciology
Ground freezing
Soil mechanics | Frost line | [
"Physics"
] | 444 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
760,392 | https://en.wikipedia.org/wiki/Cadmium%20arsenide | Cadmium arsenide (Cd3As2) is an inorganic semimetal in the II-V family. It exhibits the Nernst effect.
Properties
Thermal
Cd3As2 dissociates between 220 and 280 °C according to the reaction
2 Cd3As2(s) → 6 Cd(g) + As4(g)
An energy barrier was found for the nonstoichiometric vaporization of arsenic due to the irregularity of the partial pressures with temperature. The range of the energy gap is from 0.5 to 0.6 eV. Cd3As2 melts at 716 °C and changes phase at 615 °C.
Phase transition
Pure cadmium arsenide undergoes several phase transitions at high temperatures, giving phases labeled α (stable), α’, α” (metastable), and β. At 593 °C the polymorphic transition α → β occurs.
α-Cd3As2 ↔ α’-Cd3As2 occurs at ~500 K.
α’-Cd3As2 ↔ α’’-Cd3As2 occurs at ~742 K and is a regular first order phase transition with marked hysteresis loop.
α”-Cd3As2 ↔ β-Cd3As2 occurs at 868 K.
Single crystal x-ray diffraction was used to determine the lattice parameters of Cd3As2 between 23 and 700 °C. Transition α → α′ occurs slowly and therefore is most likely an intermediate phase. Transition α′ → α″ occurs much faster than α → α′ and has very small thermal hysteresis. This transition results in a change in the fourfold axis of the tetragonal cell, causing crystal twinning. The width of the loop is independent of the rate of heating although it becomes narrower after several temperature cycles.
Electronic
The compound cadmium arsenide has a lower vapor pressure (0.8 atm) than either cadmium or arsenic separately. Cadmium arsenide does not decompose when it is vaporized and re-condensed. Carrier concentrations in Cd3As2 are usually (1–4)×10^18 electrons/cm^3. Despite these high carrier concentrations, the electron mobilities are also very high (up to 10,000 cm^2/(V·s) at room temperature).
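Those two numbers fix the expected conductivity through the Drude relation σ = n·e·μ. The short sketch below evaluates it; the choice of n = 2×10^18 cm^-3 is an illustrative mid-range assumption from the values quoted above.

```python
E_CHARGE = 1.602e-19       # elementary charge, C

n = 2e18 * 1e6             # carrier density: 2e18 cm^-3 -> m^-3 (assumed mid-range value)
mu = 10_000 * 1e-4         # mobility: 10,000 cm^2/(V s) -> m^2/(V s)

sigma = n * E_CHARGE * mu  # Drude conductivity, S/m
print(f"sigma ~ {sigma:.2e} S/m, resistivity ~ {1/sigma:.2e} ohm*m")
```

This works out to roughly 3×10^5 S/m, i.e., a resistivity of a few microohm-metres, consistent with semimetallic behavior.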
In 2014 Cd3As2 was shown to be a semimetal material analogous to graphene that exists in a 3D form that should be much easier to shape into electronic devices. Three-dimensional (3D) topological Dirac semimetals (TDSs) are bulk analogues of graphene that also exhibit non-trivial topology in their electronic structure, sharing similarities with topological insulators. Moreover, a TDS can potentially be driven into other exotic phases (such as Weyl semimetals, axion insulators and topological superconductors). Angle-resolved photoemission spectroscopy revealed a pair of 3D Dirac fermions in Cd3As2. Compared with other 3D TDSs, for example β-cristobalite BiO2 and Na3Bi, Cd3As2 is stable and has much higher Fermi velocities. In situ doping was used to tune its Fermi energy.
Conducting
Cadmium arsenide is a II-V semiconductor showing degenerate n-type conductivity, with a large mobility, low effective mass and a highly non-parabolic conduction band; it behaves as a narrow-gap semiconductor. It displays an inverted band structure, and the optical energy gap, Eg, is less than 0. When deposited by thermal evaporation, cadmium arsenide displayed the Schottky (thermionic emission) and Poole–Frenkel effects at high electric fields.
Magnetoresistance
Cadmium arsenide shows very strong quantum oscillations in resistance even at the relatively high temperature of 100 K. This makes it useful for testing cryomagnetic systems, as the presence of such a strong signal is a clear indicator of function.
Preparation
Cadmium arsenide can be prepared as an amorphous semiconductive glass. According to Hiscocks and Elliot, cadmium arsenide was prepared from cadmium metal with a purity of 6N, supplied by Koch-Light Laboratories Limited; Hoboken supplied β-arsenic with a purity of 99.999%. Stoichiometric proportions of cadmium and arsenic were heated together. Separation was difficult and lengthy because the ingots stuck to the silica and broke, so liquid-encapsulated Stockbarger growth was developed. Crystals are pulled from volatile melts in liquid encapsulation: the melt is covered by a layer of inert liquid, usually B2O3, and an inert gas pressure greater than the equilibrium vapor pressure is applied. This eliminates evaporation from the melt and allows seeding and pulling to occur through the B2O3 layer.
Crystal structure
The unit cell of Cd3As2 is tetragonal. The arsenic ions are cubic close packed and the cadmium ions are tetrahedrally coordinated. The vacant tetrahedral sites prompted research by von Stackelberg and Paulus (1935), who determined the primary structure. Each arsenic ion is surrounded by cadmium ions at six of the eight corners of a distorted cube, with the two vacant sites lying on a diagonal.
The crystalline structure of cadmium arsenide is very similar to that of zinc phosphide (Zn3P2), zinc arsenide (Zn3As2) and cadmium phosphide (Cd3P2). These compounds of the Zn-Cd-P-As quaternary system exhibit full continuous solid-solution.
Nernst effect
Cadmium arsenide is used in infrared detectors using the Nernst effect, and in thin-film dynamic pressure sensors. It can be also used to make magnetoresistors, and in photodetectors.
Cadmium arsenide can be used as a dopant for HgCdTe.
References
External links
National Pollutant Inventory – Cadmium and compounds
Arsenides
Cadmium compounds
II-V compounds | Cadmium arsenide | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,272 | [
"Inorganic compounds",
"II-V compounds",
"Materials",
"Condensed matter physics",
"Semimetals",
"Matter"
] |
760,565 | https://en.wikipedia.org/wiki/Solar%20chimney | A solar chimney, often referred to as a thermal chimney, is a way of improving the natural ventilation of buildings by using convection of air heated by passive solar energy. A simple description of a solar chimney is that of a vertical shaft utilizing solar energy to enhance the natural stack ventilation through a building.
The solar chimney has been in use for centuries, particularly in the Middle East and Near East by the Persians, as well as in Europe by the Romans.
Description
In its simplest form, the solar chimney consists of a black-painted chimney. During the day solar energy heats the chimney and the air within it, creating an updraft of air in the chimney. The suction created at the chimney's base can be used to ventilate and cool the building below. In most parts of the world it is easier to harness wind power for such ventilation as with a windcatcher, but on hot windless days a solar chimney can provide ventilation where otherwise there would be none.
There are however a number of solar chimney variations. The basic design elements of a solar chimney are:
The solar collector area: This can be located in the top part of the chimney or can include the entire shaft. The orientation, type of glazing, insulation and thermal properties of this element are crucial for harnessing, retaining and utilizing solar gains.
The main ventilation shaft: The location, height, cross section and the thermal properties of this structure are also very important.
The inlet and outlet air apertures: The sizes, location as well as aerodynamic aspects of these elements are also significant.
A principle has been proposed for solar power generation, using a large greenhouse at the base rather than relying solely on heating the chimney itself. (For further information on this issue, see Solar updraft tower.)
Solar chimneys are painted black so that they absorb the sun's heat more effectively. When the air inside the chimney is heated, it rises and pulls cooler air up from under the ground via heat-exchange tubes.
Solar chimney and sustainable architecture
Solar chimneys, also called heat chimneys or heat stacks, can also be used in architectural settings to decrease the energy used by mechanical systems (systems that heat and cool the building through mechanical means). For decades, air conditioning and mechanical ventilation have been the standard method of environmental control in many building types, especially offices, in developed countries. Pollution and the reallocation of energy supplies have led to a new environmental approach in building design. Innovative technologies along with bioclimatic principles and traditional design strategies are often combined to create new and potentially successful design solutions. The solar chimney is one of these concepts currently explored by scientists as well as designers, mostly through research and experimentation.
A solar chimney can serve many purposes. Direct sunlight warms air inside the chimney causing it to rise out the top and drawing air in from the bottom. This drawing of air can be used to ventilate a home or office, to draw air through a geothermal heat exchange, or to ventilate only a specific area such as a composting toilet.
Natural ventilation can be created by providing vents in the upper level of a building to allow warm air to rise by convection and escape to the outside. At the same time cooler air can be drawn in through vents at the lower level. Trees may be planted on that side of the building to provide shade for cooler outside air.
This natural ventilation process can be augmented by a solar chimney. The chimney has to be higher than the roof level, and has to be constructed on the wall facing the direction of the Sun. Absorption of heat from the Sun can be increased by using a glazed surface on the side facing the Sun. Heat absorbing material can be used on the opposing side. The size of the heat-absorbing surface is more important than the diameter of the chimney. A large surface area allows for more effective heat exchange with the air necessary for heating by solar radiation. Heating of the air within the chimney will enhance convection, and hence airflow through the chimney. Openings of the vents in the chimney should face away from the direction of the prevailing wind.
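The driving potential described here is the stack effect, for which a standard natural-ventilation estimate of the volumetric flow is Q = C_d·A·sqrt(2·g·H·(T_i − T_o)/T_i). The sketch below evaluates it; the discharge coefficient and temperatures are illustrative assumptions.

```python
import math

def stack_flow(area_m2, height_m, t_inside_K, t_outside_K, c_d=0.6):
    """Volumetric air flow (m^3/s) driven by the stack effect.

    area_m2: free inlet/outlet area; height_m: inlet-to-outlet height;
    t_inside_K / t_outside_K: chimney and ambient air temperatures.
    c_d is an assumed discharge coefficient for sharp-edged openings.
    """
    g = 9.81  # m/s^2
    return c_d * area_m2 * math.sqrt(
        2.0 * g * height_m * (t_inside_K - t_outside_K) / t_inside_K
    )

# A 4 m tall chimney with 0.5 m^2 openings, air heated from 30 degC to 50 degC:
print(stack_flow(0.5, 4.0, 323.0, 303.0))   # ~0.66 m^3/s
```

Doubling the chimney height or the temperature rise increases the flow only by a factor of sqrt(2), which is one reason solar chimney designs emphasize large collector areas as much as height.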
To further maximize the cooling effect, the incoming air may be led through underground ducts before it is allowed to enter the building. The solar chimney can be improved by integrating it with a Trombe wall. The added advantage of this design is that the system may be reversed during the cold season, providing solar heating instead.
A variation of the solar chimney concept is the solar attic. In a hot sunny climate the attic space is often blazingly hot in the summer. In a conventional building this presents a problem as it leads to the need for increased air conditioning. By integrating the attic space with a solar chimney, the hot air in the attic can be put to work. It can help the convection in the chimney, improving ventilation.
The use of a solar chimney may benefit natural ventilation and passive cooling strategies of buildings thus help reduce energy use, CO2 emissions and pollution in general. Potential benefits regarding natural ventilation and use of solar chimneys are:
improved ventilation rates on still, hot days
reduced reliance on wind and wind driven ventilation
improved control of air flow through a building
greater choice of air intake (i.e. leeward side of building)
improved air quality and reduced noise levels in urban areas
increased night time ventilation rates
ventilation of narrow, small spaces with minimal exposure to external elements
Potential benefits regarding passive cooling may include:
improved passive cooling during warm season (mostly on still, hot days)
improved night cooling rates
enhanced performance of thermal mass (cooling, cool storage)
improved thermal comfort (improved air flow control, reduced draughts)
Precedent Study: The Environmental Building
The Building Research Establishment (BRE) office building in Garston, Watford, United Kingdom, incorporates solar-assisted passive ventilation stacks as part of its ventilation strategy.
Designed by architects Feilden Clegg Bradley, the BRE offices aim to reduce energy consumption and CO2 emissions by 30% from current best practice guidelines and sustain comfortable environmental conditions without the use of air conditioning. The passive ventilation stacks, solar shading, and hollow concrete slabs with embedded under floor cooling are key features of this building. Ventilation and heating systems are controlled by the building management system (BMS) while a degree of user override is provided to adjust conditions to occupants' needs.
The building utilizes five vertical shafts as an integral part of the ventilation and cooling strategy. The main components of these stacks are a south facing glass-block wall, thermal mass walls and stainless steel round exhausts rising a few meters above roof level. The chimneys are connected to the curved hollow concrete floor slabs which are cooled via night ventilation. Pipes embedded in the floor can provide additional cooling utilizing groundwater.
On warm windy days air is drawn in through passages in the curved hollow concrete floor slabs. Stack ventilation naturally rising out through the stainless steel chimneys enhances the air flow through the building. The movement of air across the chimney tops enhances the stack effect.
During warm, still days, the building relies mostly on the stack effect while air is taken from the shady north side of the building. Low-energy fans in the tops of the stacks can also be used to improve airflow.
Overnight, control systems enable ventilation paths through the hollow concrete slab removing the heat stored during the day, which then remains cold for the following day. The exposed curved ceiling gives more surface area than a flat ceiling would, acting as a heat sink, again providing summer cooling.
Research based on actual performance measurements of the passive stacks found that they enhanced the cooling ventilation of the space during warm and still days and may also have the potential to assist night-time cooling due to their thermally massive structure.
Passive down-draft cool tower
A technology closely related to the solar chimney is the evaporative down-draft cooltower. In areas with a hot, arid climate this approach may contribute to a sustainable way to provide air conditioning for buildings.
The principle is to allow water to evaporate at the top of a tower, either by using evaporative cooling pads or by spraying water. Evaporation cools the incoming air, causing a downdraft of cool air that will bring down the temperature inside the building. Airflow can be increased by using a solar chimney on the opposite side of the building to help in venting hot air to the outside. This concept has been used for the Visitor Center of Zion National Park. The Visitor Center was designed by the High Performance Buildings Research of the National Renewable Energy Laboratory (NREL).
The principle of the downdraft cooltower has been proposed for solar power generation as well. (See Energy tower for more information.)
Evaporation of moisture from the pads on top of the Toguna buildings built by the Dogon people of Mali contributes to the coolness felt by the men who rest underneath. The women's buildings on the outskirts of town function as more conventional solar chimneys.
See also
Ab anbar
Autonomous building
Earth cooling tubes
Evaporative cooling
HVAC
Manitoba Hydro Place
Natural ventilation
Passive house
Stack effect
Toguna
Ventilation (architecture)
Yakhchāl
References
Sources
External links
Solar Chimney –
Solar Innovation Ideas – Victorian Solar Innovation Initiative
Architectural Environmental Analysis – A guide to environmental design
Passive Solar Heating & Cooling Manual
Appropriate technology
Architectural elements
Building engineering
Chimneys
Convection
Heating, ventilation, and air conditioning
Low-energy building
Solar architecture
Solar-powered devices
Sustainable building
Sustainable technologies | Solar chimney | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,893 | [
"Transport phenomena",
"Sustainable building",
"Physical phenomena",
"Building engineering",
"Convection",
"Construction",
"Architectural elements",
"Civil engineering",
"Thermodynamics",
"Components",
"Architecture"
] |
760,606 | https://en.wikipedia.org/wiki/Anomer | In carbohydrate chemistry, a pair of anomers is a pair of near-identical stereoisomers (diastereomers) that differ only at the anomeric carbon, the carbon atom that bears the aldehyde or ketone functional group in the sugar's open-chain form. However, in order for anomers to exist, the sugar must be in its cyclic form, since in the open-chain form the anomeric carbon atom is planar and thus achiral. More formally stated, then, an anomer is an epimer at the hemiacetal/hemiketal carbon atom in a cyclic saccharide. Anomerization is the process of conversion of one anomer to the other. As is typical for stereoisomeric compounds, different anomers have different physical properties, melting points and specific rotations.
Nomenclature
Anomers are designated alpha (α) or beta (β) according to the configurational relationship between the anomeric centre and the anomeric reference atom; hence they are relative stereodescriptors. The anomeric centre in hemiacetals is the anomeric carbon C-1; in hemiketals, it is the carbon derived from the carbonyl of the ketone (e.g. C-2 in D-fructose). In aldohexoses the anomeric reference atom is the stereocenter that is farthest from the anomeric carbon in the ring (the configurational atom, defining the sugar as D or L). For example, in α-D-glucopyranose the reference atom is C-5.
If in the cyclic Fischer projection the exocyclic oxygen atom at the anomeric centre is cis (on the same side) to the exocyclic oxygen attached to the anomeric reference atom (in the OH group) the anomer is α. If the two oxygens are trans (on different sides) the anomer is β.
Anomerization
Anomerization is the process of conversion of one anomer to the other. For reducing sugars, anomerization is referred to as mutarotation and occurs readily in solution and is catalyzed by acid and base. This reversible process typically leads to an anomeric mixture in which eventually an equilibrium is reached between the two single anomers.
The ratio of the two anomers is specific to each sugar. For example, regardless of the configuration of the starting D-glucose, a solution will gradually move towards being a mixture of approximately 64% β-D-glucopyranose and 36% α-D-glucopyranose. As the ratio changes, the optical rotation of the mixture changes; this phenomenon is called mutarotation.
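Because specific rotation is additive over the mixture's composition, the equilibrium rotation can be predicted from the anomer fractions. The sketch below does this using the literature specific rotations of the pure anomers (+112.2° for α-D-glucose, +18.7° for β-D-glucose).

```python
# Weighted average of the pure-anomer specific rotations at equilibrium.
x_alpha, x_beta = 0.36, 0.64          # equilibrium fractions from the text
rot_alpha, rot_beta = 112.2, 18.7     # specific rotations of the pure anomers, degrees

equilibrium_rotation = x_alpha * rot_alpha + x_beta * rot_beta
print(equilibrium_rotation)           # ~52.4 degrees; observed value is about +52.7
```

Following a solution's rotation toward this value over time is precisely the polarimetric measurement of mutarotation mentioned below.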
Mechanism of anomerization
Though the cyclic forms of sugars are usually heavily favoured, hemiacetals in aqueous solution are in equilibrium with their open-chain forms. In aldohexoses this equilibrium is established as the hemiacetal bond between C-1 (the carbon bound to two oxygens) and the C-5 oxygen. It is cleaved (forming the open-chain compound) and reformed (forming the cyclic compound). When the hemiacetal group is reformed, the OH group on C-5 may attack either of the two stereochemically distinct sides of the aldehyde group on C-1. Which side it attacks on determines whether the α- or β-anomer is formed.
Anomerization of glycosides typically occurs under acidic conditions. Typically, anomerization occurs through protonation of the exocyclic acetal oxygen, ionization to form an oxocarbenium ion with release of an alcohol, and nucleophilic attack by an alcohol on the reverse face of the oxocarbenium ion, followed by deprotonation.
Physical properties and stability
Anomers are different in structure, and thus have different stabilizing and destabilizing effects from each other. The major contributors to the stability of a certain anomer are:
The anomeric effect, which stabilizes the anomer that has an electron withdrawing group (typically an oxygen or nitrogen atom) in axial orientation on the ring. This effect is abolished in polar solvents such as water.
1,3-diaxial interactions, which usually destabilize the anomer that has the anomeric group in an axial orientation on the ring. This effect is especially noticeable in pyranoses and other six-membered ring compounds. This is a major factor in water.
Hydrogen bonds between the anomeric group and other groups on the ring, leading to stabilization of the anomer.
Dipolar repulsion between the anomeric group and other groups on the ring, leading to destabilization of the anomer.
For D-glucopyranose, the β-anomer is the more stable anomer in water. For D-mannopyranose, the α-anomer is the more stable anomer.
Because anomers are diastereomers of each other, they often differ in physical and chemical properties. One of the most important physical properties that is used to study anomers is the specific rotation, which can be monitored by polarimetry.
See also
Monosaccharide nomenclature
Stereochemistry
References
External links
Carbohydrate chemistry
Carbohydrates
Stereochemistry | Anomer | [
"Physics",
"Chemistry"
] | 1,159 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Stereochemistry",
"Spacetime",
"Organic compounds",
"Space",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
8,075,001 | https://en.wikipedia.org/wiki/K-set%20%28geometry%29 | In discrete geometry, a k-set of a finite point set S in the Euclidean plane is a subset of k elements of S that can be strictly separated from the remaining points by a line. More generally, in Euclidean space of higher dimensions, a k-set of a finite point set is a subset of k elements that can be separated from the remaining points by a hyperplane. In particular, when k = n/2 (where n is the size of S), the line or hyperplane that separates a k-set from the rest of S is a halving line or halving plane.
The k-sets of a set of points in the plane are related by projective duality to the k-levels in an arrangement of lines. The k-level in an arrangement of lines in the plane is the curve consisting of the points that lie on one of the lines and have exactly k lines below them. Discrete and computational geometers have also studied levels in arrangements of more general kinds of curves and surfaces.
Combinatorial bounds
It is of importance in the analysis of geometric algorithms to bound the number of k-sets of a planar point set, or equivalently the number of k-levels of a planar line arrangement, a problem first studied by Lovász and Erdős et al. The best known upper bound for this problem is O(nk^{1/3}), as was shown by Tamal Dey using the crossing number inequality of Ajtai, Chvátal, Newborn, and Szemerédi. However, the best known lower bound is far from Dey's upper bound: it is n·2^{Ω(√(log k))}, that is, n·c^{√(log k)} for some constant c, as shown by Tóth.
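For small instances the number of k-sets can be counted directly. A separating line can always be translated and rotated until it passes through two of the points, so enumerating all point pairs and the four subsets obtained by perturbing the line through them finds every k-set. The following is a minimal O(n³) sketch for points in general position; the sample coordinates are arbitrary.

```python
def count_k_sets(points, k):
    """Count the distinct k-sets of a planar point set in general position."""
    n = len(points)
    found = set()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            (ax, ay), (bx, by) = points[i], points[j]
            # indices of points strictly to the left of the directed line a -> b
            left = frozenset(
                m for m, (px, py) in enumerate(points)
                if m != i and m != j
                and (bx - ax) * (py - ay) - (by - ay) * (px - ax) > 0
            )
            # perturbing the line through a and b yields four candidate subsets
            for cand in (left, left | {i}, left | {j}, left | {i, j}):
                if len(cand) == k:
                    found.add(frozenset(cand))
    return len(found)

pts = [(0, 0), (4, 1), (2, 3), (1, 5), (5, 4), (3, -2)]
print(count_k_sets(pts, 3))   # number of halving (3-)sets of these 6 points
```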
In three dimensions, the best upper bound known is O(nk^{3/2}), and the best lower bound known is nk·2^{Ω(√(log k))}.
For points in three dimensions that are in convex position, that is, points that are the vertices of some convex polytope, the number of k-sets is Θ((n − k)k), which follows from arguments used for bounding the complexity of kth-order Voronoi diagrams.
For the case when k = n/2 (halving lines), the maximum number of combinatorially distinct lines through two points of S that bisect the remaining points is known exactly for small values of n.
Bounds have also been proven on the number of (≤k)-sets, where a (≤k)-set is a j-set for some j ≤ k. In two dimensions, the maximum number of (≤k)-sets is exactly nk, while in d dimensions the bound is O(n^{⌊d/2⌋} k^{⌈d/2⌉}).
Construction algorithms
Edelsbrunner and Welzl first studied the problem of constructing all k-sets of an input point set, or dually of constructing the k-level of an arrangement. The k-level version of their algorithm can be viewed as a plane sweep algorithm that constructs the level in left-to-right order. Viewed in terms of k-sets of point sets, their algorithm maintains a dynamic convex hull for the points on each side of a separating line, repeatedly finds a bitangent of these two hulls, and moves each of the two points of tangency to the opposite hull. Chan surveys subsequent results on this problem, and shows that it can be solved in time proportional to Dey's O(nk^{1/3}) bound on the complexity of the k-level.
Agarwal and Matoušek describe algorithms for efficiently constructing an approximate level; that is, a curve that passes between the (k − εn)-level and the (k + εn)-level for some small approximation parameter ε. They show that such an approximation can be found, consisting of a number of line segments that depends only on ε and not on n or k.
Matroid generalizations
The planar k-level problem can be generalized to one of parametric optimization in a matroid: one is given a matroid in which each element is weighted by a linear function of a parameter λ, and must find the minimum weight basis of the matroid for each possible value of λ. If one graphs the weight functions as lines in a plane, the k-level of the arrangement of these lines graphs, as a function of λ, the weight of the largest element in an optimal basis in a uniform matroid, and Dey showed that his bound on the complexity of the k-level could be generalized to count the number of distinct optimal bases of any matroid with n elements and rank k.
For instance, the same upper bound holds for counting the number of different minimum spanning trees formed in a graph with m edges and n vertices, when the edges have weights that vary linearly with a parameter λ. This parametric minimum spanning tree problem has been studied by various authors and can be used to solve other bicriterion spanning tree optimization problems.
However, the best known lower bound for the parametric minimum spanning tree problem is Ω(m α(n)), where α is the inverse Ackermann function, a weaker bound than that for the k-set problem. For more general matroids, Dey's upper bound has a matching lower bound.
Notes
References
External links
Halving lines and -sets, Jeff Erickson
The Open Problems Project, Problem 7: -sets
Discrete geometry
Matroid theory | K-set (geometry) | [
"Mathematics"
] | 953 | [
"Discrete geometry",
"Discrete mathematics",
"Matroid theory",
"Combinatorics"
] |
8,075,308 | https://en.wikipedia.org/wiki/Kasner%20metric | The Kasner metric (developed by and named for the American mathematician Edward Kasner in 1921) is an exact solution to Albert Einstein's theory of general relativity. It describes an anisotropic universe without matter (i.e., it is a vacuum solution). It can be written in any spacetime dimension and has strong connections with the study of gravitational chaos.
Metric and conditions
The metric in D spacetime dimensions is

ds² = −dt² + Σ_{j=1}^{D−1} t^{2p_j} (dx^j)²,

and contains D − 1 constants p_j, called the Kasner exponents. The metric describes a spacetime whose equal-time slices are spatially flat, however space is expanding or contracting at different rates in different directions, depending on the values of the p_j. Test particles in this metric whose comoving coordinates differ by Δx^j are separated by a physical distance t^{p_j} Δx^j.
The Kasner metric is an exact solution to Einstein's equations in vacuum when the Kasner exponents satisfy the following Kasner conditions,

Σ_{j=1}^{D−1} p_j = 1,
Σ_{j=1}^{D−1} p_j² = 1.

The first condition defines a plane, the Kasner plane, and the second describes a sphere, the Kasner sphere. The solutions (choices of p_j) satisfying the two conditions therefore lie on the sphere where the two intersect (sometimes confusingly also called the Kasner sphere). In D spacetime dimensions, the space of solutions therefore lies on a (D − 3)-dimensional sphere S^{D−3}.
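In 3+1 dimensions the solutions of these two conditions form a one-parameter family, conventionally written as p1 = −u/(1 + u + u²), p2 = (1 + u)/(1 + u + u²), p3 = u(1 + u)/(1 + u + u²). The short sketch below checks numerically that both Kasner conditions hold for any value of the parameter u.

```python
# Verify the Kasner conditions sum(p) = 1 and sum(p**2) = 1 for the
# standard one-parameter family of exponents in 3+1 dimensions.
def kasner_exponents(u):
    d = 1.0 + u + u * u
    return (-u / d, (1.0 + u) / d, u * (1.0 + u) / d)

for u in (0.5, 1.0, 3.0):
    p = kasner_exponents(u)
    print(u, p, sum(p), sum(x * x for x in p))   # last two values print as 1.0, 1.0
```

For u = 1 this gives the familiar exponents (−1/3, 2/3, 2/3), with one contracting and two expanding directions.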
Features
There are several noticeable and unusual features of the Kasner solution:
The volume of the spatial slices is always proportional to t. This is because their volume is proportional to √(−g), and

√(−g) = t^{p_1 + p_2 + ⋯ + p_{D−1}} = t,

where we have used the first Kasner condition. Therefore t → 0 can describe either a Big Bang or a Big Crunch, depending on the sense of t.
Isotropic expansion or contraction of space is not allowed. If the spatial slices were expanding isotropically, then all of the Kasner exponents must be equal, and therefore p_j = 1/(D − 1) to satisfy the first Kasner condition. But then the second Kasner condition cannot be satisfied, for

Σ_{j=1}^{D−1} p_j² = 1/(D − 1) < 1.
The Friedmann–Lemaître–Robertson–Walker metric employed in cosmology, by contrast, is able to expand or contract isotropically because of the presence of matter.
With a little more work, one can show that at least one Kasner exponent is always negative (unless we are at one of the solutions with a single p_j = 1 and the rest vanishing). Suppose we take the time coordinate t to increase from zero. Then this implies that while the volume of space is increasing like t, at least one direction (corresponding to the negative Kasner exponent) is actually contracting.
The Kasner metric is a solution to the vacuum Einstein equations, and so the Ricci tensor always vanishes for any choice of exponents satisfying the Kasner conditions. The full Riemann tensor vanishes only when a single p_j = 1 and the rest vanish, in which case the space is flat. The Minkowski metric can be recovered via the coordinate transformation t′ = t cosh x_j and x_j′ = t sinh x_j.
See also
BKL singularity
Mixmaster universe
Notes
References
Exact solutions in general relativity
Metric tensors | Kasner metric | [
"Mathematics",
"Engineering"
] | 583 | [
"Exact solutions in general relativity",
"Tensors",
"Mathematical objects",
"Equations",
"Metric tensors"
] |
8,082,019 | https://en.wikipedia.org/wiki/Galaxy%20effective%20radius | Galaxy effective radius or half-light radius (R_e) is the radius at which half of the total light of a galaxy is emitted. This assumes the galaxy has either intrinsic spherical symmetry or is at least circularly symmetric as viewed in the plane of the sky. Alternatively, a half-light contour, or isophote, may be used for spherically and circularly asymmetric objects.
R_e is an important length scale in de Vaucouleurs's law, which characterizes a specific rate at which surface brightness decreases as a function of radius:

I(R) = I_0 e^{−kR^{1/4}},

where I_0 is the surface brightness at R = 0. At R = R_e,

I(R_e) = I_0 e^{−7.67}.

Thus, the central surface brightness is approximately 2000·I(R_e).
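The defining property, that R_e encloses half of the total light, can be checked numerically by integrating the profile over circular annuli. In the sketch below, R is measured in units of R_e, I_0 is set to 1, and b ≈ 7.67 is the usual de Vaucouleurs constant.

```python
import numpy as np
from scipy.integrate import quad

b = 7.669                                   # constant chosen so R_e encloses half the light
profile = lambda R: np.exp(-b * R**0.25)    # I(R)/I_0 with R in units of R_e

total, _ = quad(lambda R: 2 * np.pi * R * profile(R), 0, np.inf)
inner, _ = quad(lambda R: 2 * np.pi * R * profile(R), 0, 1)

print(inner / total)    # ~0.5: half the light falls within R = R_e
print(np.exp(b))        # ~2140: the I_0 / I(R_e) ratio, i.e. the "~2000" factor
```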
See also
References
Physical quantities
Radii
Equations of astronomy | Galaxy effective radius | [
"Physics",
"Astronomy",
"Mathematics"
] | 145 | [
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Quantity",
"Astronomy stubs",
"Equations of astronomy",
"Physical properties"
] |
1,219,670 | https://en.wikipedia.org/wiki/Subcritical%20reactor | A subcritical reactor is a nuclear fission reactor concept that produces fission without achieving criticality. Instead of sustaining a chain reaction, a subcritical reactor uses additional neutrons from an outside source. There are two general classes of such devices. One uses neutrons provided by a nuclear fusion machine, a concept known as a fusion–fission hybrid. The other uses neutrons created through spallation of heavy nuclei by charged particles such as protons accelerated by a particle accelerator, a concept known as an accelerator-driven system (ADS) or accelerator-driven sub-critical reactor.
Motivation
A subcritical reactor can be used to destroy heavy isotopes contained in the used fuel from a conventional nuclear reactor, while at the same time producing electricity. The long-lived transuranic elements in nuclear waste can in principle be fissioned, releasing energy in the process and leaving behind the fission products which are shorter-lived. This would shorten considerably the time for disposal of radioactive waste. However, some isotopes have threshold fission cross sections and therefore require a fast reactor for being fissioned. While they can be transmuted into fissile material with thermal neutrons, some nuclides need as many as three successive neutron capture reactions to reach a fissile isotope and then yet another neutron to fission. Also, they release on average too few new neutrons per fission, so that with a fuel containing a high fraction of them, criticality cannot be reached. The accelerator-driven reactor is independent of this parameter and thus can utilize these nuclides. The three most important long-term radioactive isotopes that could advantageously be handled that way are neptunium-237, americium-241 and americium-243. The nuclear weapon material plutonium-239 is also suitable although it can be expended in a cheaper way as MOX fuel or inside existing fast reactors.
Besides nuclear waste incineration, there is interest in this type of reactor because it is perceived as inherently safe, unlike a conventional reactor. In most types of critical reactors, there exist circumstances in which the rate of fission can increase rapidly, damaging or destroying the reactor and allowing the escape of radioactive material (see SL-1 or Chernobyl disaster). With a subcritical reactor, the reaction will cease unless continually fed neutrons from an outside source. However, heat is generated even after the chain reaction ends, so continuous cooling of such a reactor for a considerable period after shut-down remains vital in order to avoid overheating. Even the issue of decay heat can be minimized, though, as a subcritical reactor need not assemble a critical mass of fissile material and can thus be built (nearly) arbitrarily small, reducing the required thermal mass of an emergency coolant system capable of absorbing all heat generated in the hours to days after a scram.
Delayed neutrons
Another issue in which a subcritical reactor differs from a "normal" nuclear reactor (no matter whether it operates with fast or thermal neutrons) is that all "normal" nuclear power plants rely on delayed neutrons to maintain safe operating conditions. Depending on the fissioning nuclide, a bit under 1% of neutrons are not released immediately upon fission (prompt neutrons) but rather with fractions of seconds to minutes of delay, by fission products that undergo beta decay followed by neutron emission. Those delayed neutrons are essential for reactor control, as the time between fission "generations" is on such a short order of magnitude that macroscopic physical processes or human intervention cannot keep a power excursion under control. However, when criticality is reached only with the contribution of the delayed neutrons, the reaction times become several orders of magnitude larger and reactor control becomes feasible. By contrast this means that too low a fraction of delayed neutrons makes an otherwise fissile material unsuitable for operating a "conventional" nuclear power plant. Conversely, a subcritical reactor actually has slightly improved properties with a fuel with a low delayed neutron fraction (see below). It just so happens that while the currently most used fissile material, uranium-235, has a relatively high delayed neutron fraction, plutonium-239 has a much lower one, which, in addition to other physical and chemical properties, limits the possible plutonium content in "normal" reactor fuel. For this reason spent MOX fuel, which still contains significant amounts of plutonium (including fissile plutonium-239 and, when "fresh", plutonium-241), is usually not reprocessed, due to the ingrowth of non-fissile plutonium isotopes, which would require a higher plutonium content in fuel manufactured from this plutonium to maintain criticality. The other main component of spent fuel, reprocessed uranium, is usually only recovered as a byproduct and fetches worse prices on the uranium market than natural uranium, due to the ingrowth of uranium-236 and other "undesirable" isotopes of uranium.
Principle
Most current ADS designs propose a high-intensity proton accelerator with an energy of about 1 GeV, directed towards a spallation target or spallation neutron source. The source, located in the heart of the reactor core, contains liquid metal which is impacted by the beam, thus releasing neutrons, and is cooled by circulating the liquid metal, such as lead-bismuth, towards a heat exchanger. The nuclear reactor core surrounding the spallation neutron source contains the fuel rods, the fuel being any fissile or fertile actinide mix, but preferably already with a certain amount of fissile material so as not to have to run at zero criticality during startup. For each proton intersecting the spallation target, an average of 20 neutrons is released, which fission the surrounding fissile part of the fuel and transmute atoms in the fertile part, "breeding" new fissile material. If the value of 20 neutrons per GeV expended is assumed, one neutron "costs" 50 MeV, while fission (which requires one neutron) releases on the order of 200 MeV per actinide atom that is split. Efficiency can be increased by reducing the energy needed to produce a neutron, by increasing the share of usable energy extracted from the fission (if a thermal process is used, Carnot efficiency dictates that higher temperatures are needed to increase efficiency) and finally by bringing criticality ever closer to 1 while still staying below it. An important factor in both efficiency and safety is how subcritical the reactor is. To simplify, the value of k(effective) that is used to give the criticality of a reactor (including delayed neutrons) can be interpreted as the number of neutrons produced in the next "generation" for every neutron in the current one. If k(effective) is 1, for every 1000 neutrons introduced, 1000 neutrons are produced that also fission further nuclei; the reaction rate would then steadily increase, due to more and more neutrons being delivered from the neutron source. If k(effective) is just below 1, few neutrons have to be delivered from outside the reactor to keep the reaction at a steady state, increasing efficiency. On the other hand, in the extreme case of "zero criticality", that is k(effective) = 0 (e.g. if the reactor is run for transmutation alone), all neutrons are "consumed" and none are produced inside the fuel. However, as neutronics can only ever be known to a certain degree of precision, the reactor must in practice allow a safety margin below criticality that depends on how well the neutronics are known and on the effect of the ingrowth of nuclides that decay via spontaneous fission with neutron emission, such as californium-252, or of nuclides that decay via neutron emission.
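A back-of-envelope version of this neutron and energy bookkeeping is sketched below: each source neutron is multiplied into roughly 1/(1 − k_eff) neutrons in total, of which roughly a fraction k_eff/ν causes fission. All parameter values are illustrative assumptions, and a real design would rely on neutron-transport calculations rather than this toy formula.

```python
def ads_energy_gain(k_eff, nu=2.5, e_fission_mev=200.0,
                    e_beam_mev=1000.0, neutrons_per_proton=30.0):
    """Rough thermal-energy gain of an accelerator-driven system.

    k_eff: neutron multiplication factor (must stay below 1)
    nu: average neutrons released per fission (assumed value)
    Returns fission MeV released per MeV of proton-beam energy.
    """
    e_per_source_neutron = e_beam_mev / neutrons_per_proton      # ~33 MeV per neutron
    fissions_per_source_neutron = k_eff / (nu * (1.0 - k_eff))   # geometric-series estimate
    return fissions_per_source_neutron * e_fission_mev / e_per_source_neutron

for k in (0.90, 0.95, 0.98):
    print(k, round(ads_energy_gain(k), 1))
```

The steep growth of the gain as k_eff approaches 1 is exactly the trade-off described above: efficiency pushes the design toward criticality, while the safety margin pushes it away.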
The neutron balance can be regulated or indeed shut off by adjusting the accelerator power so that the reactor would be below criticality. The additional neutrons provided by the spallation neutron source provide the degree of control as do the delayed neutrons in a conventional nuclear reactor, the difference being that spallation neutron source-driven neutrons are easily controlled by the accelerator. The main advantage is inherent safety. A conventional nuclear reactor's nuclear fuel possesses self-regulating properties such as the Doppler effect or void effect, which make these nuclear reactors safe. In addition to these physical properties of conventional reactors, in the subcritical reactor, whenever the neutron source is turned off, the fission reaction ceases and only the decay heat remains.
Technical challenges
There are technical difficulties to overcome before ADS can become economical and eventually be integrated into future nuclear waste management. The accelerator must provide a high intensity and also be highly reliable; each outage of the accelerator, in addition to causing a scram, puts the system under immense thermal stress. There are concerns about the window separating the protons from the spallation target, which is expected to be exposed to stress under extreme conditions. However, recent experience with the MEGAPIE liquid metal neutron spallation source tested at the Paul Scherrer Institute has demonstrated a working beam window under a 0.78 MW intense proton beam. The chemical separation of the transuranic elements and the fuel manufacturing, as well as the structure materials, are important issues. Finally, the lack of nuclear data at high neutron energies limits the efficiency of the design. This latter issue can be overcome by introducing a neutron moderator between the neutron source and the fuel, but this can lead to increased leakage as the moderator will also scatter neutrons away from the fuel. Changing the geometry of the reactor can reduce but never eliminate leakage. Leaking neutrons are also of concern due to the activation products they produce and due to the physical damage to materials that neutron irradiation can cause. Furthermore, there are certain advantages to the fast neutron spectrum that are lost when a moderator slows the neutrons to thermal energies. On the other hand, thermal neutron reactors are the most common and best understood type of nuclear reactor, and thermal neutrons also have advantages over fast neutrons.
Some laboratory experiments and many theoretical studies have demonstrated the theoretical possibility of such a plant. Carlo Rubbia, a nuclear physicist, Nobel laureate, and former director of CERN, was one of the first to conceive a design of a subcritical reactor, the so-called "energy amplifier". As of 2005, several large-scale projects were under way in Europe and Japan to further develop subcritical reactor technology. In 2012, CERN scientists and engineers launched the International Thorium Energy Committee (iThEC), an organization dedicated to pursuing this goal, which organized the ThEC13 conference on the subject.
Economics and public acceptance
Subcritical reactors have been proposed both as a means of generating electric power and as a means of transmutation of nuclear waste, so the gain is twofold. However, the costs for construction, safety and maintenance of such complex installations are expected to be very high, not to mention the amount of research needed to develop a practical design (see above). There exist cheaper and reasonably safe waste management concepts, such as transmutation in fast-neutron reactors. However, the subcritical reactor might be favoured for its better public acceptance: it is considered more acceptable to burn the waste than to bury it for hundreds of thousands of years. For future waste management, a few transmutation devices could be integrated into a large-scale nuclear program, hopefully increasing the overall costs only slightly.
The main challenge facing partitioning and transmutation operations is the need to enter nuclear cycles of extremely long duration: about 200 years. Another disadvantage is the generation of large quantities of intermediate-level long-lived radioactive waste (ILW), which will also require deep geological disposal to be safely managed. A more positive aspect is the expected reduction in the size of the repository, estimated at a factor of 4 to 6. Both positive and negative aspects were examined in an international benchmark study coordinated by Forschungszentrum Jülich and financed by the European Union.
Subcritical hybrid systems
While ADS was originally conceptualized as a part of a light water reactor design, other proposals have been made that incorporate an ADS into other generation IV reactor concepts.
One such proposal calls for a gas-cooled fast reactor that is fueled primarily by plutonium and americium. The neutronic properties of americium make it difficult to use in any critical reactor, because it tends to make the moderator temperature coefficient more positive, decreasing stability. The inherent safety of an ADS, however, would allow americium to be safely burned. These materials also have good neutron economy, permitting a large pitch-to-diameter ratio, which allows for improved natural circulation and economics.
Muon-driven systems for nuclear waste disposal
Subcritical methods for use in nuclear waste disposal that do not rely on neutron sources are also being developed. These include systems that rely on the mechanism of muon capture, in which muons (μ−) produced by a compact accelerator-driven source transmute long-lived radioactive isotopes to stable isotopes.
Natural
Generally the term "subcritical reactor" is reserved for artificial systems, but natural systems do exist: any natural source of fissile material exposed to cosmic and gamma rays (even from the Sun) could be considered a subcritical reactor. This includes space-launched satellites with radioisotope thermoelectric generators, as well as any such exposed reservoirs.
See also
Alternative energy
Cosmic ray spallation
Spallation Neutron Source
ISIS neutron source
Hybrid nuclear fusion
References
Notes
Sources
World Nuclear Association Fact Sheet
MYRRHA (Belgium)
GEM STAR Reactor, ADNA Corporation
Multiple authors. "A Subcritical, Gas-Cooled Fast Transmutation Reactor with a Fusion Neutron Source", Nuclear Technology, Vol. 150, No. 2, May 2005, pages 162–188. URL: http://www.ans.org/pubs/journals/nt/va-150-2-162-188
Aker Solutions Accelerator Driven Thorium Reactor power station
Future nuclear energy systems: Generating electricity, burning wastes(IAEA)
Nuclear reactors
Particle physics | Subcritical reactor | [
"Physics"
] | 2,868 | [
"Particle physics"
] |
1,220,790 | https://en.wikipedia.org/wiki/Spallation | Spallation is a process in which fragments of material (spall) are ejected from a body due to impact or stress. In the context of impact mechanics it describes ejection of material from a target during impact by a projectile. In planetary physics, spallation describes meteoritic impacts on a planetary surface and the effects of stellar winds and cosmic rays on planetary atmospheres and surfaces. In the context of mining or geology, spallation can refer to pieces of rock breaking off a rock face due to the internal stresses in the rock; it commonly occurs on mine shaft walls. In the context of metal oxidation, spallation refers to the breaking off of the oxide layer from a metal. For example, the flaking off of rust from iron. In the context of anthropology, spallation is a process used to make stone tools such as arrowheads by knapping. In nuclear physics, spallation is the process in which a heavy nucleus emits numerous nucleons as a result of being hit by a high-energy particle, thus greatly reducing its atomic weight. In industrial processes and bioprocessing the loss of tubing material due to the repeated flexing of the tubing within a peristaltic pump is termed spallation.
In solid mechanics
Spallation can occur when a tensile stress wave propagates through a material and can be observed in flat plate impact tests. It is caused by internal cavitation due to stresses, generated by the interaction of stress waves, exceeding the local tensile strength of the material. A fragment or multiple fragments will be created on the free end of the plate. This fragment, known as "spall", acts as a secondary projectile with velocities that can be as high as one third of the stress-wave speed in the material. This type of failure is typically an effect of high explosive squash head (HESH) charges.
Laser spallation
Laser induced spallation is a recent experimental technique developed to understand the adhesion of thin films with substrates. A high energy pulsed laser (typically Nd:YAG) is used to create a compressive stress pulse in the substrate wherein it propagates and reflects as a tensile wave at the free boundary. This tensile pulse spalls/peels the thin film while propagating towards the substrate. Using theory of wave propagation in solids it is possible to extract the interface strength. The stress pulse created in this example is usually around 3 to 8 nanoseconds in duration while its magnitude varies as a function of laser fluence. Due to the non-contact application of load, this technique is very well suited to spall ultra-thin films (1 micrometre in thickness or less). It is also possible to mode convert a longitudinal stress wave into a shear stress using a pulse shaping prism and achieve shear spallation.
Nuclear spallation
Nuclear spallation from the impact of cosmic rays occurs naturally in Earth's atmosphere and on the surfaces of bodies in space such as meteorites and the Moon. Evidence of cosmic ray spallation is seen on outer surfaces of bodies and gives a means of measuring the length of time of exposure. The composition of cosmic rays themselves may also indicate that they have suffered spallation before reaching Earth, because the proportion of light elements such as lithium, boron, and beryllium in them exceeds average cosmic abundances; these elements in the cosmic rays were evidently formed from spallation of oxygen, nitrogen, carbon and perhaps silicon in the cosmic ray sources or during their lengthy travel here. Cosmogenic isotopes of aluminium, beryllium, chlorine, iodine and neon, formed by spallation of terrestrial elements under cosmic ray bombardment, have been detected on Earth.
Nuclear spallation is one of the processes by which a particle accelerator may be used to produce a beam of neutrons. A particle beam consisting of protons at around 1 GeV is shot into a target consisting of mercury, tantalum, lead or another heavy metal. The target nuclei are excited and upon deexcitation, 20 to 30 neutrons are expelled per nucleus. Although this is a far more expensive way of producing neutron beams than by a chain reaction of nuclear fission in a nuclear reactor, it has the advantage that the beam can be pulsed with relative ease. Furthermore, the energetic cost of one spallation neutron is six times lower than that of a neutron gained via nuclear fission. In contrast to nuclear fission, the spallation neutrons cannot trigger further spallation or fission processes to produce further neutrons. Therefore, there is no chain reaction, which makes the process non-critical. Observations of cosmic ray spallation had already been made in the 1930s, but the first observations from a particle accelerator occurred in 1947, and the term "spallation" was coined by Nobelist Glenn T. Seaborg that same year. Spallation is a proposed neutron source in subcritical nuclear reactors like the upcoming research reactor MYRRHA, which is planned to investigate the feasibility of nuclear transmutation of high level waste into less harmful substances. Besides having a neutron multiplication factor just below criticality, subcritical reactors can also produce net usable energy as the average energy expenditure per neutron produced ranges around 30 MeV (a 1 GeV beam producing a bit over 30 neutrons in the most productive targets) while fission produces on the order of 200 MeV per actinide atom that is split. Even at relatively low energy efficiency of the processes involved, net usable energy could be generated while being able to use actinides unsuitable for use in conventional reactors as "fuel".
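To make the energy argument concrete, here is a minimal back-of-the-envelope sketch in Python that uses only the round figures quoted above (1 GeV protons, a bit over 30 neutrons per proton, ~200 MeV per fission) and assumes each source neutron ultimately causes one fission; it is an illustration, not a design calculation.

```python
# Rough energy bookkeeping for an accelerator-driven spallation source,
# using the round numbers quoted in the text above.
PROTON_ENERGY_MEV = 1000.0   # 1 GeV proton beam
NEUTRONS_PER_PROTON = 30.0   # "a bit over 30" in the most productive targets
FISSION_ENERGY_MEV = 200.0   # order-of-magnitude energy per actinide fission

cost_per_neutron = PROTON_ENERGY_MEV / NEUTRONS_PER_PROTON  # ~33 MeV/neutron
# If each source neutron ultimately causes one fission, the gross gain is:
gross_gain = FISSION_ENERGY_MEV / cost_per_neutron          # ~6x
print(f"~{cost_per_neutron:.0f} MeV per source neutron, "
      f"gross gain ~{gross_gain:.1f}x")
```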
Production of neutrons at a spallation neutron source
Generally the production of neutrons at a spallation source begins with a high-powered proton accelerator. The accelerator may consist of a linac only (as in the European Spallation Source) or a combination of linac and synchrotron (e.g. ISIS neutron source) or a cyclotron (e.g. SINQ (PSI)). As an example, the ISIS neutron source is based on some components of the former Nimrod synchrotron. Nimrod was uncompetitive for particle physics so it was replaced with a new synchrotron, initially using the original injectors, but which produces a highly intense pulsed beam of protons. Whereas Nimrod would produce around 2 μA at 7 GeV, ISIS produces 200 μA at 0.8 GeV. This is pulsed at the rate of 50 Hz, and this intense beam of protons is focused onto a target. Experiments have been done with depleted uranium targets but although these produce the most intense neutron beams, they also have the shortest lives. Generally, therefore, tantalum or tungsten targets have been used. Spallation processes in the target produce the neutrons, initially at very high energies—a good fraction of the proton energy. These neutrons are then slowed in moderators filled with liquid hydrogen or liquid methane to the energies that are needed for the scattering instruments. Whilst protons can be focused since they have charge, chargeless neutrons cannot be, so in this arrangement the instruments are arranged around the moderators.
Inertial confinement fusion has the potential to produce orders of magnitude more neutrons than spallation. This could be useful for neutron radiography, which can be used to locate hydrogen atoms in structures, resolve atomic thermal motion, and study collective excitations of phonons more effectively than X-rays.
See also
Energy amplifier
Subcritical reactor (accelerator-driven system)
Sputtering (phenomenon in which microscopic particles of a solid material are ejected from its surface)
Spallation facilities
Institut-Laue-Langevin Grenoble, France
European Spallation Source, under construction, Sweden
ISIS neutron source, Harwell, UK
J-PARC
LANSCE Los Alamos
PSI Spallation Neutron Source (SINQ), Switzerland
Spallation Neutron Source Oak Ridge, USA
China Spallation Neutron Source
References
External links
IAEA database of spallation neutron sources in the Accelerator Knowledge Portal
Description of ISIS accelerator etc.
Spallation Neutron Source technical background.
How spallation works at the ISIS neutron and muon source
Nuclear technology
Neutron sources | Spallation | [
"Physics"
] | 1,663 | [
"Nuclear technology",
"Nuclear physics"
] |
1,221,168 | https://en.wikipedia.org/wiki/Hydroformylation | In organic chemistry, hydroformylation, also known as oxo synthesis or oxo process, is an industrial process for the production of aldehydes (R−CHO) from alkenes (R2C=CH2). This chemical reaction entails the net addition of a formyl group (CHO) and a hydrogen atom to a carbon-carbon double bond. This process has undergone continuous growth since its invention: production capacity reached 6.6 million tonnes in 1995. It is important because aldehydes are easily converted into many secondary products. For example, the resultant aldehydes are hydrogenated to alcohols that are converted to detergents. Hydroformylation is also used in speciality chemicals, relevant to the organic synthesis of fragrances and pharmaceuticals. The development of hydroformylation is one of the premier achievements of 20th-century industrial chemistry.
The process entails treatment of an alkene typically with high pressures (between 10 and 100 atmospheres) of carbon monoxide and hydrogen at temperatures between 40 and 200 °C. In one variation, formaldehyde is used in place of synthesis gas. Transition metal catalysts are required. Invariably, the catalyst dissolves in the reaction medium, i.e. hydroformylation is an example of homogeneous catalysis.
History
The process was discovered by the German chemist Otto Roelen in 1938 in the course of investigations of the Fischer–Tropsch process. Aldehydes and diethylketone were obtained when ethylene was added to an F-T reactor. Through these studies, Roelen discovered the utility of cobalt catalysts. HCo(CO)4, which had been isolated only a few years prior to Roelen's work, was shown to be an excellent catalyst. The term oxo synthesis was coined by the Ruhrchemie patent department, who expected the process to be applicable to the preparation of both aldehydes and ketones. Subsequent work demonstrated that the ligand tributylphosphine (PBu3) improved the selectivity of the cobalt-catalysed process. The mechanism of Co-catalyzed hydroformylation was elucidated by Richard F. Heck and David Breslow in the 1960s.
In 1968, highly active rhodium-based catalysts were reported. Since the 1970s, most hydroformylation relies on catalysts based on rhodium. Water-soluble catalysts have been developed. They facilitate the separation of the products from the catalyst.
Mechanism
Selectivity
A key consideration of hydroformylation is the "normal" vs. "iso" selectivity. For example, the hydroformylation of propylene can afford two isomeric products, butyraldehyde or isobutyraldehyde:
These isomers reflect the regiochemistry of the insertion of the alkene into the M–H bond. Since the two products are not equally desirable (normal is more stable than iso), much research was dedicated to the quest for catalysts that favor the normal isomer.
Steric effects
Markovnikov addition of the cobalt hydride to primary alkenes is disfavored by steric hindrance between the cobalt centre and the secondary alkyl ligand. Bulky ligands exacerbate this steric hindrance. Hence, the mixed carbonyl/phosphine complexes offer a greater selectivity for anti-Markovnikov addition, thus favoring straight chain (n-) aldehydes. Modern catalysts rely increasingly on chelating ligands, especially diphosphites.
Electronic effects
Additionally, electron-rich hydride complexes are less proton-like. As a result, the electronic effects that normally favour Markovnikov addition to an alkene are less applicable, so electron-rich hydrides are more selective.
Acyl formation
To suppress competing isomerization of the alkene, the rate of migratory insertion of the carbonyl into the carbon-metal bond of the alkyl must be relatively fast. The rate of insertion of the carbonyl carbon into the C-M bond is likely to be greater than the rate of beta-hydride elimination.
Asymmetric hydroformylation
Hydroformylation of prochiral alkenes creates new stereocenters. Using chiral phosphine ligands, the hydroformylation can be tailored to favor one enantiomer. Thus, for example, dexibuprofen, the (+)−(S)-enantiomer of ibuprofen, can be produced by enantioselective hydroformylation followed by oxidation.
Processes
The industrial processes vary depending on the chain length of the olefin to be hydroformylated, the catalyst metal and ligands, and the recovery of the catalyst. The original Ruhrchemie process produced propanal from ethene and syngas using cobalt tetracarbonyl hydride. Today, industrial processes based on cobalt catalysts are mainly used for the production of medium- to long-chain olefins, whereas the rhodium-based catalysts are usually used for the hydroformylation of propene. The rhodium catalysts are significantly more expensive than cobalt catalysts. In the hydroformylation of higher molecular weight olefins the separation of the catalyst from the produced aldehydes is difficult.
BASF-oxo process
The BASF-oxo process starts mostly with higher olefins and relies on cobalt carbonyl-based catalyst. By conducting the reaction at low temperatures, one observes increased selectivity favoring the linear product. The process is carried out at a pressure of about 30 MPa and in a temperature range of 150 to 170 °C. The cobalt is recovered from the liquid product by oxidation to water-soluble Co2+, followed by the addition of aqueous formic or acetic acids. This process gives an aqueous phase of cobalt, which can then be recycled. Losses are compensated by the addition of cobalt salts.
Exxon process
The Exxon process, also Kuhlmann- or PCUK – oxo process, is used for the hydroformylation of C6–C12 olefins. The process relies on cobalt catalysts. In order to recover the catalyst, an aqueous sodium hydroxide solution or sodium carbonate is added to the organic phase. By extraction with olefin and neutralization by addition of sulfuric acid solution under carbon monoxide pressure, the metal carbonyl hydride can be recovered. This is stripped out with syngas, absorbed by the olefin, and returned to the reactor. Similar to the BASF process, the Exxon process is carried out at a pressure of about 30 MPa and at a temperature of about 160 to 180 °C.
Shell process
The Shell process uses cobalt complexes modified with phosphine ligands for the hydroformylation of C7–C14 olefins. The resulting aldehydes are directly hydrogenated to the fatty alcohols, which are separated by distillation, which allows the catalyst to be recycled. The process has good selectivity to linear products, which find use as feedstock for detergents. The process is carried out at a pressure of about 4 to 8 MPa and at a temperature range of about 150–190 °C.
Union Carbide process
The Union Carbide (UCC) process, also known as the low-pressure oxo process (LPO), relies on a rhodium catalyst dissolved in a high-boiling thick oil, a higher molecular weight condensation product of the primary aldehydes, for the hydroformylation of propene. The reaction mixture is separated from volatile components in a falling film evaporator. The liquid phase is distilled and butyraldehyde is removed as the overhead product, while the catalyst-containing bottom product is recycled to the process. The process is carried out at about 1.8 MPa and 95–100 °C.
Ruhrchemie/Rhone–Poulenc process
The Ruhrchemie/Rhone–Poulenc process (RCRPP) relies on a rhodium catalyst with water-soluble TPPTS as ligand (Kuntz Cornils catalyst) for the hydroformylation of propene. The tri-sulfonation of triphenylphosphane ligand provides hydrophilic properties to the organometallic complex. The catalyst complex carries nine sulfonate-groups and is highly soluble in water (about 1 kg L−1), but not in the emerging product phase. The water-soluble TPPTS is used in about 50-fold excess, whereby the leaching of the catalyst is effectively suppressed. Reactants are propene and syngas consisting of hydrogen and carbon monoxide in a ratio of 1.1:1. A mixture of butyraldehyde and isobutyraldehyde in the ratio 96:4 is generated with few by-products such as alcohols, esters and higher boiling fractions. The Ruhrchemie/Rhone-Poulenc-process is the first commercially available two-phase system in which the catalyst is present in the aqueous phase.
In the course of the reaction, an organic product phase forms and is separated continuously by phase separation; the aqueous catalyst phase remains in the reactor.
The process is carried out in a stirred tank reactor where the olefin and the syngas are entrained from the bottom of the reactor through the catalyst phase under intensive stirring. The resulting crude aldehyde phase is separated at the top from the aqueous phase. The aqueous catalyst-containing solution is re-heated via a heat exchanger and pumped back into the reactor. The excess olefin and syngas are separated from the aldehyde phase in a stripper and fed back to the reactor. The generated heat is used for the generation of process steam, which is used for subsequent distillation of the organic phase to separate it into butyraldehyde and isobutyraldehyde. Potential catalyst poisons coming from the synthesis gas migrate into the organic phase and are removed from the reaction with the aldehyde. Thus, poisons do not accumulate, and the elaborate fine purification of the syngas can be omitted.
A plant was built in Oberhausen in 1984, which was debottlenecked in 1988 and again in 1998 up to a production capacity of 500,000 t/a butanal. The conversion rate of propene is 98% and the selectivity to n-butanal is high. During the life time of a catalyst batch in the process less than 1 ppb rhodium is lost.
Laboratory process
Recipes have been developed for the hydroformylation on a laboratory scale, e.g. of cyclohexene.
Substrates other than alkenes
Cobalt carbonyl and rhodium complexes catalyse the hydroformylation of formaldehyde and ethylene oxide to give hydroxyacetaldehyde and 3-hydroxypropanal, which can then be hydrogenated to ethylene glycol and propane-1,3-diol, respectively. The reactions work best when the solvent is basic (such as pyridine).
In the case of dicobalt octacarbonyl or Co2(CO)8 as a catalyst, pentan-3-one can arise from ethene and CO, in the absence of hydrogen. A proposed intermediate is the ethylene-propionyl species [CH3C(O)Co(CO)3(ethene)] which undergoes a migratory insertion to form [CH3COCH2CH2Co(CO)3]. The required hydrogen arises from the water–gas shift reaction.
If the water–gas shift reaction is not operative, the reaction affords a polymer containing alternating carbon monoxide and ethylene units. Such aliphatic polyketones are more conventionally prepared using palladium catalysts.
Functionalized olefins such as allyl alcohol can be hydroformylated. The target product 1,4-butanediol and its isomer are obtained with isomerization-free catalysts such as rhodium–triphenylphosphine complexes. The use of the cobalt complex leads, by isomerization of the double bond, to n-propanal. The hydroformylation of alkenyl ethers and alkenyl esters occurs usually in the α-position to the ether or ester function.
The hydroformylation of acrylic acid and methacrylic acid in the rhodium-catalyzed process leads to the Markovnikov product in the first step. By variation of the reaction conditions the reaction can be directed to different products. A high reaction temperature and low carbon monoxide pressure favors the isomerization of the Markovnikov product to the thermodynamically more stable β-isomer, which leads to the n-aldehyde. Low temperatures and high carbon monoxide pressure and an excess of phosphine, which blocks free coordination sites, can lead to faster hydroformylation in the α-position to the ester group and suppress the isomerization.
Side- and consecutive reactions
Tandem carbonylation-water gas shift reactions
Side reactions of the alkenes are the isomerization and hydrogenation of the double bond. While the alkanes resulting from hydrogenation of the double bond do not participate further in the reaction, the isomerization of the double bond with subsequent formation of the n-alkyl complexes is a desired reaction. The hydrogenation is usually of minor importance; however, cobalt–phosphine-modified catalysts can have an increased hydrogenation activity, where up to 15% of the alkene is hydrogenated.
Tandem hydroformylation-hydrogenation
Using tandem catalysis, systems have been developed for the one-pot conversion of alkenes to alcohols. The first step is hydroformylation.
Ligand degradation
Conditions for hydroformylation catalysis can induce degradation of supporting organophosphorus ligands. Triphenylphosphine is subject to hydrogenolysis, releasing benzene and diphenylphosphine. The insertion of carbon monoxide into an intermediate metal–phenyl bond can lead to the formation of benzaldehyde or, by subsequent hydrogenation, benzyl alcohol. One of the ligand's phenyl groups can be replaced by propene, and the resulting diphenylpropylphosphine ligand can inhibit the hydroformylation reaction due to its increased basicity.
Metals
Although the original hydroformylation catalysts were based on cobalt, most modern processes rely on rhodium, which is expensive. There has therefore been interest in finding alternative metal catalysts. Examples of alternative metals include iron and ruthenium.
See also
Koch reaction - related reaction of alkenes and CO to form carboxylic acids
References
Further reading
"Applied Homogeneous Catalysis with Organometallic Compounds: A Comprehensive Handbook in Two Volumes (Paperback) by Boy Cornils (Editor), W. A. Herrmann (Editor).
"Rhodium Catalyzed Hydroformylation" P. W. N. M. van Leeuwen, C. Claver Eds.; Springer; (2002).
"Homogeneous Catalysis: Understanding the Art" by Piet W. N. M. van Leeuwen Springer; 2005.
Imyanitov N.S./ Hydroformylation of Olefins with Rhodium Complexes // Rhodium Express. 1995. No 10–11 (May). pp. 3–62 (Eng)
Organometallic chemistry
Homogeneous catalysis
Formylation reactions
Carbon monoxide | Hydroformylation | [
"Chemistry"
] | 3,264 | [
"Catalysis",
"Homogeneous catalysis",
"Organometallic chemistry"
] |
1,221,403 | https://en.wikipedia.org/wiki/Arming%20yeast | Arming yeast is a tool in biotechnology and biological research where a protein of interest is expressed on the surface of yeast cells. This is used in industrial settings for expression of enzymes to serve as catalysts in reactions, as well as in pharmaceutical settings for screening drug candidates. Saccharomyces cerevisiae is most commonly used as arming yeast because it is easy to grow, can be genetically manipulated, and is generally recognized as safe by the U.S. Food and Drug Administration.
Mechanisms
The most common mechanism of arming yeast is to fuse a protein of interest to the extracellular domain of the yeast mating protein α-agglutinin.
Uses
Arming yeast have been used for a variety of industrial and research processes. S. cerevisiae armed with a glucoamylase from Rhizopus oryzae have been used to break down starches in the production of ethanol. Similarly, yeast expressing endoglucanase from Trichoderma reesei as well as β-glucosidase from Aspergillus aculeatus were used to break down agricultural waste into material which can be turned into ethanol.
See also
Autodisplay
References
Genetically modified organisms
Yeasts | Arming yeast | [
"Engineering",
"Biology"
] | 249 | [
"Yeasts",
"Fungi",
"Genetic engineering",
"Genetically modified organisms"
] |
1,221,448 | https://en.wikipedia.org/wiki/Prompt%20neutron | In nuclear engineering, a prompt neutron is a neutron immediately emitted (neutron emission) by a nuclear fission event, as opposed to a delayed neutron, which is emitted after the beta decay of one of the fission products anytime from a few milliseconds to a few minutes after the fission event.
Prompt neutrons emerge from the fission of an unstable fissionable or fissile heavy nucleus almost instantaneously. There are different definitions for how long it takes for a prompt neutron to emerge. For example, the United States Department of Energy defines a prompt neutron as a neutron born from fission within 10−13 seconds after the fission event. The U.S. Nuclear Regulatory Commission defines a prompt neutron as a neutron emerging from fission within 10−14 seconds.
This emission is controlled by the nuclear force and is extremely fast. By contrast, so-called delayed neutrons are delayed by the time delay associated with beta decay (mediated by the weak force) to the precursor excited nuclide, after which neutron emission happens on a prompt time scale (i.e., almost immediately).
Principle
Using uranium-235 as an example, this nucleus absorbs a thermal neutron, and the immediate mass products of a fission event are two large fission fragments, which are remnants of the formed uranium-236 nucleus. These fragments emit two or three free neutrons (2.5 on average), called prompt neutrons. A subsequent fission fragment occasionally undergoes a stage of radioactive decay that yields an additional neutron, called a delayed neutron. These neutron-emitting fission fragments are called delayed neutron precursor atoms.
Delayed neutrons are associated with the beta decay of the fission products. After prompt fission neutron emission the residual fragments are still neutron rich and undergo a beta decay chain. The more neutron rich the fragment, the more energetic and faster the beta decay. In some cases the available energy in the beta decay is high enough to leave the residual nucleus in such a highly excited state that neutron emission instead of gamma emission occurs.
Importance in nuclear fission basic research
The standard deviation of the final kinetic energy distribution, as a function of the mass of the final fragments from low-energy fission of uranium-234 and uranium-236, presents a peak in the light-fragment mass region and another in the heavy-fragment mass region. Simulation of these experiments by the Monte Carlo method suggests that those peaks are produced by prompt neutron emission. This effect of prompt neutron emission means the measured distributions are not the primary mass and kinetic energy distributions, which are important for studying fission dynamics from the saddle point to the scission point.
Importance in nuclear reactors
If a nuclear reactor happened to be prompt critical – even very slightly – the number of neutrons and power output would increase exponentially at a high rate. The response time of mechanical systems like control rods is far too slow to moderate this kind of power surge. The control of the power rise would then be left to its intrinsic physical stability factors, like the thermal dilatation of the core, or the increased resonance absorptions of neutrons, that usually tend to decrease the reactor's reactivity when temperature rises; but the reactor would run the risk of being damaged or destroyed by heat.
However, thanks to the delayed neutrons, it is possible to leave the reactor in a subcritical state as far as only prompt neutrons are concerned: the delayed neutrons come a moment later, just in time to sustain the chain reaction when it is going to die out. In that regime, neutron production overall still grows exponentially, but on a time scale that is governed by the delayed neutron production, which is slow enough to be controlled (just as an otherwise unstable bicycle can be balanced because human reflexes are quick enough on the time scale of its instability). Thus, by widening the margins of non-operation and supercriticality and allowing more time to regulate the reactor, the delayed neutrons are essential to inherent reactor safety and even in reactors requiring active control.
Fraction definitions
The factor β is defined as:
β = (number of delayed neutrons) / (total number of fission neutrons, prompt plus delayed),
and it is equal to 0.0064 for U-235.
The delayed neutron fraction (DNF) is defined as:
DNF = (number of delayed neutrons) / (total number of neutrons present in the reactor at a given moment).
These two factors, β and DNF, are not the same thing in the case of a rapid change in the number of neutrons in the reactor.
Another concept is the effective fraction of delayed neutrons, which is the fraction of delayed neutrons weighted (over space, energy, and angle) on the adjoint neutron flux. This concept arises because delayed neutrons are emitted with an energy spectrum more thermalized relative to prompt neutrons. For low-enriched uranium fuel working on a thermal neutron spectrum, the difference between the average and effective delayed neutron fractions can reach 50 pcm (1 pcm = 10−5).
See also
Prompt criticality
Critical mass
Nuclear chain reaction
References
External links
Hybrid nuclear reactors:delayed neutrons
Beta is not the delayed neutron (population) fraction
Nuclear technology
Prompt | Prompt neutron | [
"Physics"
] | 984 | [
"Nuclear technology",
"Nuclear physics"
] |
1,221,457 | https://en.wikipedia.org/wiki/Delayed%20neutron | In nuclear engineering, a delayed neutron is a neutron emitted after a nuclear fission event, by one of the fission products (or actually, a fission product daughter after beta decay), any time from a few milliseconds to a few minutes after the fission event. Neutrons born within 10−14 seconds of the fission are termed "prompt neutrons".
In a nuclear reactor large nuclides fission into two neutron-rich fission products (i.e. unstable nuclides) and free neutrons (prompt neutrons). Many of these fission products then undergo radioactive decay (usually beta decay) and the resulting nuclides are unstable with respect to beta decay. A small fraction of them are excited enough to be able to beta-decay by emitting a delayed neutron in addition to the beta. The moment of beta decay of the precursor nuclides – which are the precursors of the delayed neutrons – happens orders of magnitude later compared to the emission of the prompt neutrons. Hence the neutron that originates from the precursor's decay is termed a delayed neutron. The "delay" in the neutron emission is due to the delay in beta decay (which is slower since controlled by the weak force), since neutron emission, like gamma emission, is controlled by the strong nuclear force and thus either happens at fission, or nearly simultaneously with the beta decay, immediately after it. The various half lives of these decays that finally result in neutron emission, are thus the beta decay half lives of the precursor radionuclides.
Delayed neutrons play an important role in nuclear reactor control and safety analysis.
Principle
Delayed neutrons are associated with the beta decay of the fission products. After prompt fission neutron emission the residual fragments are still neutron rich and undergo a beta decay chain. The more neutron rich the fragment, the more energetic and faster the beta decay. In some cases the available energy in the beta decay is high enough to leave the residual nucleus in such a highly excited state that neutron emission instead of gamma emission occurs.
Using U-235 as an example, this nucleus absorbs thermal neutrons, and the immediate mass products of a fission event are two large fission fragments, which are remnants of the formed U-236 nucleus. These fragments emit, on average, two or three free neutrons (on average 2.47), called "prompt" neutrons. A subsequent fission fragment occasionally undergoes a stage of radioactive decay (which is a beta minus decay) that yields a new nucleus (the emitter nucleus) in an excited state that emits an additional neutron, called a "delayed" neutron, to get to ground state. These neutron-emitting fission fragments are called delayed neutron precursor atoms.
Delayed Neutron Data for Thermal Fission in U-235
Importance in nuclear reactors
If a nuclear reactor happened to be prompt critical – even very slightly – the number of neutrons would increase exponentially at a high rate, and very quickly the reactor would become uncontrollable by means of external mechanisms. The control of the power rise would then be left to its intrinsic physical stability factors, like the thermal dilatation of the core, or the increased resonance absorptions of neutrons, that usually tend to decrease the reactor's reactivity when temperature rises; but the reactor would run the risk of being damaged or destroyed by heat.
However, thanks to the delayed neutrons, it is possible to leave the reactor in a subcritical state as far as only prompt neutrons are concerned: the delayed neutrons come a moment later, just in time to sustain the chain reaction when it is going to die out. In that regime, neutron production overall still grows exponentially, but on a time scale that is governed by the delayed neutron production, which is slow enough to be controlled (just as an otherwise unstable bicycle can be balanced because human reflexes are quick enough on the time scale of its instability). Thus, by widening the margins of non-operation and supercriticality and allowing more time to regulate the reactor, the delayed neutrons are essential to inherent reactor safety, even in reactors requiring active control.
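As a rough numerical illustration of the time scales involved, the following Python sketch uses the one-group point-kinetics approximation; the prompt-neutron generation time and the inserted reactivity are assumed illustrative values, and only β = 0.0064 comes from the text.

```python
# Illustrative point-kinetics time scales (toy numbers, not a reactor model).
beta = 0.0064      # delayed-neutron fraction for U-235 (from the text)
Lambda = 1.0e-4    # prompt-neutron generation time, s (assumed)
rho = 0.001        # inserted reactivity, dimensionless (assumed, rho < beta)

# Ignoring delayed neutrons, power would e-fold every Lambda/rho seconds:
print(f"prompt-only period: {Lambda / rho:.2f} s")  # 0.10 s -- uncontrollable
# Expressed in dollars (reactivity relative to beta), the margin to prompt
# criticality is clear:
print(f"reactivity: {rho / beta:.2f} $")            # ~0.16 $, well below 1 $
```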
The lower percentage of delayed neutrons makes the use of large percentages of plutonium in nuclear reactors more challenging.
Fraction definitions
The precursor yield fraction β is defined as:
β = (number of delayed neutrons) / (total number of fission neutrons, prompt plus delayed),
and it is equal to 0.0064 for U-235.
The delayed neutron fraction (DNF) is defined as:
DNF = (number of delayed neutrons) / (total number of neutrons present in the reactor at a given moment).
These two factors, β and DNF, are almost the same thing, but not quite; they differ in the case of a rapid (faster than the decay time of the precursor atoms) change in the number of neutrons in the reactor.
Another concept is the effective fraction of delayed neutrons βeff, which is the fraction of delayed neutrons weighted (over space, energy, and angle) on the adjoint neutron flux. This concept arises because delayed neutrons are emitted with an energy spectrum more thermalized relative to prompt neutrons. For low-enriched uranium fuel working on a thermal neutron spectrum, the difference between the average and effective delayed neutron fractions can reach 50 pcm.
See also
Prompt critical
Critical mass
Nuclear chain reaction
Dollar (reactivity)
References
External links
Hybrid nuclear reactors:delayed neutrons
Beta is not the delayed neutron (population) fraction
Nuclear technology
Neutron | Delayed neutron | [
"Physics"
] | 1,074 | [
"Nuclear technology",
"Nuclear physics"
] |
1,221,466 | https://en.wikipedia.org/wiki/Coulometry | In analytical electrochemistry, coulometry is the measure of charge (coulombs) transfer during an electrochemical redox reaction. It can be used for precision measurements of charge, but coulometry is mainly used for analytical applications to determine the amount of matter transformed.
There are two main categories of coulometric techniques. Amperostatic coulometry, or coulometric titration, keeps the current constant using an amperostat. Potentiostatic coulometry holds the electric potential constant during the reaction using a potentiostat.
History
The term coulometry was introduced in 1938 by the Hungarian chemists László Szebellédy and Zoltán Somogyi. Coulometry is the measure of charge, and is thus named after its unit, the coulomb.
Michael Faraday, known for his work in electricity and magnetism, made critical contributions to the field of electrochemistry. He discovered the laws of electrolysis, and in his recognition is the eponym of the Faraday constant. In the earliest developments of coulometry, Faraday proposed the first instrument to measure charge by utilizing the electrolysis of water.
Surface coulometry, the method of determining metallic layers or oxide films on metals, was first applied by the American chemist G. G. Grower in 1917 to check the quality of tinned copper wire.
Coulometric methods were used widely in the middle of the twentieth century, but voltammetric and non-electrochemical analytical methods later took over, decreasing the use of coulometry; one coulometric method still widely used today is the Karl Fischer method.
Potentiostatic coulometry
Potentiostatic coulometry utilizes a constant electric potential and is a technique most commonly referred to as "bulk electrolysis". Also called direct coulometry, the analyte is oxidized or reduced at the working electrode without intermediate reactions. The working electrode is kept at a constant potential and the current that flows through the circuit is measured. This constant potential is applied long enough to fully reduce or oxidize all of the electroactive species in a given solution. As the electroactive molecules are consumed, the current also decreases, approaching zero when the conversion is complete. The sample mass, molecular mass, number of electrons in the electrode reaction, and number of electrons passed during the experiment are all related by Faraday's laws. It follows that, if three of the values are known, then the fourth can be calculated.
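For instance, here is a minimal Python sketch of that Faraday's-law bookkeeping; the copper example and the numerical values are assumptions chosen for illustration, not values from the text.

```python
# Hedged sketch of the Faraday's-law relation described above:
# total charge Q passed in exhaustive electrolysis -> amount of analyte.
F = 96485.332  # Faraday constant, C/mol

def mass_from_charge(q_coulombs: float, molar_mass: float,
                     n_electrons: int) -> float:
    """Mass (g) of analyte converted by charge q in an n-electron process."""
    moles = q_coulombs / (n_electrons * F)
    return moles * molar_mass

# Example: 12.0 C passed while depositing Cu2+ (n = 2, M = 63.55 g/mol)
print(mass_from_charge(12.0, 63.55, 2))  # ~3.95e-3 g of copper
```

Given any three of mass, molar mass, electron count, and charge, the same relation can be rearranged to solve for the fourth, as the text notes.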
Bulk electrolysis is often used to unambiguously assign the number of electrons consumed in a reaction observed through voltammetry. It also has the added benefit of producing a solution of a species (oxidation state) which may not be accessible through chemical routes. This species can then be isolated or further characterized while in solution.
The rate of such reactions is not determined by the concentration of the solution, but rather the mass transfer of the electroactive species in the solution to the electrode surface. Rates will increase when the volume of the solution is decreased, the solution is stirred more rapidly, or the area of the working electrode is increased. Since mass transfer is so important the solution is stirred during a bulk electrolysis. However, this technique is generally not considered a hydrodynamic technique, since a laminar flow of solution against the electrode is neither the objective nor outcome of the stirring.
The extent to which a reaction goes to completion is also related to how much greater the applied potential is than the reduction potential of interest. In the case where multiple reduction potentials are of interest, it is often difficult to set an electrolysis potential a "safe" distance (such as 200 mV) past a redox event. The result is incomplete conversion of the substrate, or else conversion of some of the substrate to the more reduced form. This factor must be considered when analyzing the current passed and when attempting to do further analysis/isolation/experiments with the substrate solution.
An advantage to this kind of analysis over electrogravimetry is that it does not require that the product of the reaction be weighed. This is useful for reactions where the product does not deposit as a solid, such as the determination of the amount of arsenic in a sample from the electrolysis of arsenous acid (H3AsO3) to arsenic acid (H3AsO4).
Coulometric titration
Coulometric titrations under a constant-current system quantify the analyte by measuring the length of time for which the current passes through the sample. In indirect or secondary coulometry, the working electrode produces a titrant that reacts with the analyte. When the analyte is completely consumed, endpoint detection is employed, preferably with an instrumental method for higher precision. The total charge that has flowed through the sample can be determined from the magnitude of the current (in amperes) and the duration of the current (in seconds). Using Faraday's Law, total charge can be used to determine the moles of the unknown species in solution. When the volume of the solution is known, the molarity of the unknown species can be determined.
Advantages of Coulometric Titration
Coulometric titration has the advantage that constant current sources for the generation of titrants are relatively easy to make.
The electrochemical generation of a titrant is much more sensitive and can be much more accurately controlled than the mechanical addition of titrant using a burette drive. For example, a constant current flow of 10 μA for 100 ms is easily generated and corresponds to about 10 picomoles of titrant for a one-electron process.
The preparation of standard solutions and titer determination is no longer necessary.
Chemical substances that are unstable or difficult to handle because of their high volatility or reactivity in solution can also very easily be used as titrants. Examples are bromine, chlorine, Ti3+, Sn2+, Cr2+, and Karl Fischer reagents (iodine).
Coulometric titration can also be performed under inert atmosphere or be remotely controlled e.g. with radioactive substances.
Complete automation is simpler.
Applications
Karl Fischer reaction to determine water content
The Karl Fischer reaction uses a coulometric titration to determine the amount of water in a sample. It can determine concentrations of water on the order of milligrams per liter. It is used to find the amount of water in substances such as butter, sugar, cheese, paper, and petroleum.
The reaction involves converting solid iodine into hydrogen iodide in the presence of sulfur dioxide and water. Methanol is most often used as the solvent, but ethylene glycol and diethylene glycol also work. Pyridine is often used to prevent the buildup of sulfuric acid, although the use of imidazole and diethanolamine for this role is becoming more common. All reagents must be anhydrous for the analysis to be quantitative. The balanced chemical equation, using methanol and pyridine (C5H5N), is:
H2O + I2 + SO2 + CH3OH + 3 C5H5N → 2 [C5H5NH]I + [C5H5NH][CH3OSO3]
In this reaction, a single molecule of water reacts with a molecule of iodine. Since this technique is used to determine the water content of samples, atmospheric humidity could alter the results. Therefore, the system is usually isolated with drying tubes or placed in an inert gas container. In addition, the solvent will undoubtedly have some water in it so the solvent's water content must be measured to compensate for this inaccuracy.
To determine the amount of water in the sample, analysis must first be performed using either back or direct titration. In the direct method, just enough of the reagents will be added to completely use up all of the water. At this point in the titration, the current approaches zero. It is then possible to relate the amount of reagents used to the amount of water in the system via stoichiometry. The back-titration method is similar, but involves the addition of an excess of the reagent. This excess is then consumed by adding a known amount of a standard solution with known water content. The result reflects the water content of the sample and the standard solution. Since the amount of water in the standard solution is known, the difference reflects the water content of the sample.
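The arithmetic behind a direct coulometric Karl Fischer determination is simple Faraday's-law bookkeeping. The following Python sketch uses assumed values for the generation current and the time to the endpoint, and relies on the 1:1 water-to-iodine stoichiometry stated above.

```python
# Illustrative direct-titration arithmetic (input values are assumed).
F = 96485.332     # Faraday constant, C/mol
i = 5.0e-3        # constant generation current, A (assumed)
t = 120.0         # time to reach the endpoint, s (assumed)
M_WATER = 18.015  # molar mass of water, g/mol

q = i * t                      # total charge passed, C
mol_I2 = q / (2 * F)           # 2 I- -> I2 + 2 e-: two electrons per iodine
mass_water = mol_I2 * M_WATER  # one mole of I2 consumes one mole of H2O
print(f"{mass_water * 1e3:.3f} mg of water")  # ~0.056 mg
```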
Determination of film thickness
Coulometry can be used in the determination of the thickness of metallic coatings. This method is called surface coulometry and is performed by measuring the quantity of electricity needed to dissolve a well-defined area of the coating. The film thickness δ follows from the constant current i, the dissolution time t, the molecular weight M and density ρ of the metal, and the surface area A:
δ = itM / (nFρA),
where n is the number of electrons transferred per metal atom and F is the Faraday constant.
The electrodes for this reaction are often a platinum electrode and an electrode appropriate to the reaction. For a tin coating on a copper wire, a tin electrode is used, while a sodium chloride–zinc sulfate electrode would be used to determine the zinc film on a piece of steel. Special cells have been created to adhere to the surface of the metal to measure its thickness; these are essentially columns containing the internal electrodes, with magnets or weights to attach them to the surface. The results obtained by this coulometric method are similar to those achieved by other chemical and metallurgical techniques.
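A worked instance of the thickness formula above, in Python; the current, time, area, and the two-electron tin dissolution are assumed illustrative values rather than data from the text.

```python
# Sketch of the surface-coulometry thickness formula above:
# delta = i*t*M / (n*F*rho*A), with assumed values for a tin coating.
F = 96485.332  # Faraday constant, C/mol
i = 0.010      # constant current, A (assumed)
t = 30.0       # dissolution time, s (assumed)
M = 118.71     # molar mass of Sn, g/mol
n = 2          # electrons per Sn atom (Sn -> Sn2+, assumed)
rho = 7.287    # density of Sn, g/cm^3
A = 0.50       # dissolved area, cm^2 (assumed)

thickness_cm = (i * t * M) / (n * F * rho * A)
print(f"{thickness_cm * 1e4:.2f} micrometres")  # ~0.51 um
```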
Coulometry in Healthcare
Determination of Chloride Levels
A clinical chemistry application is measuring chloride levels in blood samples with a Cotlove chloridometer. The kidneys are responsible for the reabsorption of chloride to maintain electrolyte homeostasis. Measuring chloride levels supports monitoring of electrolyte stability; without it, diseases such as hyperchloremia and hypochloremia would be harder to detect, leaving body functions compromised.
Determination of Antioxidant Capacity in Human Blood
Coulometry can be used to measure the total antioxidant capacity (TAC) in blood and plasma through electrogenerated bromine. A method was developed using blood sampled from patients with chronic renal disease undergoing hemodialysis to research changes in TAC levels that could then be applied in clinics.
Coulometers
Electronic coulometer
The electronic coulometer is based on the application of the operational amplifier in the "integrator"-type circuit. The current passed through the resistor R1 produces a potential drop which is integrated by the operational amplifier on the capacitor plates; the higher the current, the larger the potential drop. The current need not be constant. In such a scheme, Vout is proportional to the charge passed. The sensitivity of the coulometer can be changed by choosing an appropriate value of R1.
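As a point of reference, the standard inverting op-amp integrator relation implied by this description is (a sketch, assuming the input resistor is R1 and the feedback capacitor is C):

$$V_{out}(t) = -\frac{1}{R_1 C} \int_0^t V_{in}(\tau)\, d\tau$$

With V_in proportional to the cell current, V_out is therefore proportional to the total charge passed, and the choice of R1 sets the sensitivity, as described above.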
Electrochemical coulometers
There are three common types of coulometers based on electrochemical processes:
Copper coulometer
Mercury coulometer
Hofmann voltameter
"Voltameter" is a synonym for "coulometer".
Coulometric Microtitrators
An acid–base microtitrator utilizes the electrolysis of water, in which protons or hydroxide ions are produced at the working electrode. The analyte reacts with the generated reagent, buffering the overall rate of reagent generation. A pH gradient forms from the diffusion of these reagents, and a pH sensor determines the endpoint.
Some advantages of using a microtitrator include the fast completion time of the titration due to the micro-scale. Additionally, a negligibly small amount of the sample is consumed, so titrations can be repeatedly analyzed with the same sample. On the other hand, microtitrators require calibration because diffusion is variable, and thus this method is not absolute.
References
Bibliography
External links
IUPAC Gold Book: coulometric detection method
Coulometry at the University of Akron
Electroanalytical methods | Coulometry | [
"Chemistry"
] | 2,396 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
1,221,884 | https://en.wikipedia.org/wiki/Dioptrics | Dioptrics is the branch of optics dealing with refraction, especially by lenses. In contrast, the branch dealing with mirrors is known as catoptrics. Telescopes that create their image with an objective that is a convex lens (refractors) are said to be "dioptric" telescopes.
An early study of dioptrics was conducted by Ptolemy in relation to the human eye as well as refraction in media such as water. The understanding of the principles of dioptrics was further expanded by Alhazen, considered the father of modern optics.
See also
Diopter/Dioptre (unit of measurement)
Dioptrice (work by Johannes Kepler)
Catoptrics (study of and optical systems utilizing reflection)
Catadioptrics (study of and optical systems utilizing reflection and refraction)
Optical telescope
List of telescope types
Image-forming optical system
References
Telescopes
Optics | Dioptrics | [
"Physics",
"Chemistry",
"Astronomy"
] | 181 | [
"Applied and interdisciplinary physics",
"Optics",
"Telescopes",
" molecular",
"Astronomical instruments",
"Atomic",
" and optical physics"
] |
1,221,919 | https://en.wikipedia.org/wiki/Sturm%27s%20theorem | In mathematics, the Sturm sequence of a univariate polynomial is a sequence of polynomials associated with and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of .
Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm, and the oldest arbitrary-precision root-finding algorithm for univariate polynomials.
For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first order theory of real numbers.
The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829.
The theorem
The Sturm chain or Sturm sequence of a univariate polynomial p with real coefficients is the sequence of polynomials p0, p1, ..., such that
p0 = p, p1 = p′, and pi+1 = −rem(pi−1, pi)
for i ≥ 1, where p′ is the derivative of p, and rem(pi−1, pi) is the remainder of the Euclidean division of pi−1 by pi. The length of the Sturm sequence is at most the degree of p.
The number of sign variations at ξ of the Sturm sequence of p is the number of sign changes (ignoring zeros) in the sequence of real numbers p0(ξ), p1(ξ), p2(ξ), ....
This number of sign variations is denoted here V(ξ).
Sturm's theorem states that, if p is a square-free polynomial, the number of distinct real roots of p in the half-open interval (a, b] is V(a) − V(b) (here, a and b are real numbers such that a < b).
The theorem extends to unbounded intervals by defining the sign at +∞ of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At −∞ the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree.
In the case of a non-square-free polynomial, if neither a nor b is a multiple root of p, then V(a) − V(b) is the number of distinct real roots of p in (a, b].
The proof of the theorem is as follows: when the value of x increases from a to b, it may pass through a zero of some pi (i > 0); when this occurs, the number of sign variations of (pi−1, pi, pi+1) does not change. When x passes through a root of p0 = p, the number of sign variations of (p0, p1) decreases from 1 to 0. These are the only values of x where some sign may change.
Example
Suppose we wish to find the number of roots in some range for the polynomial p(x) = x^4 + x^3 − x − 1. So
p0(x) = x^4 + x^3 − x − 1, p1(x) = 4x^3 + 3x^2 − 1.
The remainder of the Euclidean division of p0 by p1 is −(3/16)x^2 − (3/4)x − 15/16; multiplying it by −16/3 we obtain
p2(x) = x^2 + 4x + 5.
Next dividing p1 by p2 and multiplying the remainder by −1/32, we obtain
p3(x) = −x − 2.
Now dividing p2 by p3 and multiplying the remainder by −1, we obtain
p4(x) = −1.
As this is a constant, this finishes the computation of the Sturm sequence.
To find the number of real roots of p0 one has to evaluate the sequences of the signs of these polynomials at −∞ and ∞, which are respectively (+, −, +, +, −) and (+, +, +, −, −). Thus
V(−∞) − V(∞) = 3 − 1 = 2,
where V denotes the number of sign changes in the sequence, which shows that p has two real roots.
This can be verified by noting that p(x) can be factored as (x^2 − 1)(x^2 + x + 1), where the first factor has the roots −1 and 1, and the second factor has no real roots. This last assertion results from the quadratic formula, and also from Sturm's theorem, which gives the sign sequences (+, −, −) at −∞ and (+, +, −) at +∞.
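The construction above is mechanical enough to code directly. The following is a minimal Python sketch over exact rationals (the helper names are ad hoc, not a library API); it reproduces the count of two real roots from the example.

```python
# A minimal exact-arithmetic Sturm-chain root counter.
from fractions import Fraction

def strip(p):
    """Drop leading zero coefficients (lists are highest-degree-first)."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_rem(num, den):
    """Remainder of the Euclidean division of num by den (deg den >= 1)."""
    num, den = strip(list(num)), strip(list(den))
    while len(num) >= len(den) and any(num):
        c = num[0] / den[0]
        shift = len(num) - len(den)
        num = [a - c * b for a, b in zip(num, den + [Fraction(0)] * shift)]
        num = strip(num[1:])        # the leading term cancels exactly
    return num

def sturm_chain(coeffs):
    """p0 = p, p1 = p', p_{i+1} = -rem(p_{i-1}, p_i), as defined above."""
    p = [Fraction(c) for c in coeffs]
    deg = len(p) - 1
    chain = [p, [c * (deg - k) for k, c in enumerate(p[:-1])]]
    while len(chain[-1]) > 1:
        r = poly_rem(chain[-2], chain[-1])
        if not any(r):
            break                   # p was not square-free (gcd reached)
        chain.append([-c for c in r])
    return chain

def variations_at_infinity(chain, positive):
    """Sign variations of the chain at +inf (positive=True) or -inf."""
    signs = []
    for q in chain:
        s = 1 if q[0] > 0 else -1   # sign of the leading coefficient
        if not positive and (len(q) - 1) % 2 == 1:
            s = -s                  # odd degree flips the sign at -inf
        signs.append(s)
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

chain = sturm_chain([1, 1, 0, -1, -1])           # x^4 + x^3 - x - 1
print(variations_at_infinity(chain, False)
      - variations_at_infinity(chain, True))     # -> 2 real roots
```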
Generalization
Sturm sequences have been generalized in two directions. To define each polynomial in the sequence, Sturm used the negative of the remainder of the Euclidean division of the two preceding ones. The theorem remains true if one replaces the negative of the remainder by its product or quotient by a positive constant or the square of a polynomial. It is also useful (see below) to consider sequences where the second polynomial is not the derivative of the first one.
A generalized Sturm sequence is a finite sequence of polynomials with real coefficients
p0, p1, ..., pm
such that
the degrees are decreasing after the first one: deg(pi) < deg(pi−1) for i = 2, ..., m;
pm does not have any real root or has no sign changes near its real roots;
if pi(ξ) = 0 for 0 < i < m and ξ a real number, then pi−1(ξ) pi+1(ξ) < 0.
The last condition implies that two consecutive polynomials do not have any common real root. In particular the original Sturm sequence is a generalized Sturm sequence, if (and only if) the polynomial has no multiple real root (otherwise the first two polynomials of its Sturm sequence have a common root).
When computing the original Sturm sequence by Euclidean division, it may happen that one encounters a polynomial that has a factor that is never negative, such as x^2 or x^2 + 1. In this case, if one continues the computation with the polynomial replaced by its quotient by the nonnegative factor, one gets a generalized Sturm sequence, which may also be used for computing the number of real roots, since the proof of Sturm's theorem still applies (because of the third condition). This may sometimes simplify the computation, although it is generally difficult to find such nonnegative factors, except for even powers of x.
Use of pseudo-remainder sequences
In computer algebra, the polynomials that are considered have integer coefficients or may be transformed to have integer coefficients. The Sturm sequence of a polynomial with integer coefficients generally contains polynomials whose coefficients are not integers (see above example).
To avoid computation with rational numbers, a common method is to replace Euclidean division by pseudo-division for computing polynomial greatest common divisors. This amounts to replacing the remainder sequence of the Euclidean algorithm by a pseudo-remainder sequence, a pseudo-remainder sequence being a sequence p0, p1, ... of polynomials such that there are constants ai and bi such that bi pi+1 is the remainder of the Euclidean division of ai pi−1 by pi. (The different kinds of pseudo-remainder sequences are defined by the choice of ai and bi; typically, ai is chosen so as not to introduce denominators during Euclidean division, and bi is a common divisor of the coefficients of the resulting remainder; see Pseudo-remainder sequence for details.)
For example, the remainder sequence of the Euclidean algorithm is a pseudo-remainder sequence with ai = bi = 1 for every i, and the Sturm sequence of a polynomial is a pseudo-remainder sequence with ai = 1 and bi = −1 for every i.
Various pseudo-remainder sequences have been designed for computing greatest common divisors of polynomials with integer coefficients without introducing denominators (see Pseudo-remainder sequence). They can all be made generalized Sturm sequences by choosing the sign of the bi to be the opposite of the sign of the ai. This allows the use of Sturm's theorem with pseudo-remainder sequences.
Root isolation
For a polynomial with real coefficients, root isolation consists of finding, for each real root, an interval that contains this root, and no other roots.
This is useful for root finding, allowing the selection of the root to be found and providing a good starting point for fast numerical algorithms such as Newton's method; it is also useful for certifying the result: if Newton's method converges outside the interval, one may immediately deduce that it converges to the wrong root.
Root isolation is also useful for computing with algebraic numbers. For computing with algebraic numbers, a common method is to represent them as a pair of a polynomial to which the algebraic number is a root, and an isolation interval. For example, √2 may be unambiguously represented by (x^2 − 2, [1, 2]).
Sturm's theorem provides a way for isolating real roots that is less efficient (for polynomials with integer coefficients) than other methods involving Descartes' rule of signs. However, it remains useful in some circumstances, mainly for theoretical purposes, for example for algorithms of real algebraic geometry that involve infinitesimals.
For isolating the real roots, one starts from an interval (a, b] containing all the real roots, or the roots of interest (often, typically in physical problems, only positive roots are interesting), and one computes V(a) and V(b). For defining this starting interval, one may use bounds on the size of the roots (see Properties of polynomial roots). Then, one divides this interval in two, by choosing c in the middle of (a, b]. The computation of V(c) provides the number of real roots in (a, c] and (c, b], and one may repeat the same operation on each subinterval. When one encounters, during this process, an interval that does not contain any root, it may be suppressed from the list of intervals to consider. When one encounters an interval containing exactly one root, one may stop dividing it, as it is an isolation interval. The process stops eventually, when only isolating intervals remain.
This isolating process may be used with any method for computing the number of real roots in an interval. Theoretical complexity analysis and practical experiences show that methods based on Descartes' rule of signs are more efficient. It follows that, nowadays, Sturm sequences are rarely used for root isolation.
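For illustration, here is a minimal bisection-based isolation sketch building on the Sturm-chain functions from the previous snippet (again ad hoc names; it assumes `chain`, `Fraction`, and the helpers defined there are in scope).

```python
def variations_at(chain, x):
    """Sign variations of the chain evaluated at a finite point x."""
    vals = [sum(c * x ** (len(q) - 1 - k) for k, c in enumerate(q))
            for q in chain]
    signs = [v for v in vals if v != 0]   # zeros are ignored
    return sum(a * b < 0 for a, b in zip(signs, signs[1:]))

def isolate(chain, a, b):
    """Split (a, b] into subintervals each containing exactly one root."""
    n = variations_at(chain, a) - variations_at(chain, b)
    if n == 0:
        return []                         # no root here: discard
    if n == 1:
        return [(a, b)]                   # isolation interval found
    m = (a + b) / 2                       # bisect and recurse on both halves
    return isolate(chain, a, m) + isolate(chain, m, b)

print(isolate(chain, Fraction(-10), Fraction(10)))  # two intervals, one root each
```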
Application
Generalized Sturm sequences allow counting the roots of a polynomial where another polynomial is positive (or negative), without computing these roots explicitly. If one knows an isolating interval for a root of the first polynomial, this also allows finding the sign of the second polynomial at this particular root of the first polynomial, without computing a better approximation of the root.
Let p and q be two polynomials with real coefficients such that p and q have no common root and p has no multiple roots. In other words, p and p′q are coprime polynomials. This restriction does not really affect the generality of what follows, as GCD computations allow reducing the general case to this case, and the cost of the computation of a Sturm sequence is the same as that of a GCD.
Let W(a) denote the number of sign variations at a of a generalized Sturm sequence starting from p and p′q. If a < b are two real numbers, then W(a) − W(b) is the number of roots of p in the interval (a, b] such that q(x) > 0 minus the number of roots in the same interval such that q(x) < 0. Combined with the total number of roots of p in the same interval given by Sturm's theorem, this gives the number of roots of p such that q(x) > 0 and the number of roots of p such that q(x) < 0.
See also
Routh–Hurwitz theorem
Hurwitz's theorem (complex analysis)
Descartes' rule of signs
Rouché's theorem
Properties of polynomial roots
Gauss–Lucas theorem
Turán's inequalities
References
Baumol, William. Economic Dynamics, chapter 12, Section 3, "Qualitative information on real roots"
D.G. Hook and P. R. McAree, "Using Sturm Sequences To Bracket Real Roots of Polynomial Equations" in Graphic Gems I (A. Glassner ed.), Academic Press, pp. 416–422, 1990.
Theorems in real analysis
Articles containing proofs
Theorems about polynomials
Computer algebra
Real algebraic geometry | Sturm's theorem | [
"Mathematics",
"Technology"
] | 2,235 | [
"Theorems in mathematical analysis",
"Theorems in algebra",
"Theorems in real analysis",
"Computer algebra",
"Computational mathematics",
"Computer science",
"Theorems about polynomials",
"Articles containing proofs",
"Algebra"
] |
15,552,594 | https://en.wikipedia.org/wiki/Intraperitoneal%20injection | Intraperitoneal injection or IP injection is the injection of a substance into the peritoneum (body cavity). It is more often applied to non-human animals than to humans. In general, it is preferred when large amounts of blood replacement fluids are needed or when low blood pressure or other problems prevent the use of a suitable blood vessel for intravenous injection.
In humans, the method is widely used to administer chemotherapy drugs to treat some cancers, particularly ovarian cancer. Although controversial, intraperitoneal use in ovarian cancer has been recommended as a standard of care. Fluids are injected intraperitoneally in infants; the route is also used for peritoneal dialysis.
Intraperitoneal injections are a way to administer therapeutics and drugs through a peritoneal route (body cavity). They are one of several ways drugs can be administered through injection, with uses in research involving animals, drug administration to treat ovarian cancers, and more. Understanding when and in what applications intraperitoneal injections can be utilized is beneficial to advance current drug delivery methods and provide avenues for further research. The benefit of administering drugs intraperitoneally is the ability of the peritoneal cavity to absorb large amounts of a drug quickly. Disadvantages of using intraperitoneal injections are their large variability in effectiveness and the risk of misinjection. Intraperitoneal injections can be similar to oral administration in that hepatic metabolism can occur with both.
History
There are few accounts of the use of intraperitoneal injections prior to 1970. One of the earliest recorded uses of IP injections involved the insemination of a guinea-pig in 1957. The study however did not find an increase in conception rate when compared to mating. In that same year, a study injected egg whites intraperitoneally into rats to study changes in "droplet" fractions in kidney cells. The study showed that the number of small droplets decreased after administration of the egg whites, indicating that they have been changed to large droplets. In 1964, a study delivered chemical agents such as acetic acid, bradykinin, and kaolin to mice intraperitoneally in order to study a "squirming" response. In 1967, the production of amnesia was studied through an injection of physostigmine. In 1968, melatonin was delivered to rats intraperitoneally in order to study how brain serotonin would be affected in the midbrain. In 1969, errors depending on a variety of techniques of administering IP injections were analyzed, and a 12% error in placement was found when using a one-man procedure versus a 1.2% error when using a two-man procedure.
A good example of how intraperitoneal injections work is depicted through "The distribution of salicylate in mouse tissues after intraperitoneal injection" because it includes information on how a drug can travel to the blood, liver, brain, kidney, heart, spleen, diaphragm, and skeletal muscle once it has been injected intraperitoneally.
These early uses of Intraperitoneal injections provide good examples of how the delivery method can be used, and provides a base for future studies on how to properly inject mice for research.
Use in humans
Currently, there are a handful of drugs that are delivered through intraperitoneal injection for chemotherapy. They are mitomycin C, cisplatin, carboplatin, oxaliplatin, irinotecan, 5-fluorouracil, gemcitabine, paclitaxel, docetaxel, doxorubicin, premetrexed, and melphalan. There needs to be more research done to determine appropriate dosing and combinations of these drugs to advance intraperitoneal drug delivery.
There are few examples of the use of intraperitoneal injections in humans cited in literature because it is mainly used to study the effects of drugs in mice. The few examples that do exist pertain to the treatment of pancreatic/ovarian cancers and injections of other drugs in clinical trials. One study utilized IP injections to study pain in the abdomen after a hysterectomy when administering anesthetic continuously versus patient-controlled administration. The results showed that ketobemidone consumption was significantly lower when patients controlled the anesthetic through the IP route. This allowed the patients to be discharged earlier than when anesthesia was administered continuously. These findings could be advanced by studying how the route of injection affects the organs in the peritoneal cavity.
In another Phase I clinical trial, patients with ovarian cancer were injected intraperitoneally with dl1520 in order to study the effects of a replication-competent/-selective virus. The effects of this study were the onset of flu-like symptoms, emesis, and abdominal pain. The study overall defines appropriate doses and toxicity levels of dl1520 when injected intraperitoneally.
One study attempted to diagnose hepatic hydrothorax with the use of injecting Sonazoid intraperitoneally. Sonazoid was utilized to aid with contrast-enhanced ultrasonography by enhancing the peritoneal and pleural cavities. This study demonstrates how intraperitoneal injections can be used to help diagnose diseases by providing direct access to the peritoneal cavity and affecting the organs in the cavity.
In a case of a ruptured hepatocellular carcinoma, it was reported that the patient was treated successfully through the use of an intraperitoneal injection of OK-432, which is an immunomodulatory agent. The patient was a 51-year-old male who was hospitalized. The delivery of OK-432 occurred a total of four times in a span of one week. The results of this IP injection were the disappearance of the ascites associated with the rupture. This case is a good example of how IP injections can be used to deliver a drug that can help to treat or cure a medical diagnosis over the use of other routes of delivery. The results set a precedent of how other drugs may be delivered in this way to treat other similar medical issues after further research.
In 2018, a patient with stage IV ovarian cancer and peritoneal metastases was injected intraperitoneally with 12 g of mixed cannabinoid before later being hospitalized. The symptoms included impairment of cognitive and psychomotor abilities. Because of the injection, the patient was expected to have some level of THC in the blood from peritoneal absorption. This case raises the question of how THC is absorbed in the peritoneal cavity. It also shows how easily substances are absorbed through the peritoneal cavity after an IP injection.
Overall, this section provides a few examples of the effects and uses of intraperitoneal injections in human patients. There are a variety of uses and possibilities for many more in the future with further research and approval.
Use in laboratory animals
Intraperitoneal injections are the preferred method of administration in many experimental studies due to the quick onset of effects post injection. This allows researchers to observe the effects of a drug in a shorter period of time, and allows them to study the effects of drugs on multiple organs that are in the peritoneal cavity at once. In order to effectively administer drugs through IP injections, the stomach of the animal is exposed, and the injection is given in the lower abdomen. The most efficient method to inject small animals is a two-person method where one holds the rodent and the other person injects the rodent at about 10 to 20 degrees in mice and 20 to 45 degrees in rats. The holder restrains the arms of the animal and tilts the head lower than the abdomen to create optimal space in the peritoneal cavity.
There has been some debate on whether intraperitoneal injections are the best route of administration for experimental animal studies. It was concluded in a review article that utilizing IP injections to administer drugs to laboratory rodents in experimental studies is acceptable when being applied to proof-of-concept studies.
A study was conducted to determine the best route of administration to transplant mesenchymal stem cells for colitis. This study compared intraperitoneal injections, intravenous injections, and anal injections. It was concluded that the intraperitoneal injection had the highest survival rate of 87.5%. This study shows how intraperitoneal injections can be more effective and beneficial than other traditional routes of administration.
One article reviews the injection of sodium pentobarbital to euthanize rodents intraperitoneally. Killing the rodent through an intraperitoneal route was originally recommended over other routes such as inhalants because it was thought to be more efficient and ethical. The article reviews whether IP is the best option for euthanasia based on evidence associated with welfare implications. It was concluded that there is evidence that IP may not be the best method of euthanasia due to the possibility of misinjection.
Another example of how intraperitoneal injections are used in studies involving rodents is the use of IP for micro-CT contrast enhanced detection of liver tumors. Contrast agents were administered intraperitoneally instead of intravenously to avoid errors and challenges. It was determined that IP injections are a good option for Fenestra to quantify liver tumors in mice.
An example of how intraperitoneal injections can be optimized is depicted in a study where IP injections are used to deliver anesthesia to mice. This study goes over the dosages, adverse effects, and more of using intraperitoneal injections of anesthesia.
An example of when intraperitoneal injections are not ideal is given in a study where the best route of administration was determined for cancer biotherapy. It was concluded that IP administration should not be used over intravenous therapy due to high radiation absorption in the intestines. This shows an important limitation to the use of IP therapy.
The provided examples show a variety of uses for intraperitoneal injections in animals for in vivo studies. Some of the examples depict situations where IP injections are not ideal, while others prove the advantageous uses of this delivery method. Overall, many studies utilize IP injections to deliver therapeutics to lab animals due to the efficiency of the administration route.
References
Medical treatments
Routes of administration
Dosage forms
Digestive system procedures | Intraperitoneal injection | [
"Chemistry"
] | 2,212 | [
"Pharmacology",
"Routes of administration"
] |
15,553,696 | https://en.wikipedia.org/wiki/Volta%20potential | The Volta potential (also called Volta potential difference, contact potential difference, outer potential difference, Δψ, or "delta psi") in electrochemistry, is the electrostatic potential difference between two metals (or one metal and one electrolyte) that are in contact and are in thermodynamic equilibrium. Specifically, it is the potential difference between a point close to the surface of the first metal and a point close to the surface of the second metal (or electrolyte).
The Volta potential is named after Alessandro Volta.
Volta potential between two metals
When two metals are electrically isolated from each other, an arbitrary potential difference may exist between them. However, when two different neutral metal surfaces are brought into electrical contact (even indirectly, say, through a long electro-conductive wire), electrons will flow from the metal with the higher Fermi level to the metal with the lower Fermi level until the Fermi levels in the two phases are equal.
Once this has occurred, the metals are in thermodynamic equilibrium with each other (the actual number of electrons that passes between the two phases is usually small).
Just because the Fermi levels are equal, however, does not mean that the electric potentials are equal. The electric potential outside each material is controlled by its work function, and so dissimilar metals can show an electric potential difference even at equilibrium.
The Volta potential is not an intrinsic property of the two bulk metals under consideration, but rather is determined by work function differences between the metals' surfaces. Just like the work function, the Volta potential depends sensitively on surface state, contamination, and so on.
Measurement of Volta potential (Kelvin probe)
The Volta potential can be significant (of order 1 volt) but it cannot be measured directly by an ordinary voltmeter.
A voltmeter does not measure vacuum electrostatic potentials, but instead the difference in Fermi level between the two materials, a difference that is exactly zero at equilibrium.
The Volta potential, however, corresponds to a real electric field in the spaces between and around the two metal objects, a field generated by the accumulation of charges at their surfaces. The total charge $Q$ over each object's surface depends on the capacitance $C$ between the two objects, by the relation $Q = CV$, where $V$ is the Volta potential. It follows therefore that the value of the potential can be measured by varying the capacitance between the materials by a known amount (e.g., by moving the objects further from each other) and measuring the displaced charge that flows through the wire that connects them.
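A minimal sketch of this inference with hypothetical numbers (the capacitances and the displaced charge below are illustrative, not measured values):

```python
# Two connected metals form a capacitor; changing the capacitance by a
# known amount displaces charge through the connecting wire.
C1 = 10e-12     # capacitance before moving the objects, in farads (assumed)
C2 = 4e-12      # capacitance after moving them apart, in farads (assumed)
dQ = 3.0e-12    # charge displaced through the wire, in coulombs (assumed)

# Q = C * V on each object, so for a fixed Volta potential V the
# displaced charge is dQ = (C1 - C2) * V.
V = dQ / (C1 - C2)
print(f"Volta potential: {V:.2f} V")    # 0.50 V for these numbers
```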
The Volta potential difference between a metal and an electrolyte can be measured in a similar fashion.
The Volta potential of a metal surface can be mapped on very small scales by use of a Kelvin probe force microscope, based on atomic force microscopy. Over larger areas on the order of millimeters to centimeters, a scanning Kelvin probe (SKP), which uses a wire probe of tens to hundreds of microns in size, can be used. In either case the capacitance change is not known—instead, a compensating DC voltage is added to cancel the Volta potential so that no current is induced by the change in capacitance. This compensating voltage is the negative of the Volta potential.
See also
Electrode potential
Absolute electrode potential
Electric potential
Galvani potential
Potential difference (voltage)
Band bending
Volt
Volta effect
References
Electrochemical concepts
Electrochemical potentials
Alessandro Volta | Volta potential | [
"Chemistry"
] | 707 | [
"Electrochemistry",
"Electrochemical concepts",
"Electrochemical potentials"
] |
15,556,352 | https://en.wikipedia.org/wiki/Gravity%20separation | Gravity separation is an industrial method of separating two components, either a suspension, or dry granular mixture where separating the components with gravity is sufficiently practical: i.e. the components of the mixture have different specific weight. Every gravitational method uses gravity as the primary force for separation. One type of gravity separator lifts the material by vacuum over an inclined vibrating screen covered deck.
This results in the material being suspended in air while the heavier impurities are left behind on the screen and are discharged from the stone outlet. Gravity separation is used in a wide variety of industries, and can be most simply differentiated by the characteristics of the mixture to be separated: principally 'wet' (a suspension) versus 'dry' (a mixture of granular product). Often other methods are applied to make the separation faster and more efficient, such as flocculation, coagulation and suction. The most notable advantages of the gravitational methods are their cost effectiveness and in some cases excellent reduction. Gravity separation is an attractive unit operation as it generally has low capital and operating costs, uses few if any chemicals that might cause environmental concerns, and the recent development of new equipment enhances the range of separations possible.
Examples of application
Agriculture
Gravity separation tables are used for the removal of impurities, admixture, insect damage and immature kernels from the following examples: wheat, barley, oilseed rape, peas, beans, cocoa beans, linseed. They can be used to separate and standardize coffee beans, cocoa beans, peanuts, corn, peas, rice, wheat, sesame and other food grains.
The gravity separator separates products of same size but with difference in specific weight. It has a vibrating rectangular deck, which makes it easy for the product to travel a longer distance, ensuring improved quality of the end product. The pressurized air in the deck enables the material to split according to its specific weight. As a result, the heavier particles travel to the higher level while the lighter particles travel to the lower level of the deck. It comes with easily adjustable air fans to control the volume of air distribution at different areas of the vibrating deck to meet the air supply needs of the deck. The table inclination, speed of eccentric motion and the feed rate can be precisely adjusted to achieve smooth operation of the machine.
Preferential flotation
Heavy liquids such as tetrabromoethane can be used to separate ores from supporting rocks by preferential flotation. The rocks are crushed, and while sand, limestone, dolomite, and other types of rock material will float on TBE, ores such as sphalerite, galena and pyrite will sink.
Clarification/thickening
Clarification is a name for the method of separating fluid from solid particles. Often clarification is used along with flocculation to make the solid particles sink faster to the bottom of the clarification pool while fluid is obtained from the surface which is free of solid particles.
Thickening is the same as clarification except reverse. Solids that sink to the bottom are obtained and fluid is rejected from the surface.
The difference of these methods could be demonstrated with the methods used in waste water processing: in the clarification phase, sludge sinks to the bottom of the pool and clear water flows over the clear water grooves and continues its journey. The obtained sludge is then pumped into the thickeners, where sludge thickens farther and is then obtained to be pumped into digestion to be prepared into fertilizer.
Sinking chamber
When cleaning gases, a commonly used and generally effective method for removing large particles is to blow the gas into a large chamber, where its velocity decreases and the solid particles start sinking to the bottom. This method is used mostly because of its low cost.
Types of gravity separators
Conventional jigs
Pinched sluices
Reichert Cones
Spirals
Centrifugal jigs
Shaking tables
References
Separation processes | Gravity separation | [
"Chemistry"
] | 808 | [
"nan",
"Separation processes"
] |
15,560,529 | https://en.wikipedia.org/wiki/Relative%20biological%20effectiveness | In radiobiology, the relative biological effectiveness (often abbreviated as RBE) is the ratio of biological effectiveness of one type of ionizing radiation relative to another, given the same amount of absorbed energy. The RBE is an empirical value that varies depending on the type of ionizing radiation, the energies involved, the biological effects being considered such as cell death, and the oxygen tension of the tissues or so-called oxygen effect.
Application
The absorbed dose can be a poor indicator of the biological effect of radiation, as the biological effect can depend on many other factors, including the type of radiation, energy, and type of tissue. The relative biological effectiveness can help give a better measure of the biological effect of radiation. The relative biological effectiveness for radiation of type R on a tissue is defined as the ratio
$\mathrm{RBE} = \frac{D_X}{D_R},$
where $D_X$ is a reference absorbed dose of radiation of a standard type X, and $D_R$ is the absorbed dose of radiation of type R that causes the same amount of biological damage. Both doses are quantified by the amount of energy absorbed in the cells.
Different types of radiation have different biological effectiveness mainly because they transfer their energy to the tissue in different ways. Photons and beta particles have a low linear energy transfer (LET) coefficient, meaning that they ionize atoms in the tissue that are spaced by several hundred nanometers (several tenths of a micrometer) apart, along their path. In contrast, the much more massive alpha particles and neutrons leave a denser trail of ionized atoms in their wake, spaced about one tenth of a nanometer apart (i.e., less than one-thousandth of the typical distance between ionizations for photons and beta particles).
RBEs can be used for either cancer/hereditary risks (stochastic) or for harmful tissue reactions (deterministic) effects. Tissues have different RBEs depending on the type of effect. For high LET radiation (i.e., alphas and neutrons), the RBEs for deterministic effects tend to be lower than those for stochastic effects.
The concept of RBE is relevant in medicine, such as in radiology and radiotherapy, and to the evaluation of risks and consequences of radioactive contamination in various contexts, such as nuclear power plant operation, nuclear fuel disposal and reprocessing, nuclear weapons, uranium mining, and ionizing radiation safety.
Relation to radiation weighting factors (WR)
For the purposes of computing the equivalent dose to an organ or tissue, the International Commission on Radiological Protection (ICRP) has defined a standard set of radiation weighting factors (WR), formerly termed the quality factor (Q). The radiation weighting factors convert absorbed dose (measured in SI units of grays or non-SI rads) into formal biological equivalent dose for radiation exposure (measured in units of sieverts or rem). However, ICRP states:
"The quantities equivalent dose and effective dose should not be used to quantify higher radiation doses or to make decisions on the need for any treatment related to tissue reactions [i.e., deterministic effects]. For such purposes, doses should be evaluated in terms of absorbed dose (in gray, Gy), and where high-LET radiations (e.g., neutrons or alpha particles) are involved, an absorbed dose, weighted with an appropriate RBE, should be used"
Radiation weighting factors are largely based on the RBE of radiation for stochastic health risks. However, for simplicity, the radiation weighting factors are not dependent on the type of tissue, and the values are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, with respect to external (external to the cell) sources. Radiation weighting factors have not been developed for internal sources of heavy ions, such as a recoil nucleus.
The ICRP 2007 standard values for relative effectiveness are as follows: 1 for photons, electrons, and muons; 2 for protons and charged pions; 20 for alpha particles, fission fragments, and heavy ions; and, for neutrons, a continuous function of neutron energy with values ranging from about 2.5 to 20. The higher the radiation weighting factor for a type of radiation, the more damaging it is, and this is incorporated into the calculation to convert from gray to sievert units.
Radiation weighting factors that go from physical energy to biological effect must not be confused with tissue weighting factors. The tissue weighting factors are used to convert an equivalent dose to a given tissue in the body, to an effective dose, a number that provides an estimation of total danger to the whole organism, as a result of the radiation dose to part of the body.
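A small sketch of this gray-to-sievert conversion in Python (the dictionary simply restates the ICRP 2007 factors quoted above; neutrons are omitted because their factor is a continuous function of energy):

```python
# ICRP 2007 radiation weighting factors (neutrons omitted: energy-dependent).
W_R = {"photon": 1, "electron": 1, "muon": 1,
       "proton": 2, "charged pion": 2,
       "alpha": 20, "fission fragment": 20, "heavy ion": 20}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    """Convert an absorbed dose in grays to an equivalent dose in sieverts."""
    return absorbed_dose_gy * W_R[radiation]

print(equivalent_dose_sv(0.01, "photon"))  # 0.01 Sv
print(equivalent_dose_sv(0.01, "alpha"))   # 0.2 Sv for the same absorbed dose
```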
Experimental methods
Typically the evaluation of relative biological effectiveness is done on various types of living cells grown in culture medium, including prokaryotic cells such as bacteria, simple eukaryotic cells such as single celled plants, and advanced eukaryotic cells derived from organisms such as rats. By irradiating batches of cells with different doses and types of radiation, a relationship between dose and the fraction of cells that die can be found, and then used to find the doses corresponding to some common survival rate. The ratio of these doses is the RBE of R. Instead of death, the endpoint might be the fraction of cells that become unable to undergo mitotic division (or, for bacteria, binary fission), thus being effectively sterilized — even if they can still carry out other cellular functions.
The types R of ionizing radiation most considered in RBE evaluation are X-rays and gamma radiation (both consisting of photons), alpha radiations (helium-4 nuclei), beta radiation (electrons and positrons), neutron radiation, and heavy nuclei, including the fragments of nuclear fission. For some kinds of radiation, the RBE is strongly dependent on the energy of the individual particles.
Dependence on tissue type
Early on it was found that X-rays, gamma rays, and beta radiation were essentially equivalent for all cell types. Therefore, the standard radiation type X is generally an X-ray beam with 250 keV photons or cobalt-60 gamma rays. As a result, the relative biological effectiveness of beta and photon radiation is essentially 1.
For other radiation types, the RBE is not a well-defined physical quantity, since it varies somewhat with the type of tissue and with the precise place of absorption within the cell. Thus, for example, the RBE for alpha radiation is 2–3 when measured on bacteria, 4–6 for simple eukaryotic cells, and 6–8 for higher eukaryotic cells. According to one source it may be much higher (6500 with X rays as the reference) on oocytes. The RBE of neutrons is 4–6 for bacteria, 8–12 for simple eukaryotic cells, and 12–16 for higher eukaryotic cells.
Dependence on source location
In the early experiments, the sources of radiation were all external to the cells that were irradiated. However, since alpha particles cannot traverse the outermost dead layer of human skin, they can do significant damage only if they come from the decay of atoms inside the body. Since the range of an alpha particle is typically about the diameter of a single eukaryotic cell, the precise location of the emitting atom in the tissue cells becomes significant.
For this reason, it has been suggested that the health impact of contamination by alpha emitters might have been substantially underestimated. Measurements of RBE with external sources also neglect the ionization caused by the recoil of the parent nucleus due to the alpha decay. While the recoil of the parent nucleus of the decaying atom typically carries only about 2% of the energy of the alpha particle that is emitted by the decaying atom, its range is extremely short (about 2–3 angstroms), due to its high electric charge and high mass. The parent nucleus is required to recoil, upon emission of an alpha particle, with a discrete kinetic energy due to conservation of momentum. Thus, all of the ionization energy from the recoil nucleus is deposited in an extremely small volume near its original location, typically in the cell nucleus on the chromosomes, which have an affinity for heavy metals. The bulk of studies, using sources that are external to the cell, have yielded RBEs between 10 and 20. Since most of the ionization damage from the travel of the alpha particle is deposited in the cytoplasm, whereas that from the travel of the recoil nucleus is deposited on the DNA itself, it is likely that greater damage is caused by the recoil nucleus than by the alpha particle itself.
History
In 1931, Failla and Henshaw reported on determination of the relative biological effectiveness (RBE) of x rays and γ rays. This appears to be the first use of the term ‘RBE’. The authors noted that RBE was dependent on the experimental system being studied. Somewhat later, it was pointed out by Zirkle et al. (1952) that the biological effectiveness depends on the spatial distribution of the energy imparted and the density of ionisations per unit path length of the ionising particles. Zirkle et al. coined the term ‘linear energy transfer (LET)’ to be used in radiobiology for the stopping power, i.e. the energy loss per unit path length of a charged particle. The concept was introduced in the 1950s, at a time when the deployment of nuclear weapons and nuclear reactors spurred research on the biological effects of artificial radioactivity. It had been noticed that those effects depended both on the type and energy spectrum of the radiation, and on the kind of living tissue. The first systematic experiments to determine the RBE were conducted in that decade.
See also
Background radiation
Linear energy transfer (LET)
Theory of dual radiation action
References
External links
Relative Biological Effectiveness in Ion Beam Therapy
Radiation health effects | Relative biological effectiveness | [
"Chemistry",
"Materials_science"
] | 1,979 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
18,473,510 | https://en.wikipedia.org/wiki/Ethylene-responsive%20element%20binding%20protein | Ethylene-responsive element binding protein (EREBP) is a homeobox gene from Arabidopsis thaliana and other plants which encodes a transcription factor. EREBP is responsible in part for mediating the response in plants to the plant hormone ethylene.
References
External links
Transcription factors | Ethylene-responsive element binding protein | [
"Chemistry",
"Biology"
] | 65 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
18,475,546 | https://en.wikipedia.org/wiki/Multivariate%20adaptive%20regression%20spline | In statistics, multivariate adaptive regression splines (MARS) is a form of regression analysis introduced by Jerome H. Friedman in 1991. It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models nonlinearities and interactions between variables.
The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".
The basics
This section introduces MARS using a few examples. We start with a set of data: a matrix of input variables x, and a vector of the observed responses y, with a response for each row in x. For example, the data could be:
Here there is only one independent variable, so the x matrix is just a single column. Given these measurements, we would like to build a model which predicts the expected y for a given x.
A linear model for the above data is
$\widehat{y} = -37 + 5.1x.$
The hat on the $\widehat{y}$ indicates that $\widehat{y}$ is estimated from the data. The figure on the right shows a plot of this function:
a line giving the predicted $\widehat{y}$ versus x, with the original values of y shown as red dots.
The data at the extremes of x indicates that the relationship between y and x may be non-linear (look at the red dots relative to the regression line at low and high values of x). We thus turn to MARS to automatically build a model taking into account non-linearities. MARS software constructs a model from the given x and y as follows:
$\widehat{y} = 25 + 6.1\max(0, x - 13) - 3.1\max(0, 13 - x).$
The figure on the right shows a plot of this function: the predicted $\widehat{y}$ versus x, with the original values of y once again shown as red dots. The predicted response is now a better fit to the original y values.
MARS has automatically produced a kink in the predicted y to take into account non-linearity. The kink is produced by hinge functions. The hinge functions are the expressions starting with $\max$ (where $\max(a, b)$ is $a$ if $a > b$, else $b$). Hinge functions are described in more detail below.
In this simple example, we can easily see from the plot that y has a non-linear relationship with x (and might perhaps guess that y varies with the square of x). However, in general there will be multiple independent variables, and the relationship between y and these variables will be unclear and not easily visible by plotting. We can use MARS to discover that non-linear relationship.
An example MARS expression with multiple variables is
This expression models air pollution (the ozone level) as a function of the temperature and a few other variables. Note that the last term in the formula (on the last line) incorporates an interaction between the wind and vis (visibility) variables.
The figure on the right plots the predicted ozone as wind and vis vary, with the other variables fixed at their median values. The figure shows that wind does not affect the ozone level unless visibility is low. We see that MARS can build quite flexible regression surfaces by combining hinge functions.
To obtain the above expression, the MARS model building procedure automatically selects which variables to use (some variables are important, others not), the positions of the kinks in the hinge functions, and how the hinge functions are combined.
The MARS model
MARS builds models of the form
$\widehat{f}(x) = \sum_{i=1}^{k} c_i B_i(x).$
The model is a weighted sum of basis functions $B_i(x)$. Each $c_i$ is a constant coefficient. For example, each line in the formula for ozone above is one basis function multiplied by its coefficient.
Each basis function takes one of the following three forms:
1) a constant 1. There is just one such term, the intercept.
In the ozone formula above, the intercept term is 5.2.
2) a hinge function. A hinge function has the form $\max(0, x - \mathrm{const})$ or $\max(0, \mathrm{const} - x)$. MARS automatically selects variables and values of those variables for knots of the hinge functions. Examples of such basis functions can be seen in the middle three lines of the ozone formula.
3) a product of two or more hinge functions.
These basis functions can model interaction between two or more variables.
An example is the last line of the ozone formula.
Hinge functions
A key part of MARS models are hinge functions taking the form
$\max(0, x - c)$
or
$\max(0, c - x),$
where $c$ is a constant, called the knot.
The figure on the right shows a mirrored pair of hinge functions with a knot at 3.1.
A hinge function is zero for part of its range, so can be used to partition the data into disjoint regions, each of which can be treated independently. Thus for example a mirrored pair of hinge functions in the expression
$25 + 6.1\max(0, x - 13) - 3.1\max(0, 13 - x)$
creates the piecewise linear graph shown for the simple MARS model in the previous section.
One might assume that only piecewise linear functions can be formed from hinge functions, but hinge functions can be multiplied together to form non-linear functions.
Hinge functions are also called ramp, hockey stick, or rectifier functions. Instead of the $\max$ notation used in this article, hinge functions are often represented by $[\pm(x_i - c)]_+$, where $[\,\cdot\,]_+$ means take the positive part.
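A short sketch of a mirrored pair in Python (NumPy assumed), with the knot at 3.1 as in the figure described above:

```python
import numpy as np

# The mirrored pair of hinge functions with a knot at c = 3.1.
def hinge_right(x, c=3.1):
    return np.maximum(0.0, x - c)    # max(0, x - c): nonzero right of the knot

def hinge_left(x, c=3.1):
    return np.maximum(0.0, c - x)    # max(0, c - x): nonzero left of the knot

x = np.linspace(0.0, 6.0, 7)
print(hinge_right(x))   # [0.  0.  0.  0.  0.9 1.9 2.9]
print(hinge_left(x))    # [3.1 2.1 1.1 0.1 0.  0.  0. ]
```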
The model building process
MARS builds a model in two phases:
the forward and the backward pass.
This two-stage approach is the same as that used by
recursive partitioning trees.
The forward pass
MARS starts with a model which consists of just the intercept term
(which is the mean of the response values).
MARS then repeatedly adds basis function in pairs to the model. At each step it finds the pair of basis functions that gives the maximum reduction in sum-of-squares residual error (it is a greedy algorithm). The two basis functions in the pair are identical except that a different side of a mirrored hinge function is used for each function. Each new basis function consists of a term already in the model (which could perhaps be the intercept term) multiplied by a new hinge function. A hinge function is defined by a variable and a knot, so to add a new basis function, MARS must search over all combinations of the following:
1) existing terms (called parent terms in this context)
2) all variables (to select one for the new basis function)
3) all values of each variable (for the knot of the new hinge function).
To calculate the coefficient of each term, MARS applies a linear regression over the terms.
This process of adding terms continues until the change in residual error is too small to continue or until the maximum number of terms is reached. The maximum number of terms is specified by the user before model building starts.
The search at each step is usually done in a brute-force fashion, but a key aspect of MARS is that because of the nature of hinge functions, the search can be done quickly using a fast least-squares update technique. Brute-force search can be sped up by using a heuristic that reduces the number of parent terms considered at each step ("Fast MARS").
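A toy, brute-force sketch of a single forward-pass step (NumPy assumed; the data are synthetic, and a real implementation would use the fast least-squares updates and stopping rules described above rather than refitting from scratch):

```python
import numpy as np

def forward_step(X, y, basis):
    """Add the mirrored hinge pair that most reduces the residual error.

    basis is a list of functions, each mapping X to one column of the
    basis matrix; the search is over (parent term, variable, knot).
    """
    best = None
    for parent in basis:
        for v in range(X.shape[1]):
            for t in np.unique(X[:, v]):
                pair = [
                    lambda X, v=v, t=t, f=parent: f(X) * np.maximum(0, X[:, v] - t),
                    lambda X, v=v, t=t, f=parent: f(X) * np.maximum(0, t - X[:, v]),
                ]
                B = np.column_stack([b(X) for b in basis + pair])
                coef, *_ = np.linalg.lstsq(B, y, rcond=None)
                rss = np.sum((y - B @ coef) ** 2)
                if best is None or rss < best[0]:
                    best = (rss, pair)
    return basis + best[1]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = (X[:, 0] - 5) ** 2 + rng.normal(0, 0.5, 50)     # a non-linear target
basis = [lambda X: np.ones(len(X))]                 # start: intercept only
basis = forward_step(X, y, basis)                   # now intercept + one pair
```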
The backward pass
The forward pass usually overfits the model. To build a model with better generalization ability, the backward pass prunes the model, deleting the least effective term at each step until it finds the best submodel. Model subsets are compared using the Generalized cross validation (GCV) criterion described below.
The backward pass has an advantage over the forward pass: at any step it can choose any term to delete, whereas the forward pass at each step can only see the next pair of terms.
The forward pass adds terms in pairs, but the backward pass typically discards one side of the pair and so terms are often not seen in pairs in the final model. A paired hinge can be seen in the equation for in the first MARS example above; there are no complete pairs retained in the ozone example.
Generalized cross validation
The backward pass compares the performance of different models using Generalized Cross-Validation (GCV), a minor variant on the Akaike information criterion that approximates the leave-one-out cross-validation score in the special case where errors are Gaussian, or where the squared error loss function is used. GCV was introduced by Craven and Wahba and extended by Friedman for MARS; lower values of GCV indicate better models. The formula for the GCV is
GCV = RSS / (N · (1 − (effective number of parameters) / N)²)
where RSS is the residual sum-of-squares measured on the training data and N is the number of observations (the number of rows in the x matrix).
The effective number of parameters is defined as
(effective number of parameters) = (number of MARS terms) + (penalty) · ((number of MARS terms) − 1) / 2
where penalty is typically 2 (giving results equivalent to the Akaike information criterion) but can be increased by the user if they so desire.
Note that
(number of MARS terms − 1) / 2
is the number of hinge-function knots, so the formula penalizes the addition of knots. Thus the GCV formula adjusts (i.e. increases) the training RSS to penalize more complex models. We penalize flexibility because models that are too flexible will model the specific realization of noise in the data instead of just the systematic structure of the data.
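The criterion is easy to restate in code; a small sketch (Python, with the default penalty of 2, and illustrative RSS values):

```python
# GCV as defined above: training RSS inflated by model complexity.
def gcv(rss, n_obs, n_terms, penalty=2):
    eff_params = n_terms + penalty * (n_terms - 1) / 2
    return rss / (n_obs * (1 - eff_params / n_obs) ** 2)

# Comparing two candidate submodels fitted on the same training data:
print(gcv(rss=12.0, n_obs=100, n_terms=5))   # ~0.145: more knots, lower RSS
print(gcv(rss=14.0, n_obs=100, n_terms=3))   # ~0.155: fewer knots, higher RSS
```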
Constraints
One constraint has already been mentioned: the user
can specify the maximum number of terms in the forward pass.
A further constraint can be placed on the forward pass
by specifying a maximum allowable degree of interaction.
Typically only one or two degrees of interaction are allowed,
but higher degrees can be used when the data warrants it.
The maximum degree of interaction in the first MARS example
above is one (i.e. no interactions or an additive model);
in the ozone example it is two.
Other constraints on the forward pass are possible.
For example, the user can specify that interactions are allowed
only for certain input variables.
Such constraints could make sense because of knowledge
of the process that generated the data.
Pros and cons
No regression modeling technique is best for all situations.
The guidelines below are intended to give an idea of the pros and cons of MARS,
but there will be exceptions to the guidelines.
It is useful to compare MARS to recursive partitioning and this is done below.
(Recursive partitioning is also commonly called regression trees,
decision trees, or CART;
see the recursive partitioning article for details).
MARS models are more flexible than linear regression models.
MARS models are simple to understand and interpret. Compare the equation for ozone concentration above to, say, the innards of a trained neural network or a random forest.
MARS can handle both continuous and categorical data. MARS tends to be better than recursive partitioning for numeric data because hinges are more appropriate for numeric variables than the piecewise constant segmentation used by recursive partitioning.
Building MARS models often requires little or no data preparation. The hinge functions automatically partition the input data, so the effect of outliers is contained. In this respect MARS is similar to recursive partitioning which also partitions the data into disjoint regions, although using a different method.
MARS (like recursive partitioning) does automatic variable selection (meaning it includes important variables in the model and excludes unimportant ones). However, there can be some arbitrariness in the selection, especially when there are correlated predictors, and this can affect interpretability.
MARS models tend to have a good bias-variance trade-off. The models are flexible enough to model non-linearity and variable interactions (thus MARS models have fairly low bias), yet the constrained form of MARS basis functions prevents too much flexibility (thus MARS models have fairly low variance).
MARS is suitable for handling large datasets, and implementations run very quickly. However, recursive partitioning can be faster than MARS.
With MARS models, as with any non-parametric regression, parameter confidence intervals and other checks on the model cannot be calculated directly (unlike linear regression models). Cross-validation and related techniques must be used for validating the model instead.
The earth, mda, and polspline implementations do not allow missing values in predictors, but free implementations of regression trees (such as rpart and party) do allow missing values using a technique called surrogate splits.
MARS models can make predictions very quickly, as they only require evaluating a linear function of the predictors.
The resulting fitted function is continuous, unlike recursive partitioning, which can give a more realistic model in some situations. (However, the model is not smooth or differentiable).
Extensions and related concepts
Generalized linear models (GLMs) can be incorporated into MARS models by applying a link function after the MARS model is built. Thus, for example, MARS models can incorporate logistic regression to predict probabilities.
Non-linear regression is used when the underlying form of the function is known and regression is used only to estimate the parameters of that function. MARS, on the other hand, estimates the functions themselves, albeit with severe constraints on the nature of the functions. (These constraints are necessary because discovering a model from the data is an inverse problem that is not well-posed without constraints on the model.)
Recursive partitioning (commonly called CART). MARS can be seen as a generalization of recursive partitioning that allows for continuous models, which can provide a better fit for numerical data.
Generalized additive models. Unlike MARS, GAMs fit smooth loess or polynomial splines rather than hinge functions, and they do not automatically model variable interactions. The smoother fit and lack of regression terms reduces variance when compared to MARS, but ignoring variable interactions can worsen the bias.
TSMARS. Time Series Mars is the term used when MARS models are applied in a time series context. Typically in this set up the predictors are the lagged time series values resulting in autoregressive spline models. These models and extensions to include moving average spline models are described in "Univariate Time Series Modelling and Forecasting using TSMARS: A study of threshold time series autoregressive, seasonal and moving average models using TSMARS".
Bayesian MARS (BMARS) uses the same model form, but builds the model using a Bayesian approach. It may arrive at different optimal MARS models because the model building approach is different. The result of BMARS is typically an ensemble of posterior samples of MARS models, which allows for probabilistic prediction.
See also
Linear regression
Local regression
Rational function modeling
Segmented regression
Spline interpolation
Spline regression
References
Further reading
Hastie T., Tibshirani R., and Friedman J.H. (2009) The Elements of Statistical Learning, 2nd edition. Springer, (has a section on MARS)
Faraway J. (2005) Extending the Linear Model with R, CRC, (has an example using MARS with R)
Heping Zhang and Burton H. Singer (2010) Recursive Partitioning and Applications, 2nd edition. Springer, (has a chapter on MARS and discusses some tweaks to the algorithm)
Denison D.G.T., Holmes C.C., Mallick B.K., and Smith A.F.M. (2004) Bayesian Methods for Nonlinear Classification and Regression, Wiley,
Berk R.A. (2008) Statistical learning from a regression perspective, Springer,
External links
Several free and commercial software packages are available for fitting MARS-type models.
Free software
R packages:
earth function in the earth package
mars function in the mda package
polymars function in the polspline package. Not Friedman's MARS.
bass function in the BASS package for Bayesian MARS.
Matlab code:
ARESLab: Adaptive Regression Splines toolbox for Matlab
Code from the book Bayesian Methods for Nonlinear Classification and Regression for Bayesian MARS.
Python
Earth – Multivariate adaptive regression splines
py-earth (see the usage sketch after this list)
pyBASS for Bayesian MARS.
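As an illustration, a minimal usage sketch with the py-earth package listed above (assuming its Earth/fit/predict/summary interface; the data are synthetic):

```python
import numpy as np
from pyearth import Earth

# Fit a MARS model to noisy samples of |x - 5| and inspect the hinge terms.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.abs(X[:, 0] - 5) + rng.normal(0, 0.1, 200)

model = Earth(max_degree=1)   # additive model: no interaction terms
model.fit(X, y)
print(model.summary())        # basis functions, coefficients, GCV
print(model.predict(X[:5]))   # predictions for the first five rows
```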
Commercial software
MARS from Salford Systems. Based on Friedman's implementation.
STATISTICA Data Miner from StatSoft
ADAPTIVEREG from SAS.
Nonparametric regression
Machine learning | Multivariate adaptive regression spline | [
"Engineering"
] | 3,287 | [
"Artificial intelligence engineering",
"Machine learning"
] |
18,476,911 | https://en.wikipedia.org/wiki/Thenoyltrifluoroacetone | Thenoyltrifluoroacetone, C8H5F3O2S, is a chemical compound used pharmacologically as a chelating agent. It is an inhibitor of cellular respiration by blocking the respiratory chain at complex II.
Perhaps the first report of TTFA as an inhibitor of respiration was by A. L. Tappel in 1960. Tappel had the (erroneous) idea that inhibitors like antimycin and alkyl hydroxyquinoline-N-oxide might work by chelating iron in the hydrophobic milieu of respiratory membrane proteins, so he tested a series of hydrophobic chelating agents. TTFA was a potent inhibitor, but not because of its chelating ability. TTFA binds at the quinone reduction site in Complex II, preventing ubiquinone from binding. The first x-ray structure of Complex II showing how TTFA binds, 1ZP0, was published in 2005.
Thenoyltrifluoroacetone can be made in a Claisen condensation of ethyl trifluoroacetate and 2-acetylthiophene.
References
Thiophenes
Diketones
Chelating agents
Trifluoromethyl ketones | Thenoyltrifluoroacetone | [
"Chemistry"
] | 258 | [
"Chelating agents",
"Process chemicals"
] |
6,133,004 | https://en.wikipedia.org/wiki/Sialogogue | A sialogogue (also spelled sialagogue, ptysmagogue or ptyalagogue) is a substance, especially a medication, that increases the flow rate of saliva. The definition focuses on substances that promote production or secretion of saliva (proximal causation) rather than any food that is mouthwatering (distal causation that triggers proximal causation).
Sialogogues can be used in the treatment of xerostomia (the subjective feeling of having a dry mouth), to stimulate any functioning salivary gland tissue to produce more saliva. Saliva has a bactericidal effect, so when low levels of it are secreted, the risk of caries increases. Not only this, but fungal infections such as oral candidosis also can be a consequence of low salivary flow rates. The buffer effect of saliva is also important, neutralising acids that cause tooth enamel demineralisation.
Usage in dentistry
The following are used in dentistry to treat xerostomia:
Parasympathomimetic drugs act on parasympathetic muscarinic receptors to induce an increased saliva flow. The M3 receptor has been identified as the principal target to increase salivary flow rates. Pilocarpine is an example; the maximum dose of this drug is 30 mg/day. Contraindications include many lung conditions, such as asthma, cardiac problems, epilepsy and Parkinson's disease; side effects include flushing, increased urination, increased perspiration, and GI disturbances.
Chewing gum induces stimulated saliva secretion of the minor salivary glands in the oral cavity. During mastication (chewing), the resultant compression forces acting on the periodontal ligament cause the stimulated release of gingival crevicular fluid. Further salivation can be also achieved by the stimulation of taste receptors (parasympathetic fibers from the chorda tympani and the lingual nerve are involved).
Malic and ascorbic acid are effective sialogogues, but are not ideal as they cause demineralisation of tooth enamel.
Historical source from plants
A tincture is prepared from the root of the pyrethrium (pyrethrum) or pellitory (a number of plants in the Chrysanthemum family). It is found growing in the Levant and in parts of Limerick and Clare in Ireland. The root powder was used as flavouring in tooth powders in the past.
Herbs with sialogogue action
Bloodroot (Sanguinaria canadensis)
Blue flag (Iris versicolor)
Cayenne pepper (Capsicum annuum)
Centaury (Centaurium erythraea)
Chilcoatl / Azteca gold root (Heliopsis longipes)
Great yellow gentian (Gentiana lutea)
Jambu (Acmella oleracea)
Ginger (Zingiber officinale)
Northern prickly-ash (Zanthoxylum americanum)
Senega (Polygala senega)
See also
Hypersalivation
References
Drugs
Dentistry
Otorhinolaryngology | Sialogogue | [
"Chemistry"
] | 647 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
6,133,005 | https://en.wikipedia.org/wiki/Generator%20matrix | In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix.
Terminology
If G is a matrix, it generates the codewords of a linear code C by
$w = sG,$
where w is a codeword of the linear code C, and s is any input vector. Both w and s are assumed to be row vectors. A generator matrix for a linear $[n, k, d]_q$-code has format $k \times n$, where n is the length of a codeword, k is the number of information bits (the dimension of C as a vector subspace), d is the minimum distance of the code, and q is the size of the finite field, that is, the number of symbols in the alphabet (thus, q = 2 indicates a binary code, etc.). The number of redundant bits is denoted by $r = n - k$.
The standard form for a generator matrix is
$G = [I_k \mid P],$
where $I_k$ is the $k \times k$ identity matrix and P is a $k \times (n - k)$ matrix. When the generator matrix is in standard form, the code C is systematic in its first k coordinate positions.
A generator matrix can be used to construct the parity check matrix for a code (and vice versa). If the generator matrix G is in standard form, $G = [I_k \mid P]$, then the parity check matrix for C is
$H = [-P^\top \mid I_{n-k}],$
where $P^\top$ is the transpose of the matrix $P$. This is a consequence of the fact that a parity check matrix of $C$ is a generator matrix of the dual code $C^\perp$.
G is a $k \times n$ matrix, while H is an $(n - k) \times n$ matrix.
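A small Python sketch over GF(2), using a standard-form G of the (7,4) Hamming code as an example (over GF(2), $-P^\top = P^\top$; the particular P below is one common choice):

```python
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                       # P for a (7,4) Hamming code
G = np.hstack([np.eye(4, dtype=int), P])        # G = [I_4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])      # H = [P^T | I_3] (mod 2)

s = np.array([1, 0, 1, 1])                      # an input (message) vector
w = s @ G % 2                                   # the codeword w = sG
print(w)                     # [1 0 1 1 0 1 0]: systematic in the first 4 bits
print(H @ w % 2)             # [0 0 0]: w passes every parity check
print((G @ H.T % 2).any())   # False: G H^T = 0, so H checks all codewords
```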
Equivalent codes
Codes C1 and C2 are equivalent (denoted C1 ~ C2) if one code can be obtained from the other via the following two transformations:
arbitrarily permute the components, and
independently scale any component by a non-zero element.
Equivalent codes have the same minimum distance.
The generator matrices of equivalent codes can be obtained from one another via the following elementary operations:
permute rows
scale rows by a nonzero scalar
add rows to other rows
permute columns, and
scale columns by a nonzero scalar.
Thus, we can perform Gaussian elimination on G. Indeed, this allows us to assume that the generator matrix is in the standard form. More precisely, for any matrix G we can find an invertible matrix U such that $UG = [I_k \mid P]$, where G and $[I_k \mid P]$ generate equivalent codes.
See also
Hamming code (7,4)
Notes
References
Further reading
External links
Generator Matrix at MathWorld
Coding theory | Generator matrix | [
"Mathematics"
] | 507 | [
"Discrete mathematics",
"Coding theory"
] |
6,135,790 | https://en.wikipedia.org/wiki/Cevian | In geometry, a cevian is a line segment which joins a vertex of a triangle to a point on the opposite side of the triangle. Medians and angle bisectors are special cases of cevians. The name "cevian" comes from the Italian mathematician Giovanni Ceva, who proved a well-known theorem about cevians which also bears his name.
Length
Stewart's theorem
The length of a cevian can be determined by Stewart's theorem: in the diagram, the cevian length $d$ is given by the formula
$b^2m + c^2n = a(d^2 + mn).$
Less commonly, this is also represented (with some rearrangement) by the following mnemonic:
$man + dad = bmb + cnc.$
Median
If the cevian happens to be a median (thus bisecting a side), its length can be determined from the formula
$m(b^2 + c^2) = a(d^2 + m^2)$
or
$2b^2 + 2c^2 = 4d^2 + a^2$
since
$a = 2m.$
Hence in this case
$d = \frac{1}{2}\sqrt{2b^2 + 2c^2 - a^2}.$
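A quick numeric check of these formulas in Python, on a 3-4-5 right triangle (where the median to the hypotenuse must equal half the hypotenuse):

```python
import math

a, b, c = 5.0, 4.0, 3.0                 # a right triangle, hypotenuse a
m = n = a / 2                           # the median bisects side a
d = 0.5 * math.sqrt(2*b**2 + 2*c**2 - a**2)
print(d)                                # 2.5 = a/2, as expected

# Stewart's theorem b^2 m + c^2 n = a (d^2 + m n) holds for this cevian:
print(math.isclose(b**2*m + c**2*n, a*(d**2 + m*n)))   # True
```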
Angle bisector
If the cevian happens to be an angle bisector, its length obeys the formulas
$d^2 + mn = bc$
and
$(b + c)^2 d^2 = bc\left[(b + c)^2 - a^2\right]$
and
$d = \frac{2\sqrt{bc\,s(s - a)}}{b + c},$
where the semiperimeter $s = \tfrac{a + b + c}{2}.$
The side of length $a$ is divided in the proportion $c : b$ (the ratio of the other two sides).
Altitude
If the cevian happens to be an altitude and thus perpendicular to a side, its length obeys the formulas
$d^2 = b^2 - n^2 = c^2 - m^2$
and
$d = \frac{2\sqrt{s(s - a)(s - b)(s - c)}}{a},$
where the semiperimeter $s = \tfrac{a + b + c}{2}.$
Ratio properties
There are various properties of the ratios of lengths formed by three cevians all passing through the same arbitrary interior point: Referring to the diagram at right, in which the cevians $AD$, $BE$, and $CF$ meet at the point $O$,
$\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1,$
$\frac{AO}{AD} + \frac{BO}{BE} + \frac{CO}{CF} = 2,$
$\frac{OD}{AD} + \frac{OE}{BE} + \frac{OF}{CF} = 1.$
The first property is known as Ceva's theorem. The last two properties are equivalent because summing the two equations gives the identity $1 + 1 + 1 = 3$.
Splitter
A splitter of a triangle is a cevian that bisects the perimeter. The three splitters concur at the Nagel point of the triangle.
Area bisectors
Three of the area bisectors of a triangle are its medians, which connect the vertices to the opposite side midpoints. Thus a uniform-density triangle would in principle balance on a razor supporting any of the medians.
Angle trisectors
If from each vertex of a triangle two cevians are drawn so as to trisect the angle (divide it into three equal angles), then the six cevians intersect in pairs to form an equilateral triangle, called the Morley triangle.
Area of inner triangle formed by cevians
Routh's theorem determines the ratio of the area of a given triangle to that of a triangle formed by the pairwise intersections of three cevians, one from each vertex.
See also
Mass point geometry
Menelaus' theorem
Notes
References
Ross Honsberger (1995). Episodes in Nineteenth and Twentieth Century Euclidean Geometry, pages 13 and 137. Mathematical Association of America.
Vladimir Karapetoff (1929). "Some properties of correlative vertex lines in a plane triangle." American Mathematical Monthly 36: 476–479.
Indika Shameera Amarasinghe (2011). “A New Theorem on any Right-angled Cevian Triangle.” Journal of the World Federation of National Mathematics Competitions, Vol 24 (02), pp. 29–37.
Straight lines defined for a triangle | Cevian | [
"Mathematics"
] | 608 | [
"Line (geometry)",
"Straight lines defined for a triangle"
] |
22,155,040 | https://en.wikipedia.org/wiki/Compton%E2%80%93Getting%20effect | The Compton–Getting effect is an apparent anisotropy in the intensity of radiation or particles due to the relative motion between the observer and the source. This effect was first identified in the intensity of cosmic rays by Arthur Compton and Ivan A. Getting in 1935. Gleeson and Axford provide a full derivation of the equations relevant to this effect.
The original application of the Compton–Getting effect predicted that the intensity of cosmic rays should be higher coming from the direction in which Earth is moving. For the case of cosmic rays, the Compton–Getting effect only applies to those that are unaffected by the Solar wind, such as extremely high energy particles. It has been calculated that the speed of the Earth within the galaxy would result in a difference between the strongest and weakest cosmic ray intensities of about 0.1%.
This small difference is within the capabilities of modern instruments to detect, and was observed in 1986.
Forman (1970) derives the Compton–Getting effect anisotropy from the Lorentz invariance of the phase space distribution function. Ipavich (1974) extends this general derivation to obtain count rates with respect to the flow vector.
This Compton–Getting effect is apparent in plasma data in Earth's magnetotail. The Compton–Getting effect has also been utilized for analyzing energetic neutral atom (ENA) data returned by the Cassini-Huygens spacecraft at Saturn.
Notes
Cosmic rays | Compton–Getting effect | [
"Physics"
] | 290 | [
"Astrophysics",
"Physical phenomena",
"Radiation",
"Cosmic rays"
] |
22,159,309 | https://en.wikipedia.org/wiki/Rigid-band%20model | The Rigid-Band Model (or RBM) is one of the models used to describe the behavior of metal alloys. In some cases the model is even used for non-metal alloys such as Si alloys. According to the RBM the shape of the constant energy surfaces (hence the Fermi surface as well) and curve of density of states of the alloy are the same as those of the solvent metal under the following conditions:
The excess charge of the solute atoms localizes around them.
The mean free path of the electrons is much greater than the lattice spacing of the alloy.
The electron states of interest in the pure solvent are all in one energy band, which is greatly separated in energy from the other bands.
The only effect of the addition of the solute, given that its valence is greater than that of the solvent, is the addition of electrons to the valence band. This results in swelling of the Fermi surface and filling of the density of states curve up to a higher energy.
Theory
In a pure metal, because of the periodicity of the lattice, the features of its electronic structure are well known. The single-particle states can be described in terms of Bloch states, the energy structure is characterized by Brillouin zone boundaries, energy gaps and energy bands. In reality though no metal is perfectly pure. When the amount of the foreign element is dilute, the added atoms may be treated as impurities. But when its concentration exceeds several atomic %, an alloy is formed and the interaction among the added atoms can no longer be neglected.
Before giving a more mathematical outline of the RBM it is convenient to visualize what happens to a metal upon alloying. In a pure metal, taking silver as an example, all lattice sites are occupied by silver atoms. When a different kind of atom is dissolved into it, for example 10% of cadmium, some random lattice sites become occupied by cadmium atoms. Since silver has a valence of 1 and cadmium has a valence of 2, the alloy will now have an average valence of 1.1. Most lattice sites however are still occupied by silver atoms and consequently the changes in electronic structure are minimal.
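The valence arithmetic is just a concentration-weighted average; a minimal sketch (the 10% divalent-solute figures mirror the example above):

```python
# Average valence (electron-per-atom ratio) of a substitutional alloy:
# the concentration-weighted mean of the constituents' valences.
def alloy_valence(components):
    return sum(fraction * valence for fraction, valence in components)

# 90% solvent of valence 1 plus 10% solute of valence 2 -> 1.1
print(round(alloy_valence([(0.90, 1), (0.10, 2)]), 2))  # 1.1
```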
Basic concepts behind the Rigid-Band model
In a pure metal of valence Z1, all atoms become positive ions with the valence +Z1 by releasing the outermost Z1 electrons per atom to form the valence band. As a result, conduction electrons carrying negative charges are uniformly distributed over any atomic site with equal probability densities and maintain charge neutrality with the array of ions with positive charges. When an impurity atom of valence Z2 is introduced, the periodic potential is disturbed, conduction electrons are scattered and a screening potential is formed,
$U(r) = -\frac{(Z_2 - Z_1)e^2}{r}\, e^{-\lambda r},$
where U(r) is the potential of the electrons at distance r from the impurity and 1/λ is the screening radius.
The Fermi surface of the pure metal is constructed under the assumption that the wave vector k of the Bloch electron is a good quantum number. But alloying destroys the periodicity of the lattice potential and thus results in scattering of the Bloch electron. The wave vector k changes upon scattering of the Bloch electron and can no longer be taken as a good quantum number. In spite of such fundamental difficulties, experimental and theoretical works have provided ample evidence that the concept of the Fermi surface and the Brillouin zone is still valid even in concentrated crystalline alloys.
In an alloy of atoms A and B, an intermetallic compound super-lattice structure tends to be formed. The chemical bonding between the unlike atoms leads to a very strong potential of the form
$V(\mathbf{r}) = \sum_i v_A(\mathbf{r} - \mathbf{R}_i^A) + \sum_j v_B(\mathbf{r} - \mathbf{R}_j^B),$
where $v_X(\mathbf{r} - \mathbf{R}^X)$ is the potential at position $\mathbf{r}$ due to ion X, whose position is specified by $\mathbf{R}^X$. X here stands for either A or B, so that $v_A$ indicates the potential of ion A. The RBM assumes $v_A = v_B$, hence it ignores the difference in the potential of ions A and B. Thus, the electronic structure of the pure metal A is assumed to be the same as that of the pure metal B or of any composition in the alloy A–B. The Fermi level is then chosen so as to be consistent with the electron concentration of the alloy. It is convenient to divide the predictions of the rigid-band model into two categories, geometric and density of states. The geometric predictions are those that use only the geometric properties of the constant energy surfaces. The density-of-states predictions are related to those properties which depend on the density of states at the Fermi energy, such as the electronic specific heat.
Geometric structure
In a pure metal the eigenstates are the Bloch functions Ψk with energies ek. When the periodicity of the pure metal is destroyed by alloying, these Bloch states are no longer eigenstates and their energy becomes complex,
$E_k = e_k + \Delta E_k - i\Gamma_k.$
The imaginary part Γk shows that the Bloch state in the alloy is no longer an eigenstate but scatters into other states with a lifetime of the order of (2Γk)−1. However, if $\Gamma_k \ll \Delta$, where Δ is the width of the band, then the Bloch states are approximately eigenstates and they can be used to calculate the properties of the alloys. In this case we can ignore Γk. The change in the energy of a Bloch state with alloying is then the real shift ΔEk.
When the perturbation is fairly localized about the solute site (which is one of the conditions of the RBM), ΔEk depends only on ek and not on k, and thus $\Delta E_k = \Delta E(e_k)$. Therefore, the plot of $e_k + \Delta E(e_k)$ versus k for the alloy will have the same shape of constant energy surfaces as the plot of $e_k$ versus k for the pure solvent. A given energy surface of the alloy will naturally correspond to a different energy value from that of the same shaped surface of the pure solvent, but the shapes will remain exactly the same.
Density of states
According to the Rigid Band Model, $\Delta E$ is constant (for a given energy level) and the density of states of the alloy has the same shape as that of the pure solvent, displaced by $\Delta E$. When the concentration of the solute a is small, $\Delta E$ is also small and the density of states of the alloy at solute concentration a is
$n_a(E) = n_0(E - \Delta E(E)),$
where $n_0(E)$ is the density of states of the pure solvent.
In the case when $\Delta E$ is constant we get
$n_a(E) = n_0(E - \Delta E),$
meaning that the shape of the density of states will be the same, only displaced by $\Delta E$.
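A minimal numerical sketch of this rigid shift, using a free-electron sqrt(E) density of states purely for illustration (the shift dE and all values are assumptions):

```python
import numpy as np

# Rigid-band picture: the alloy DOS is the solvent DOS displaced by a
# constant energy shift dE; the curve's shape is unchanged.
def n0(E):
    return np.sqrt(np.clip(E, 0.0, None))  # illustrative free-electron DOS

dE = 0.1                   # assumed constant shift
E = np.linspace(0.0, 2.0, 5)
print(n0(E))               # pure solvent
print(n0(E - dE))          # alloy: the same curve, shifted by dE
```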
References
Electronic band structures | Rigid-band model | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,281 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
10,432,703 | https://en.wikipedia.org/wiki/Mouse%20Genome%20Informatics | Mouse Genome Informatics (MGI) is a free, online database and bioinformatics resource hosted by The Jackson Laboratory, with funding by the National Human Genome Research Institute (NHGRI), the National Cancer Institute (NCI), and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). MGI provides access to data on the genetics, genomics and biology of the laboratory mouse to facilitate the study of human health and disease. The database integrates multiple projects, with the two largest contributions coming from the Mouse Genome Database and Mouse Gene Expression Database (GXD). MGI contains data curated from over 230,000 publications.
The MGI resource was first published online in 1994 and is a collection of data, tools, and analyses created and tailored for use in the laboratory mouse, a widely used model organism. It is "the authoritative source of official names for mouse genes, alleles, and strains", which follow the guidelines established by the International Committee on Standardized Genetic Nomenclature for Mice. The history and focus of Jackson Laboratory research and production facilities generate tremendous knowledge and depth which researchers can mine to advance their research. A dedicated worldwide community of mouse researchers enhances and contributes to this knowledge as well. This is an indispensable tool for any researcher using the mouse as a model organism for their research, and for researchers interested in genes that share homology with mouse genes. Various mouse research support resources, including animal collections and free colony management software, are also available at the MGI site.
Mouse Genome Database
The Mouse Genome Database collects and curates comprehensive phenotype and functional annotations for mouse genes and alleles. This is an NHGRI-funded project which contributes to the Mouse Genome Informatics database.
Mouse gene expression database
The Gene Expression Database is a community resource of mouse developmental expression information.
History
MGI evolved from a project funded by the National Center for Human Genome Research in 1989 to combine the databases of several Jackson Laboratory scientists and create a tool for visualizing data on the mouse genome. The result of that project, led by Joseph H. Nadeau, Larry E. Mobraaten, and Janan T. Eppig, was called the "Encyclopedia of the Mouse Genome" and distributed via floppy disk semi-annually to around 300 scientists around the world. In 1992, that group joined with the team responsible for developing the "Genomic Database for Mouse", led by Muriel T. Davisson and Thomas H. Roderick, to start the "Mouse Genome Informatics" project. That project resulted in the first online release of the "Mouse Genome Database" in 1994.
See also
FlyBase
Rat Genome Database
Saccharomyces Genome Database
WormBase
Xenbase
Zebrafish Information Network
References
External links
Mouse Genome Informatics home page
Mouse Genome Informatics Publications
Tutorial from the Open Door workshop
Gene eXpression Database (GXD)
Mouse genetics
Model organism databases
Gene expression | Mouse Genome Informatics | [
"Chemistry",
"Biology"
] | 600 | [
"Model organism databases",
"Gene expression",
"Model organisms",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
10,433,833 | https://en.wikipedia.org/wiki/Computational%20mathematics | Computational mathematics is the study of the interaction between mathematics and calculations done by a computer.
A large part of computational mathematics consists, roughly, of using mathematics to enable and improve computer computation in areas of science and engineering where mathematics is useful. This involves in particular algorithm design, computational complexity, numerical methods and computer algebra.
Computational mathematics refers also to the use of computers for mathematics itself. This includes mathematical experimentation for establishing conjectures (particularly in number theory), the use of computers for proving theorems (for example the four color theorem), and the design and use of proof assistants.
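As a toy illustration of such mathematical experimentation, one might numerically probe a number-theoretic conjecture before attempting a proof (a minimal sketch; Goldbach's conjecture is used here only as an example):

```python
# Experimental mathematics in miniature: check Goldbach's conjecture
# (every even n > 2 is the sum of two primes) for small n.
# This gathers evidence for a conjecture; it proves nothing.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n):
    return next((p, n - p) for p in range(2, n) if is_prime(p) and is_prime(n - p))

print(all(goldbach_witness(n) for n in range(4, 1000, 2)))  # True
```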
Areas of computational mathematics
Computational mathematics emerged as a distinct part of applied mathematics by the early 1950s. Currently, computational mathematics can refer to or include:
Computational sciences, also known as scientific computation or computational engineering
Systems science, which directly requires mathematical models from systems engineering
Solving mathematical problems by computer simulation as opposed to traditional engineering methods.
Numerical methods used in scientific computation, for example numerical linear algebra and numerical solution of partial differential equations
Stochastic methods, such as Monte Carlo methods and other representations of uncertainty in scientific computation
The mathematics of scientific computation, in particular numerical analysis, the theory of numerical methods
Computational complexity
Computer algebra and computer algebra systems
Computer-assisted research in various areas of mathematics, such as logic (automated theorem proving), discrete mathematics, combinatorics, number theory, and computational algebraic topology
Cryptography and computer security, which involve, in particular, research on primality testing, factorization, elliptic curves, and mathematics of blockchain
Computational linguistics, the use of mathematical and computer techniques in natural languages
Computational algebraic geometry
Computational group theory
Computational geometry
Computational number theory
Computational topology
Computational statistics
Algorithmic information theory
Algorithmic game theory
Mathematical economics, the use of mathematics in economics, finance and, to some extent, accounting.
Experimental mathematics
Journals
Journals that publish contributions from computational mathematics include
ACM Transactions on Mathematical Software
Mathematics of Computation
SIAM Journal on Scientific Computing
SIAM Journal on Numerical Analysis
See also
References
Further reading
External links
Foundations of Computational Mathematics, a non-profit organization
International Journal of Computer Discovered Mathematics
Applied mathematics
Computational science | Computational mathematics | [
"Mathematics"
] | 421 | [
"Computational science",
"Applied mathematics",
"Computational mathematics"
] |
10,438,744 | https://en.wikipedia.org/wiki/Gene%20targeting | Gene targeting is a biotechnological tool used to change the DNA sequence of an organism (hence it is a form of Genome Editing). It is based on the natural DNA-repair mechanism of Homology Directed Repair (HDR), including Homologous Recombination. Gene targeting can be used to make a range of sizes of DNA edits, from larger DNA edits such as inserting entire new genes into an organism, through to much smaller changes to the existing DNA such as a single base-pair change. Gene targeting relies on the presence of a repair template to introduce the user-defined edits to the DNA. The user (usually a scientist) will design the repair template to contain the desired edit, flanked by DNA sequence corresponding (homologous) to the region of DNA that the user wants to edit; hence the edit is targeted to a particular genomic region. In this way Gene Targeting is distinct from natural homology-directed repair, during which the ‘natural’ DNA repair template of the sister chromatid is used to repair broken DNA (the sister chromatid is the second copy of the gene). The alteration of DNA sequence in an organism can be useful in both a research context – for example to understand the biological role of a gene – and in biotechnology, for example to alter the traits of an organism (e.g. to improve crop plants).
Methods
To create a gene-targeted organism, DNA must be introduced into its cells. This DNA must contain all of the parts necessary to complete the gene targeting. At a minimum this is the homology repair template, containing the desired edit flanked by regions of DNA homologous to (identical in sequence to) the targeted region (these homologous regions are called “homology arms”). Often a reporter gene and/or a selectable marker is also required, to help identify and select for cells (or “events”) where GT has actually occurred. It is also common practice to increase GT rates by causing a double-strand break (DSB) in the targeted DNA region; hence the gene encoding the site-specific nuclease of interest may also be transformed along with the repair template. These genetic elements required for GT may be assembled through conventional molecular cloning in bacteria.
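To make the template layout concrete, here is a purely illustrative sketch of assembling a repair template string; the sequence, cut-site coordinate, arm length and edit are all made-up values (real homology arms are typically hundreds of base pairs):

```python
# Toy construction of a homology repair template: the desired edit
# flanked by "homology arms" copied from the target locus.
genome = "ATGACCGGTTTACGATCCGGAAGTCCTGAACGTTAGCATG"  # made-up locus
cut_site = 20      # hypothetical position of the nuclease-induced break
arm_len = 10       # toy arm length; real designs use ~100-1000 bp
edit = "GGG"       # hypothetical user-defined insertion

left_arm = genome[cut_site - arm_len:cut_site]
right_arm = genome[cut_site:cut_site + arm_len]
repair_template = left_arm + edit + right_arm
print(repair_template)  # TACGATCCGG GGG AAGTCCTGAA (without the spaces)
```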
Gene targeting methods are established for several model organisms and may vary depending on the species used. To target genes in mice, the DNA is inserted into mouse embryonic stem cells in culture. Cells with the insertion can contribute to a mouse's tissue via embryo injection. Finally, chimeric mice where the modified cells make up the reproductive organs are bred. After this step the entire body of the mouse is based on the selected embryonic stem cell.
To target genes in moss, the DNA is incubated together with freshly isolated protoplasts and with polyethylene glycol. As mosses are haploid organisms, moss filaments (protonema) can be directly screened for the target, either by treatment with antibiotics or with PCR. Unique among plants, this procedure for reverse genetics is as efficient as in yeast. Gene targeting has been successfully applied to cattle, sheep, swine and many fungi.
The frequency of gene targeting can be significantly enhanced through the use of site-specific endonucleases such as zinc finger nucleases, engineered homing endonucleases, TALENS, or most commonly the CRISPR-Cas system. This method has been applied to species including Drosophila melanogaster, tobacco, corn, human cells, mice and rats.
Comparison to other forms of genetic engineering
The relationship between gene targeting, gene editing and genetic modification can be pictured as a Venn diagram in which 'genetic engineering' encompasses all three of these techniques. Genome editing is characterised by making small edits to the genome at a specific location, often following cutting of the target DNA region by a site-specific nuclease such as CRISPR. Genetic modification usually describes the insertion of a transgene (foreign DNA, i.e. a gene from another species) into a random location within the genome. Gene-targeting is a specific biotechnological tool that can lead to small changes to the genome at a specific site, in which case the edits caused by gene-targeting would count as genome editing. However, gene targeting is also capable of inserting entire genes (such as transgenes) at the target site if the transgene is incorporated into the homology repair template that is used during gene-targeting. In such cases the edits caused by gene-targeting would, in some jurisdictions, be considered as equivalent to genetic modification, as insertion of foreign DNA has occurred.
Gene targeting is one specific form of genome editing tool. Other genome editing tools include targeted mutagenesis, base editing and prime editing, all of which create edits to the endogenous DNA (DNA already present in the organism) at a specific genomic location. This site-specific or 'targeted' nature of genome editing is typically what makes genome editing different from traditional 'genetic modification', which inserts a transgene at a non-specific location in the organism's genome; gene editing also makes small edits to the DNA already present in the organism, versus genetic modification's insertion of 'foreign' DNA from another species.
Because gene editing makes smaller changes to endogenous DNA, many mutations created through genome-editing could in theory occur through natural mutagenesis or, in the context of plants, through mutation breeding which is part of conventional breeding (in contrast, the insertion of a transgene to create a Genetically Modified Organism (GMO) could not occur naturally). However, there are exceptions to this general rule; as explained in the introduction, GT can introduce a range of possible sizes of edits to DNA, from very small edits such as changing, inserting or deleting 1 base-pair, through to inserting much longer DNA sequences, which could in theory include insertion of an entire transgene. However, in practice GT is more commonly used to insert smaller sequences. The range of edits possible through GT can make it challenging to regulate (see Regulation).
The two most established forms of gene editing are gene-targeting and targeted-mutagenesis. While gene targeting relies on the Homology Directed Repair (HDR) (also called Homologous Recombination, HR) DNA repair pathway, targeted-mutagenesis uses Non-Homologous End Joining (NHEJ) of broken DNA. NHEJ is an error-prone DNA repair pathway, meaning that when it repairs the broken DNA it can insert or delete DNA bases, creating insertions or deletions (indels). The user cannot specify what these random indels will be, hence they cannot control exactly what edits are made at the target site. However, they can control where these edits will occur (i.e. dictate the target site) through using a site-specific nuclease (previously Zinc Finger Nucleases & TALENs, now commonly CRISPR) to break the DNA at the target site. In summary, gene-targeting proceeds through HDR (also called Homologous Recombination), while targeted mutagenesis proceeds through NHEJ.
The more newly developed gene-editing techniques of prime editing and base editing, based on CRISPR-Cas methods, are alternatives to gene targeting, which can also create user-defined edits at targeted genomic locations. However each is limited in the length of DNA sequence insertion possible; base editing is limited to single base pair conversions while prime editing can only insert sequences of up to ~44bp. Hence GT remains the primary method of targeted (location-specific) insertion of long DNA sequences for genome engineering.
Comparison with gene trapping
Gene trapping is based on random insertion of a cassette, while gene targeting manipulates a specific gene. Cassettes can be used for many different things, while the flanking homology regions of gene targeting cassettes need to be adapted for each gene. This makes gene trapping more amenable to large-scale projects than targeting. On the other hand, gene targeting can be used for genes with low transcription levels that would go undetected in a trap screen. The probability of trapping increases with intron size, while for gene targeting, small genes are just as easily altered.
Applications
Applications in mammalian systems
Gene targeting was developed in mammalian cells in the 1980s, with diverse applications possible as a result of being able to make specific sequence changes at a target genomic site, such as the study of gene function or human disease, particularly in mouse models. Indeed, gene targeting has been widely used to study human genetic diseases by removing ("knocking out"), or adding ("knocking in"), specific mutations of interest. Advances in gene targeting technologies, previously used to engineer rat cell models, enable a new wave of isogenic human disease models. These models are the most accurate in vitro models available to researchers and facilitate the development of personalized drugs and diagnostics, particularly in oncology. Gene targeting has also been investigated for gene therapy to correct disease-causing mutations. However, the low efficiency of delivery of the gene-targeting machinery into cells has hindered this, with research conducted into viral vectors for gene targeting to try and address these challenges.
Applications in yeast and moss
Gene targeting has a relatively high efficiency in yeast, bacteria and moss (but is rare in higher eukaryotes). Hence gene targeting has been used in reverse genetics approaches to study gene function in these systems.
Applications in plant genome engineering
Gene targeting (GT), or homology-directed repair (HDR), is used routinely in plant genome engineering to insert specific sequences, with the first published example of GT in plants in the 1980s. However, gene targeting is particularly challenging in higher plants due to the low rates of Homologous Recombination, or Homology Directed Repair, in higher plants and the low rate of transformation (DNA uptake) by many plant species. However, there has been much effort to increase the frequencies of gene targeting in plants in the past decades, as it is very useful to be able to introduce specific sequences in the plant genome for plant genome engineering. The most significant improvement to gene targeting frequencies in plants was the induction of double-strand-breaks through site specific nucleases such as CRISPR, as described above. Other strategies include in planta gene targeting, whereby the homology repair template is embedded within the plant genome and then liberated using CRISPR cutting; upregulation of genes involved in the homologous recombination pathway; downregulation of the competing Non-Homologous-End-Joining pathway; increasing copy numbers of the homologous repair template; and engineering Cas variants to be optimised for plant tissue culture. Some of these approaches have also been used to improve gene targeting efficiencies in mammalian cells.
Plants that have been gene-targeted include Arabidopsis thaliana (the most commonly used model plant), rice, tomato, maize, tobacco and wheat.
Technical challenges
Gene targeting holds enormous promise to make targeted, user-defined sequence changes or sequence insertions in the genome. However its primary applications - human disease modelling and plant genome engineering - are hindered by the low efficiency of homologous recombination in comparison to the competing non-homologous end joining in mammalian and higher plant cells. As described above, there are strategies that can be employed to increase the frequencies of gene targeting in plants and mammalian cells. In addition, robust selection methods that allow the selection or specific enrichment of cells where gene targeting has occurred can increase the rates of recovery of gene-targeted cells.
2007 Nobel Prize
Mario R. Capecchi, Martin J. Evans and Oliver Smithies were awarded the 2007 Nobel Prize in Physiology or Medicine for their work on "principles for introducing specific gene modifications in mice by the use of embryonic stem cells", or gene targeting.
Regulation of Gene Targeted organisms
As explained above, Gene Targeting is technically capable of creating a range of sizes of genetic changes; from single base-pair mutations through to insertion of longer sequences, including potentially transgenes. This means that products of gene targeting can be indistinguishable from natural mutation, or can be equivalent to GMOs due to their insertion of a transgene (see Venn diagram above). Hence regulating products of Gene Targeting can be challenging and different countries have taken different approaches or are reviewing how to do so as part of broader regulatory reviews into the products of gene-editing. Broadly adopted classifications split gene-edited organisms into 3 classes of "SDN1-3", referring to Site Directed Nucleases (such as CRISPR-Cas) that are used to generate gene-edited organisms. These SDN classifications can guide national regulations as to which class of SDN they will consider to be ‘GMOs’ and therefore which are subject to potentially strict regulations.
SDN1 = organisms created through Non-homologous End Joining of an SDN-catalysed break in the DNA. Hence random mutations have occurred through the error prone NHEJ, and no repair template has been used (hence is not Gene-Targeting). Often subject to less stringent regulatory oversight due to the lack of use of a DNA repair template and equivalence to conventional breeding techniques (in the case of plant breeding).
SDN2 = one or several specific mutations have been introduced into the target gene at the SDN cut-site through use of a homology-repair template (hence this is Gene Targeting).
SDN3 = longer sequences have been inserted at the cut-site, via homologous recombination (i.e. Gene Targeting) or through NHEJ. "Longer sequences" typically refer to entire genetic elements such as promoters or protein-coding regions. These are often considered transgenic and therefore often classed as GMO.
Historically the European Union (EU) has broadly been opposed to Genetic Modification technology, on grounds of its precautionary principle. In 2018 the European Court of Justice (ECJ) ruled that gene-edited crops (including gene-targeted crops) should be considered as genetically modified and therefore were subject to the GMO Directive, which places significant regulatory burdens on GMO use. However this decision was received negatively by the European scientific community. In 2021 the European Commission deemed that current EU legislation governing Genetic Modification and Gene-Editing techniques (or NGTs – New Genomic Techniques) was ‘not fit for purpose’ and needed adapting to reflect scientific and technological progress. In July 2023 the European Commission published a proposal to change rules for certain products of gene-editing to reduce the regulatory requirements for organisms developed with gene-editing that contained genetic changes that could have occurred naturally.
See also
Cre recombinase
Cre-Lox recombination
FLP-FRT recombination
Genetic recombination
Recombinase-mediated cassette exchange (exchange of a preexisting "gene cassette" for a "gene of interest")
Regulation of genetic engineering
Site-specific recombinase technology
Toll-like receptor (example of a gene targeted for analysis)
Mus musculus (house mouse; common model organism)
Physcomitrella patens (only plant in which gene targeting is available, as of 1998)
References
External links
Outline of gene targeting by the University of Michigan
Gene targeting in mouse diagram & summary by Heydari lab, Wayne State University
Research highlights on reporter genes used in gene targeting
Targeted gene replacement in barley
Molecular biology
Genetic engineering
Genetics techniques
Biological engineering
Genetically modified organisms | Gene targeting | [
"Chemistry",
"Engineering",
"Biology"
] | 3,217 | [
"Genetics techniques",
"Biological engineering",
"Genetically modified organisms",
"Genetic engineering",
"Molecular biology",
"Biochemistry"
] |
10,440,599 | https://en.wikipedia.org/wiki/Skyline%20matrix | In scientific computing, skyline matrix storage, or SKS, or variable band matrix storage, or envelope storage scheme, is a form of sparse matrix storage that reduces the storage requirement of a matrix more than banded storage. In banded storage, all entries within a fixed distance from the diagonal (called half-bandwidth) are stored. In column-oriented skyline storage, only the entries from the first nonzero entry to the last nonzero entry in each column are stored. There is also row-oriented skyline storage, and, for symmetric matrices, only one triangle is usually stored.
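A minimal sketch of column-oriented skyline storage for a symmetric matrix, keeping each column's upper-triangular entries from the first nonzero down to the diagonal (the flat value array plus per-column start pointers is one common layout, an assumption rather than a fixed standard):

```python
import numpy as np

def to_skyline(A):
    """Column-oriented skyline storage of the upper triangle of a
    symmetric matrix: for each column j, keep the entries from the
    first nonzero row down to the diagonal."""
    n = A.shape[0]
    values = []        # skyline entries, column by column
    col_start = [0]    # index in `values` where each column begins
    for j in range(n):
        rows = np.nonzero(A[:j + 1, j])[0]
        first = rows[0] if rows.size else j  # empty column: keep the diagonal
        values.extend(A[first:j + 1, j])
        col_start.append(len(values))
    return np.array(values), np.array(col_start)

A = np.array([[4., 1., 0., 0.],
              [1., 5., 2., 0.],
              [0., 2., 6., 3.],
              [0., 0., 3., 7.]])
vals, ptr = to_skyline(A)
print(vals)  # column j occupies vals[ptr[j]:ptr[j+1]], ending at the diagonal
print(ptr)
```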
Skyline storage has become very popular in finite element codes for structural mechanics, because the skyline is preserved by Cholesky decomposition (a method of solving systems of linear equations with a symmetric, positive-definite matrix; all fill-in falls within the skyline), and systems of equations from finite elements have a relatively small skyline. In addition, the effort of coding skyline Cholesky is about the same as for Cholesky for banded matrices (available, e.g., in LAPACK).
Before storing a matrix in skyline format, the rows and columns are typically renumbered to reduce the size of the skyline (the number of nonzero entries stored) and to decrease the number of operations in the skyline Cholesky algorithm. The same heuristic renumbering algorithms that reduce the bandwidth are also used to reduce the skyline. One of the earliest and most basic algorithms for this is the reverse Cuthill–McKee algorithm.
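A hedged sketch of such a renumbering, using SciPy's reverse Cuthill–McKee implementation (the example matrix is arbitrary; any bandwidth-reducing permutation also tends to shrink the skyline):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Permute a sparse symmetric matrix with reverse Cuthill-McKee to pull
# its nonzeros toward the diagonal before a skyline factorization.
A = csr_matrix(np.array([[4., 0., 0., 1.],
                         [0., 5., 2., 0.],
                         [0., 2., 6., 0.],
                         [1., 0., 0., 7.]]))
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]  # reordered matrix with a smaller bandwidth/skyline
print(perm)
print(B.toarray())
```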
However, skyline storage is not as popular for very large systems (many millions of equations) because skyline Cholesky is not so easily adapted for massively parallel computing, and general sparse methods, which store only the nonzero entries of the matrix, become more efficient for very large problems due to much less fill-in.
See also
Sparse matrix
Band matrix
Frontal solver
Packed storage matrix
References
Sparse matrices | Skyline matrix | [
"Mathematics"
] | 401 | [
"Matrices (mathematics)",
"Sparse matrices",
"Mathematical objects",
"Combinatorics"
] |
10,442,256 | https://en.wikipedia.org/wiki/Marks%27%20Standard%20Handbook%20for%20Mechanical%20Engineers | Marks' Standard Handbook for Mechanical Engineers is a comprehensive handbook for the field of mechanical engineering. Originally based on an even older German engineering handbook, it was first published in 1916 by Lionel Simeon Marks. In 2017, its 12th edition, published by McGraw-Hill, marked the 100th anniversary of the work. The handbook has been translated into several languages.
Lionel S. Marks was a professor of mechanical engineering at Harvard University and Massachusetts Institute of Technology in the early 1900s.
Topics
The 11th edition consists of 20 sections:
Mathematical Tables and Measuring Units
Mathematics
Mechanics of Solids and Fluids
Heat
Strength of Materials
Materials of Engineering
Fuels and Furnaces
Machine Elements
Power Generation
Materials Handling
Transportation
Building Construction and Equipment
Manufacturing Processes
Fans, Pumps, and Compressors
Electrical and Electronics Engineering
Instruments and Controls
Industrial Engineering
The Regulatory Environment
Refrigeration, Cryogenics, and Optics
Emerging Technologies
Editions
English editions:
1st edition, 1916, edited by Lionel Simeon Marks, based on the German original
2nd edition, 1924, edited by Lionel Simeon Marks
3rd edition, 1930, Editor-in-Chief Lionel S. Marks, total issue 103,500, McGraw-Hill Book Co. Inc.
4th edition, 1941, edited by Lionel Peabody Marks
5th edition, 1951, edited by Lionel Peabody Marks and Alison Peabody Marks
6th edition, 1958, edited by Eugene A. Avallone, Theodore Baumeister III
7th edition, golden (50th) anniversary, 1967, edited by Theodore Baumeister III
8th edition, edited by Theodore Baumeister III, Eugene A. Avallone
9th edition
10th edition, 80th anniversary, 1997, edited by Eugene A. Avallone, Theodore Baumeister III,
11th edition, 90th anniversary, 2007, edited by Eugene A. Avallone, Theodore Baumeister III, Ali M. Sadegh
12th edition, 100th anniversary, 2017, edited by Ali M. Sadegh, William M. Worek, Eugene A. Avallone
See also
References
External links
Publisher's description
Mechanical engineering
Handbooks and manuals
1916 non-fiction books | Marks' Standard Handbook for Mechanical Engineers | [
"Physics",
"Engineering"
] | 413 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
10,443,283 | https://en.wikipedia.org/wiki/Homing%20endonuclease | The homing endonucleases are a collection of endonucleases encoded either as freestanding genes within introns, as fusions with host proteins, or as self-splicing inteins. They catalyze the hydrolysis of genomic DNA within the cells that synthesize them, but do so at very few, or even singular, locations. Repair of the hydrolyzed DNA by the host cell frequently results in the gene encoding the homing endonuclease having been copied into the cleavage site, hence the term 'homing' to describe the movement of these genes. Homing endonucleases can thereby transmit their genes horizontally within a host population, increasing their allele frequency at greater than Mendelian rates.
Origin and mechanism
Although the origin and function of homing endonucleases is still being researched, the most established hypothesis considers them as selfish genetic elements, similar to transposons, because they facilitate the perpetuation of the genetic elements that encode them independent of providing a functional attribute to the host organism.
Homing endonuclease recognition sequences are long enough to occur randomly only with a very low probability, and are normally found in one or very few instances per genome. Generally, owing to the homing mechanism, the gene encoding the endonuclease (the HEG, "homing endonuclease gene") is located within the recognition sequence which the enzyme cuts, thus interrupting the homing endonuclease recognition sequence and limiting DNA cutting only to sites that do not (yet) carry the HEG.
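A back-of-the-envelope sketch of why such sites are so rare, assuming a random genome with equal base frequencies (the 18 bp site length and 3.2 Gbp genome size are illustrative assumptions):

```python
# Expected number of occurrences of an n-bp recognition site in a
# genome of length G, assuming random sequence with equal base usage.
def expected_sites(site_len, genome_len):
    return genome_len * 0.25 ** site_len

# An 18 bp site is expected about once per 4**18 ~ 6.9e10 bp, so even a
# human-sized genome (~3.2e9 bp) carries fewer than one site on average.
print(expected_sites(18, 3.2e9))  # ~0.05
```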
Prior to transmission, one allele carries the gene (HEG+) while the other does not (HEG−), and is therefore susceptible to being cut by the enzyme. Once the enzyme is synthesized, it breaks the chromosome in the HEG− allele, initiating a response from the cellular DNA repair system. The damage is repaired using recombination, taking the pattern of the opposite, undamaged DNA allele, HEG+, that contains the gene for the endonuclease. Thus, the gene is copied to the allele that initially did not have it and it is propagated through successive generations. This process is called "homing".
Nomenclature
Homing endonucleases are always indicated with a prefix that identifies their genomic origin, followed by a hyphen: "I-" for homing endonucleases encoded within an intron, "PI-" (for "protein insert") for those encoded within an intein. Some authors have proposed using the prefix "F-" ("freestanding") for viral enzymes and other natural enzymes not encoded by introns nor inteins, and "H-" ("hybrid") for enzymes synthesized in a laboratory. Next, a three-letter name is derived from the binominal name of the organism, taking one uppercase letter from the genus name and two lowercase letters from the specific name. (Some mixing is usually done for hybrid enzymes.) Finally, a Roman numeral distinguishes different enzymes found in the same organism:
PI-TliII is the second-identified enzyme encoded by an intein found in the archaeon Thermococcus litoralis.
H-DreI is the first synthetic homing endonuclease, created in a laboratory from the enzymes I-DmoI and I-CreI, taken respectively from Desulfurococcus mobilis and Chlamydomonas reinhardtii.
Comparison to restriction enzymes
Homing endonucleases differ from Type II restriction enzymes in several respects:
Whereas Type II restriction enzymes bind short, usually symmetric, recognition sequences of 4 to 8 bp, homing endonucleases bind very long and in many cases asymmetric recognition sequences spanning 12 to 40 bp.
Homing endonucleases are generally more tolerant of substitutions in the recognition sequence. Minor variations in the recognition sequence usually decrease the activity of homing endonucleases, but often do not completely abolish it as often occurs with restriction enzymes.
Homing endonucleases share structural motifs that suggest there are four families, whereas it has not been possible to determine simply recognisable and distinguishable families of Type II restriction enzymes.
Homing endonucleases act as monomers or homodimers, and often require associated proteins to regulate their activity or form ribonucleoprotein complexes, wherein RNA is an integral component of the catalytic apparatus. Type II restriction enzymes can also function alone, as monomers or homodimers, or with additional protein subunits, but the accessory subunits differ from those of the homing endonucleases. Thus, they can require restriction, modification, and specificity subunits for their action.
Finally, homing endonucleases have a broader phylogenetic distribution, occurring in all three biological domains—the archaea, bacteria and eukarya. Type II restriction enzymes occur only in archaea, bacteria and certain viruses. Homing endonucleases are also expressed in all three compartments of the eukaryotic cell: nuclei, mitochondria and chloroplasts. Open reading frames encoding homing endonucleases have been found in introns, inteins, and in freestanding form between genes, whereas genes encoding Type II restriction enzyme genes have been found only in freestanding form, almost always in close association with genes encoding cognate DNA modifying enzymes. Thus, while the Type II restriction enzymes and homing endonucleases share the function of cleaving double-stranded DNA, they appear to have evolved independently.
Structural families
Currently there are six known structural families. Their conserved structural motifs are:
LAGLIDADG: Every polypeptide has 1 or 2 LAGLIDADG motifs. The sequence LAGLIDADG is a conserved sequence of amino acids where each letter is a code that identifies a specific residue. This sequence is directly involved in the DNA cutting process. Those enzymes that have only one motif work as homodimers, creating a saddle that interacts with the major groove of each DNA half-site. The LAGLIDADG motifs contribute amino acid residues to both the protein-protein interface between protein domains or subunits, and to the enzyme's active sites. Enzymes that possess two motifs in a single protein chain act as monomers, creating the saddle in a similar way. The first structures to be determined of homing endonucleases (of PI-SceI and I-CreI, both reported in 1997) were both from the LAGLIDADG structural family. The following year, the first structure of a homing endonuclease (I-CreI) bound to its DNA target site was also reported.
GIY-YIG: These have only one GIY-YIG motif, in the N-terminal region, that interacts with the DNA in the cutting site. The prototypic enzyme of this family is I-TevI which acts as a monomer. Separate structural studies have been reported of the DNA-binding and catalytic domains of I-TevI, the former bound to its DNA target and the latter in the absence of DNA.
His-Cys box: These enzymes possess a region of 30 amino acids that includes 5 conserved residues: two histidines and three cysteines. They co-ordinate the metal cation needed for catalysis. I-PpoI is the best characterized enzyme of this family and acts as a homodimer. Its structure was reported in 1998. It is possibly related to the H-N-H family, as they share common features.
H-N-H: These have a consensus sequence of approximately 30 amino acids. It includes two pairs of conserved histidines and one asparagine that create a zinc finger domain. I-HmuI is the best characterized enzyme of this family, and acts as a monomer. Its structure was reported in 2004.
PD-(D/E)xK: These enzymes contain a canonical nuclease catalytic domain typically found in type II restriction endonucleases. The best characterized enzyme in this family, I-Ssp6803I, acts as a tetramer. Its structure was reported in 2007. The overall fold is conserved in many endonuclease families, all of which belong to the PD-(D/E)xK superfamily.
Vsr-like/EDxHD (DUF559): These enzymes were discovered in the Global Ocean Sampling Metagenomic Database and first described in 2009. The term 'Vsr-like' refers to the presence of a C-terminal nuclease domain that displays recognizable homology to bacterial very short patch repair (Vsr) endonucleases. The structure was solved in 2011, confirming the Vsr homology. The family is considered part of the PD-(D/E)xK superfamily.
Domain architecture
The yeast homing endonuclease PI-SceI is a LAGLIDADG-type endonuclease encoded as an intein that splices itself out of another protein. The high-resolution structure reveals two domains: an endonucleolytic centre resembling the C-terminal domain of Hedgehog proteins, and a Hint domain (Hedgehog/intein) containing the protein-splicing active site.
See also
REBASE, a comprehensive restriction enzyme database from New England Biolabs with links to related literature.
List of homing endonuclease cutting sites
I-CreI homing endonuclease
Meganucleases
Restriction enzyme
Introns and inteins
Intragenomic conflict: Homing endonuclease genes
Transposon
References
External links
Protein domains
Molecular biology
Biotechnology
Restriction enzymes | Homing endonuclease | [
"Chemistry",
"Biology"
] | 2,071 | [
"Genetics techniques",
"Protein classification",
"Biotechnology",
"Protein domains",
"nan",
"Molecular biology",
"Biochemistry",
"Restriction enzymes"
] |
10,443,327 | https://en.wikipedia.org/wiki/Medea%20gene | Medea is a gene from the fruit fly Drosophila melanogaster that was one of the first two Smad genes discovered. For both genes, the maternal effect lethality was the basis for the selection of their names. Medea was named for the mythological Greek Medea, who killed her progeny fathered by Jason.
Both Medea and Mothers against dpp were identified in a genetic screen for maternal effect mutations that caused lethality of heterozygous decapentaplegic progeny. Because decapentaplegic is a bone morphogenetic protein in the transforming growth factor beta superfamily, identification of the fly Smad genes provided a much-needed clue to understand the signal transduction pathway for this diverse family of extracellular proteins. Humans, mice, and other vertebrates have a gene with the same function as Medea, called SMAD4. An overview of the biology of Medea is found at The Interactive Fly, and the details of Medea's genetics and molecular biology are curated on FlyBase.
Another laboratory used Medea as an acronym to describe a synthetic gene causing maternal effect dominant embryonic arrest. The formal genetic designation for maternal effect dominant embryonic arrest is P{Medea.myd88}; more details are in FlyBase.
References
Transcription factors
Proteins
Medea | Medea gene | [
"Chemistry",
"Biology"
] | 271 | [
"Biomolecules by chemical classification",
"Molecular and cellular biology stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Molecular biology",
"Proteins",
"Transcription factors"
] |
12,734,062 | https://en.wikipedia.org/wiki/Regulated%20rewriting | Regulated rewriting is a specific area of formal languages studying grammatical systems which are able to take some kind of control over the production applied in a derivation step. For this reason, the grammatical systems studied in regulated rewriting theory are also called "grammars with controlled derivations". Among such grammars are the following:
Matrix Grammars
Basic concepts
Definition
A Matrix Grammar, $MG = (N, T, M, S)$, is a four-tuple where
1.- $N$ is an alphabet of non-terminal symbols
2.- $T$ is an alphabet of terminal symbols disjoint with $N$
3.- $M = \{m_1, m_2, \ldots, m_n\}$ is a finite set of matrices, which are non-empty sequences
$m_i = [p_{i_1}, \ldots, p_{i_{k(i)}}]$,
with $k(i) \geq 1$, and
$1 \leq i \leq n$, where each
$p_{i_j}$, $1 \leq j \leq k(i)$, is an ordered pair
$p_{i_j} = (L_{i_j}, R_{i_j})$ being $L_{i_j} \in (N \cup T)^* N (N \cup T)^*$, $R_{i_j} \in (N \cup T)^*$;
these pairs are called "productions", and are denoted
$L_{i_j} \rightarrow R_{i_j}$. In these conditions the matrices can be written down as $m_i = [L_{i_1} \rightarrow R_{i_1}, \ldots, L_{i_{k(i)}} \rightarrow R_{i_{k(i)}}]$
4.- $S \in N$ is the start symbol
Definition
Let $MG = (N, T, M, S)$ be a matrix grammar and let $P$ be
the collection of all productions on matrices of $MG$.
We say that $MG$ is of type $i$ according to Chomsky's hierarchy with $i = 0, 1, 2, 3$, or "increasing length"
or "linear" or "without $\lambda$-productions" if and only if the grammar $G = (N, T, P, S)$ has the corresponding property.
The classic example
Note: taken from Abraham 1965, with a change of nonterminal names
The context-sensitive language
$L(G) = \{ a^n b^n c^n : n \geq 1 \}$
is generated by the matrix grammar $MG = (N, T, M, S)$,
where
$N = \{S, A, B, C\}$ is the non-terminal set,
$T = \{a, b, c\}$ is the terminal set,
and the set of matrices is defined as
$m_1 : [S \rightarrow ABC]$,
$m_2 : [A \rightarrow aA, B \rightarrow bB, C \rightarrow cC]$,
$m_3 : [A \rightarrow a, B \rightarrow b, C \rightarrow c]$.
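A minimal sketch of how these matrices drive a derivation, applying every production of a matrix, in order, to the leftmost occurrence of its left-hand side (an illustrative toy, not a general matrix-grammar engine):

```python
# Apply one matrix: every production (lhs, rhs) in order, each
# rewriting the leftmost occurrence of lhs in the sentential form.
def apply_matrix(form, matrix):
    for lhs, rhs in matrix:
        if lhs not in form:
            return None  # the matrix is not applicable as a whole
        form = form.replace(lhs, rhs, 1)
    return form

m1 = [("S", "ABC")]
m2 = [("A", "aA"), ("B", "bB"), ("C", "cC")]
m3 = [("A", "a"), ("B", "b"), ("C", "c")]

# Derive a^3 b^3 c^3 with the matrix sequence m1, m2, m2, m3.
form = "S"
for m in (m1, m2, m2, m3):
    form = apply_matrix(form, m)
print(form)  # aaabbbccc
```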
Time Variant Grammars
Basic concepts
Definition
A Time Variant Grammar is a pair $(G, v)$ where
$G = (N, T, P, S)$ is a grammar and $v : \mathbb{N} \rightarrow 2^P$ is a function from the set of natural
numbers to the class of subsets of the set of productions.
Programmed Grammars
Basic concepts
Definition
A Programmed Grammar is a pair $(G, (\sigma, \varphi))$ where
$G = (N, T, P, S)$ is a grammar and $\sigma, \varphi : P \rightarrow 2^P$ are the success and fail functions from the set of productions
to the class of subsets of the set of productions.
Grammars with regular control language
Basic concepts
Definition
A Grammar With Regular Control Language,
$GWRCL = (G, e)$, is a pair where
$G = (N, T, P, S)$ is a grammar and $e$ is a regular expression over the alphabet of the set of productions.
A naive example
Consider the CFG
$G = (N, T, P, S)$ where
$N = \{S, A, B, C\}$ is the non-terminal set,
$T = \{a, b, c\}$ is the terminal set,
and the productions set is defined as $P = \{p_1, p_2, p_3, p_4, p_5, p_6, p_7\}$
being
$p_1 : S \rightarrow ABC$,
$p_2 : A \rightarrow aA$,
$p_3 : B \rightarrow bB$,
$p_4 : C \rightarrow cC$,
$p_5 : A \rightarrow a$,
$p_6 : B \rightarrow b$, and
$p_7 : C \rightarrow c$.
Clearly,
$L(G) = \{ a^i b^j c^k : i, j, k \geq 1 \}$.
Now, considering the productions set $P$
as an alphabet (since it is a finite set),
define the regular expression over $P$:
$e = p_1 (p_2 p_3 p_4)^* (p_5 p_6 p_7)$.
Combining the CFG grammar $G$ and the regular expression
$e$, we obtain the CFGWRCL $(G, e)$
which generates the language
$L = \{ a^n b^n c^n : n \geq 1 \}$.
Besides these, there are other grammars with regulated rewriting; the four cited above are good examples of how to extend context-free grammars with some kind of control mechanism to obtain a grammatical device with Turing machine power.
References
Salomaa, Arto (1973) Formal languages. Academic Press, ACM monograph series
Rozenberg, G.; Salomaa, A. (eds.) 1997, Handbook of formal languages. Berlin; New York : Springer (set) (3540604200 : v. 1; 3540606483 : v. 2; 3540606491: v. 3)
Dassow, Jürgen; Păun, Gheorghe (1990). Regulated Rewriting in Formal Language Theory. Springer-Verlag, New York. 308 pp.
Dassow, Jürgen, Grammars with Regulated Rewriting. Lecture in the 5th PhD Program "Formal Languages and Applications", Tarragona, Spain, 2006.
Abraham, S. 1965. Some questions of language theory, Proceedings of the 1965 International Conference On Computational Linguistics, pp. 1–11, Bonn, Germany,
Formal languages
Formal methods | Regulated rewriting | [
"Mathematics",
"Engineering"
] | 725 | [
"Software engineering",
"Formal languages",
"Mathematical logic",
"Formal methods"
] |
12,735,789 | https://en.wikipedia.org/wiki/Zero-ohm%20link | A zero-ohm link or zero-ohm resistor is a wire link packaged in the same physical package format as a resistor. It is used to connect traces on a printed circuit board (PCB). This format allows it to be placed on the circuit board using the same automated equipment used to place other resistors, instead of requiring a separate machine to install a jumper or other wire. Zero-ohm resistors may be packaged like cylindrical resistors, or like surface-mount resistors.
Use
One use is to allow traces on the same side of a PCB to cross: one trace has a zero-ohm resistor while other traces can run in between the leads/pads of the zero-ohm resistor, avoiding contact with the first trace. Zero ohm resistors can also be used as configuration jumpers or in places where it should be easy to disconnect and reconnect electrical connections within a PCB to diagnose problems.
The resistance is only approximately zero; only a maximum is specified, which is typically in the range of 10–50 mΩ. However, variants with ultra-low resistance of under 0.5 mΩ are available. A percentage tolerance would not make sense, as it would be specified as a percentage of the ideal value of zero ohms (which would always be zero). However, it is common practice for manufacturers and retailers to list zero-ohm resistors with a percentage tolerance. In cases where a percentage tolerance is specified, this value generally refers to the tolerance class that should be referred to within the datasheet or parts catalogue in order to find the absolute maximum resistance specification for the 0 Ω part.
An axial-lead through-hole zero-ohm resistor is generally marked with a single black band, the symbol for "0" in the resistor color code. Surface-mount zero-ohm resistors are usually marked with a single or multiple "0" (if size allows marking), where the number of digits can indicate the tolerance or maximum resistance rating, as is the case with regular resistors. They are often implemented as thick film resistors.
See also
Jumper (electrical)
Shunt (electrical)
Fuse (electrical)
Bead (electrical)
References
External links
Why Zero-Ohm Resistors?
Resistive components | Zero-ohm link | [
"Physics"
] | 472 | [
"Resistive components",
"Physical quantities",
"Electrical resistance and conductance"
] |
12,740,323 | https://en.wikipedia.org/wiki/Self-propelled%20modular%20transporter | A self-propelled modular transporter or sometimes self-propelled modular trailer (SPMT) is a platform heavy hauler with a large array of wheels which is an upgraded version of a hydraulic modular trailer. SPMTs are used for transporting massive objects, such as large bridge sections, oil refining equipment, cranes, motors, spacecraft and other objects that are too big or heavy for trucks. Ballast tractors can however provide traction and braking for the SPMTs on inclines and descents.
SPMTs are used in many industry sectors worldwide such as the construction and oil industries, in the shipyard and offshore industry, for road transportation, on plant construction sites and even for moving oil platforms. They have begun to be used to replace bridge spans in the United States, Europe, Asia and more recently Canada.
Specifications
A typical SPMT has a grid of computer-controlled axles, usually 2 axles across and 4–8 axles along. When two (or more) axles are placed in series, this is called an axle line. All axles are individually controllable, in order to evenly distribute weight and to steer accurately. Each axle can swivel through 270°, with some manufacturers offering up to a full 360° of motion. The axles are coordinated by the control system to allow the SPMT to turn, move sideways or even rotate in place. Some SPMTs allow the axles to telescope independently of each other so that the load can be kept flat and evenly distributed while moving over uneven terrain. Each axle can also contain a hydrostatic drive unit.
A hydraulic power pack can be attached to the SPMT to provide power for steering, suspension and drive functions. This power pack is driven by an internal combustion engine. A single power pack can drive a string of SPMTs. As SPMTs often carry the world's heaviest loads on wheeled vehicles, they move very slowly when fully loaded. Some SPMTs are controlled by a worker with a hand-held control panel, while others have a driver cabin. Multiple SPMTs can be linked (lengthwise and side-by-side) to transport massive building-sized objects. The linked SPMTs can be controlled from a single control panel.
History
The first modular self-propelled trailers were built in the 1970s. In the early 1980s, heavy haulage company Mammoet refined the concept into the form seen today. They set the width of the modules at 2.44 m, so the modules would fit on an ISO container flatrack. They also added 360° steering. They commissioned Scheuerle to develop and build the first units. Deliveries started in 1983. The two companies defined the standard units: a 4-axle SPMT, a 6-axle SPMT and a hydraulic power pack. Over the years, new types of modules were added to this system to accommodate a range of payloads.
In 2016 ESTA (the European Association of Abnormal Load Transport and Mobile Cranes) published the first SPMT best practice guide to help address the problem of trailers occasionally tipping over, which happened even when the operating rules and stability calculations had been precisely followed.
Some shipbuilding companies have started to use SPMTs instead of gantry cranes for carrying ship sections. This has reduced the cost of transporting huge loads by millions of dollars.
In 2022 Mammoet and Scheuerle developed and employed the world's first electric SPMT. This was done with the help of an electric power pack unit (EPPU), which replaced the gas-powered PPU. The eSPMTs help to reduce the carbon footprint of the operating companies and of the haulage industry as a whole. These electric modules are also safer and quieter than the diesel modules, which can benefit operations held in mines and energy plants.
ESTA has plans to develop a European Trailer Operator's License (ETOL) for SPMT operators, an idea backed by top companies operating in the heavy haulage sector such as Goldhofer and the Tii Group. SPMT operators will have to complete specific training and practice to obtain this license before handling these heavy machines on public roads, which should improve the safety standards of the industry.
Achievements
Executing the salvage operation of the sunken ferry MV Sewol in the East China Sea in 2017, the company ALE used SPMTs equivalent to a 600-axle line, exceeding two world records.
In December 2022 Shell plc, a London-based oil company, ordered the decommissioning of its 20,300-ton FPSO Curlew when the ship reached the end of its operational life. This operation was assigned to AF Offshore Decom, a decommissioning specialist company based in Oslo, which partnered with Mammoet of Utrecht to load in and set down the structure in Norway with the help of 748 SPMT axle lines. This was claimed to break two world records, one for the heaviest SPMT movement and another for the most SPMT axle lines used for a transport.
In February 2023 Sinotrans Heavy-Lift, a China-based heavy transport company, moved a hotel building 500 meters in Sanya, Hainan using 254 axle lines of Scheuerle SPMT with the help of 15 power packs. This was claimed to be the world's heaviest building transportation ever. The building in question was almost 300 ft long, 115 ft wide and 65 ft high, and weighed 7,500 tons. The relocation was done to comply with the environmental regulations of the state.
In December 2023 China Shipping Vastwin Project Logistic, a Chinese logistics company and subsidiary of the China-based multinational COSCO Shipping, moved five buildings at the Ningxia Saishang Jiangnan Museum in northern China. The relocation was done to adhere to environmental regulations. The five buildings weighed 11,450 tonnes in total, with the main building, 43 m high, 36.9 m long and 31.5 m wide, weighing 10,000 tonnes; it was moved on 300 lines of SPMT with ten power packs. This was claimed to break three records: the tallest and heaviest building transported, over the longest distance.
Notable manufacturers
Enerpac
Faymonville
Italcarrelli
Mammoet
Greiner Heavy Engineering
Tracta
Transporter Industry International
Nicolas
Seyiton
Operators
Denzai (www.denzai-j.com)
ALE
Mammoet
Alstom
Sarens
Lampson International
CLP Group
Omega Morgan
Nordic Crane (Sweden,Norway & Denmark)
Allelys
See also
Heavy hauler
Applied mechanics
Hydraulic modular trailer
Ballast tractor
References
Machines
Modularity
Engineering vehicles
Heavy haulage | Self-propelled modular transporter | [
"Physics",
"Technology",
"Engineering"
] | 1,348 | [
"Physical systems",
"Engineering vehicles",
"Machines",
"Mechanical engineering"
] |
12,741,040 | https://en.wikipedia.org/wiki/Frequency-locked%20loop | A frequency-lock, or frequency-locked loop (FLL), is an electronic control system that generates a signal that is locked to the frequency of an input or "reference" signal. This circuit compares the frequency of a controlled oscillator to the reference, automatically raising or lowering the frequency of the oscillator until its frequency (but not necessarily its phase) is matched to that of the reference.
A frequency-locked loop is an example of a control system using negative feedback. Frequency-lock loops are used in radio, telecommunications, computers and other electronic applications to generate stable frequencies, or to recover a signal from a noisy communication channel.
A frequency-locked loop is similar to a phase-locked loop (PLL), but only attempts to control the derivative of phase, not the phase itself. Because it tries to do less, an FLL can acquire lock faster and over a wider range than a PLL. Sometimes the two are used in combination, with a frequency-locked loop used initially until the oscillator frequency is close enough to the reference that a PLL can take over.
Advanced applications can use both simultaneously, creating what is called an "FLL-assisted PLL" (FPLL).
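To make the loop action concrete, the following is a minimal discrete-time sketch in Python (illustrative only; the sample rate, frequencies, loop gain and variable names are assumptions, not part of any standard FLL design). A cross-product discriminator estimates the reference's per-sample phase increment from consecutive complex samples, and the loop integrates the resulting frequency error into a numerically controlled oscillator (NCO); only the derivative of phase is controlled.

```python
import numpy as np

# Minimal discrete-time FLL sketch (illustrative; not from any real library).
# The discriminator estimates the reference's per-sample phase increment from
# the signal itself, so the loop controls the derivative of phase, not phase.

fs = 1000.0    # sample rate, Hz (assumed)
f_ref = 50.0   # reference frequency, Hz; hidden from the loop
f_nco = 42.0   # initial guess for the controlled oscillator, Hz
gain = 0.1     # loop gain: trades acquisition speed against noise

t = np.arange(4000) / fs
ref = np.exp(2j * np.pi * f_ref * t)          # complex reference signal

for n in range(1, len(ref)):
    # Cross-product discriminator: angle(z[n] * conj(z[n-1])) is the
    # reference's phase increment over one sample (valid while |dphi| < pi).
    dphi_ref = np.angle(ref[n] * np.conj(ref[n - 1]))
    dphi_nco = 2 * np.pi * f_nco / fs
    freq_error_hz = (dphi_ref - dphi_nco) * fs / (2 * np.pi)

    f_nco += gain * freq_error_hz             # negative feedback on frequency

print(f"NCO settles near {f_nco:.2f} Hz")     # ~50 Hz, whatever the phase offset
```

Because the error term depends only on the frequency difference, the loop acquires lock even with a large initial offset; handing the locked NCO over to a PLL would then align the phase as well.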
See also
Phase-locked loop
References
External links
Electronic design | Frequency-locked loop | [
"Engineering"
] | 266 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
12,742,172 | https://en.wikipedia.org/wiki/Strassmann%27s%20theorem | In mathematics, Strassmann's theorem is a result in field theory. It states that, for suitable fields, suitable formal power series with coefficients in the valuation ring of the field have only finitely many zeroes.
History
It was introduced by .
Statement of the theorem
Let K be a field with a non-Archimedean absolute value $|\cdot|$ and let R be the valuation ring of K. Let $f(x) = \sum_{n \ge 0} a_n x^n$ be a formal power series with coefficients $a_n$ in R other than the zero series, with coefficients $a_n$ converging to zero with respect to $|\cdot|$. Then $f(x)$ has only finitely many zeroes in R. More precisely, the number of zeros is at most $N$, where $N$ is the largest index with $|a_N| = \max_n |a_n|$.
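As a worked illustration (an example chosen here for convenience, not taken from the cited sources): in $K = \mathbb{Q}_p$ with valuation ring $R = \mathbb{Z}_p$, the sketch below exhibits a series whose zero count meets the bound $N$.

```latex
% Worked example (assumed, for illustration): K = \mathbb{Q}_p, R = \mathbb{Z}_p.
\[
  f(x) \;=\; 1 + \sum_{n \ge 1} p^{\,n-1} x^n \;=\; 1 + \frac{x}{1 - p x}
  \qquad (|x| \le 1).
\]
% Here |a_0| = |a_1| = 1 and |a_n| = |p|^{n-1} \to 0, so the largest index N
% with |a_N| = \max_n |a_n| is N = 1: at most one zero in R is allowed.
\[
  f(x) = 0 \iff x(1 - p) = -1 \iff x = \tfrac{1}{p - 1} \in \mathbb{Z}_p,
\]
% a single zero (1 - p is a unit in \mathbb{Z}_p), so the bound N = 1 is attained.
```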
As a corollary, there is no analogue of Euler's identity, $e^{2\pi i} = 1$, in $\mathbb{C}_p$, the field of p-adic complex numbers.
See also
p-adic exponential function
References
External links
Field (mathematics)
Theorems in abstract algebra | Strassmann's theorem | [
"Mathematics"
] | 222 | [
"Theorems in algebra",
"Theorems in abstract algebra"
] |
12,742,710 | https://en.wikipedia.org/wiki/Clofibride | Clofibride is a fibrate. Clofibride is a derivative of clofibrate. In the body it is converted into 4-chlorophenoxyisobutyric acid (clofibric acid), which is the true hypolipidemic agent. So clofibride, just like clofibrate is a prodrug of clofibric acid.
References
2-Methyl-2-phenoxypropanoic acid derivatives
Prodrugs
Chloroarenes
Carboxamides | Clofibride | [
"Chemistry"
] | 116 | [
"Chemicals in medicine",
"Prodrugs"
] |
12,743,856 | https://en.wikipedia.org/wiki/Polymer%20engineering | Polymer engineering is generally an engineering field that designs, analyses, and modifies polymer materials. Polymer engineering covers aspects of the petrochemical industry, polymerization, structure and characterization of polymers, properties of polymers, compounding and processing of polymers and description of major polymers, structure property relations and applications.
History
The word “polymer” was introduced by the Swedish chemist J. J. Berzelius. He considered, for example, benzene (C6H6) to be a polymer of ethyne (C2H2). Later, this definition underwent a subtle modification.
Human use of polymers has a long history; chemical modification of natural polymers began in the mid-19th century. In 1839, Charles Goodyear made a critical advance with rubber vulcanization, which turned natural rubber into a practical engineering material. In 1870, J. W. Hyatt used camphor to plasticize nitrocellulose, making nitrocellulose plastics industrially viable. In 1907, L. Baekeland reported the synthesis of the first thermosetting phenolic resin, which was industrialized in the 1920s as the first synthetic plastic product. In 1920, H. Staudinger proposed that polymers are long-chain molecules in which structural units are connected by covalent bonds; this conclusion laid the foundation for modern polymer science. Subsequently, Carothers divided synthetic polymers into two broad categories: polycondensates obtained by polycondensation reactions and addition polymers obtained by polyaddition reactions. In the 1950s, K. Ziegler and G. Natta discovered coordination polymerization catalysts and pioneered the synthesis of stereoregular polymers. In the decades after the concept of macromolecules was established, polymer synthesis developed rapidly, and many important polymers were industrialized one after another.
Classification
The basic division of polymers into thermoplastics, elastomers and thermosets helps define their areas of application.
Thermoplastics
A thermoplastic is a plastic that softens on heating and hardens on cooling. Most of the plastics used in daily life fall into this category: the material softens and even flows when heated and hardens again on cooling, a process that is reversible and can be repeated. Thermoplastics have relatively low tensile moduli, but also have lower densities and properties such as transparency which make them ideal for consumer products and medical products. They include polyethylene, polypropylene, nylon, acetal resin, polycarbonate and PET, all of which are widely used materials.
Elastomers
An elastomer generally refers to a material that can be restored to its original state after the removal of an external force; a material that is merely elastic is not necessarily an elastomer. An elastomer deforms under weak stress and, once the stress is removed, quickly recovers close to its original shape and size. Elastomers are polymers which have very low moduli and show reversible extension when strained, a valuable property for vibration absorption and damping. They may either be thermoplastic (in which case they are known as thermoplastic elastomers) or crosslinked, as in most conventional rubber products such as tyres. Rubbers in conventional use include natural rubber, nitrile rubber, polychloroprene, polybutadiene, styrene-butadiene and fluorinated rubbers.
Thermosets
A thermoset is made with a thermosetting resin as the main component, combined with various necessary additives, and is shaped into a product through a cross-linking curing process. It is liquid in the early stages of manufacturing or molding; after curing it is insoluble and infusible and cannot be melted or softened again. Common thermosetting plastics are phenolic plastics, epoxy plastics, aminoplasts, unsaturated polyesters and alkyd plastics. Thermosetting plastics and thermoplastics together constitute the two major classes of synthetic plastics. Thermosetting plastics are divided into two types: formaldehyde cross-linking types and other cross-linking types.
Thermosets include phenolic resins, polyesters and epoxy resins, all of which are used widely in composite materials when reinforced with stiff fibers such as fiberglass and aramids. Since crosslinking stabilises the thermoset polymer matrix of these materials, they have physical properties more similar to traditional engineering materials like steel. However, their much lower densities compared with metals make them ideal for lightweight structures. In addition, they suffer less from fatigue, so are ideal for safety-critical parts which are stressed regularly in service.
Materials
Plastic
Plastic is a polymer compound produced by polyaddition or polycondensation; its composition and shape can be varied widely. It is made up of a synthetic resin together with fillers, plasticizers, stabilizers, lubricants, colorants and other additives. The main component of plastic is the resin, meaning the polymer compound before any additives have been incorporated. The term resin was originally applied to oily secretions of plants and animals, such as rosin and shellac. Resin accounts for approximately 40%–100% of the total weight of a plastic. The basic properties of a plastic are mainly determined by the nature of the resin, but additives also play an important role. Some plastics, such as plexiglass and polystyrene, consist essentially of the synthetic resin alone, with few or no additives.
Fiber
Fiber refers to a continuous or discontinuous filament of a substance. Animal and plant fibers play an important role in maintaining tissue. Fibers are widely used: they can be spun into threads and ropes, matted into fibrous layers when making paper or felt, and combined with other materials to form composites. The term covers filamentous materials whether natural or synthetic. In modern life, the application of fiber is ubiquitous, and there are many high-tech fiber products.
Rubber
Rubber refers to highly elastic polymer materials with reversible deformation. Rubber is elastic at room temperature and can be deformed by a small external force; after the force is removed, it returns to its original state. Rubber is a completely amorphous polymer with a low glass transition temperature and a large molecular weight, often greater than several hundred thousand. Highly elastic polymer compounds can be classified into natural rubber and synthetic rubber: natural rubber, such as gum rubber and grass rubber, is extracted from plants and processed, while synthetic rubber is polymerized from various monomers. Rubber can be used as an elastic, insulating, water- and air-impermeable material.
Applications
Polyethylene
Commonly used polyethylenes can be classified into low-density polyethylene (LDPE), high-density polyethylene (HDPE), and linear low-density polyethylene (LLDPE). Among them, HDPE has better thermal, electrical and mechanical properties, while LDPE and LLDPE have better flexibility, impact properties and film-forming properties. LDPE and LLDPE are mainly used for plastic bags, plastic wrap, bottles, pipes and containers; HDPE is widely used in fields such as film, pipelines and daily necessities because of its resistance to many different solvents.
Polypropylene
Polypropylene is widely used in various applications due to its good chemical resistance and weldability, and it has the lowest density among commodity plastics. It is commonly used in packaging, consumer goods, automotive applications and medical applications. Polypropylene sheets are widely used in the industrial sector to produce acid and chemical tanks, sheets, pipes and returnable transport packaging (RTP) because of properties such as high tensile strength and resistance to high temperatures and corrosion.
Composites
Typical uses of composites are monocoque structures for aerospace and automobiles, as well as more mundane products like fishing rods and bicycles. The stealth bomber was the first all-composite aircraft, but many passenger aircraft like the Airbus and the Boeing 787 use an increasing proportion of composites in their fuselages, such as hydrophobic melamine foam. The quite different physical properties of composites gives designers much greater freedom in shaping parts, which is why composite products often look different from conventional products. On the other hand, some products such as drive shafts, helicopter rotor blades, and propellers look identical to metal precursors owing to the basic functional needs of such components.
Biomedical applications
Biodegradable polymers are widely used materials for many biomedical and pharmaceutical applications. These polymers are considered very promising for controlled drug delivery devices. Biodegradable polymers also offer great potential for wound management, orthopaedic devices, dental applications and tissue engineering. Unlike non-biodegradable polymers, they do not require a second surgical step for removal from the body: biodegradable polymers break down and are absorbed by the body after they have served their purpose. Since 1960, polymers prepared from glycolic acid and lactic acid have found a multitude of uses in the medical industry. Polylactides (PLAs) are popular for drug delivery systems due to their fast and adjustable degradation rates.
Membrane technologies
Membrane techniques have been used successfully for years in separations in liquid and gas systems, and polymeric membranes are used most commonly because they are cheaper to produce and their surfaces are easy to modify, which makes them suitable for different separation processes. Polymeric membranes are applied in many fields, including the separation of biologically active compounds, proton exchange membranes for fuel cells, and membrane contactors for carbon dioxide capture.
Related fields
Petroleum / Chemical / Mineral / Geology
Raw materials and processing
New energy
Automobiles and spare parts
Other industries
Electronic Technology / Semiconductor / Integrated Circuit
Machinery / Equipment / Heavy Industry
Medical equipment / instruments
See also
Materials science
Polymer science
Polymers
Medical grade silicone
:Category:Polymer scientists and engineers
References
Bibliography
Lewis, Peter Rhys, and Gagg, C, Forensic Polymer Engineering: Why polymer products fail in service, Woodhead/CRC Press (2010).
Polymers
Engineering disciplines | Polymer engineering | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,152 | [
"Polymers",
"nan",
"Polymer chemistry"
] |
20,685,630 | https://en.wikipedia.org/wiki/Peierls%20transition | A Peierls transition or Peierls distortion is a distortion of the periodic lattice of a one-dimensional crystal. Atomic positions oscillate, so that the perfect order of the 1-D crystal is broken.
Peierls’ theorem
Peierls' theorem states that a one-dimensional equally spaced chain with one electron per ion is unstable.
This theorem was first espoused in the 1930s by Rudolf Peierls. It can be proven using a simple model of the potential for an electron in a 1-D crystal with lattice spacing $a$. The periodicity of the crystal creates energy band gaps in the band diagram at the edge of the Brillouin zone (similar to the result of the Kronig–Penney model, which helps to explain the origin of band gaps in semiconductors). If the ions each contribute one electron, then the band will be half-filled, up to values of $k = \pm\pi/2a$ in the ground state.
Imagine a lattice distortion where every other ion moves closer to one neighbor and further away from the other, so that the unfavourable energy of the long bond between ions is outweighed by the energy gain of the short bond: the period has just doubled from $a$ to $2a$. In essence, the proof relies on the fact that doubling the period would introduce new band gaps located at multiples of $\pi/2a$. This causes small energy savings, based on the distortion of the bands in the vicinity of the new gaps: approaching $k = \pm\pi/2a$, the distortion due to the introduction of the new band gap puts the electrons at a lower energy than they would have in the perfect crystal. Therefore, this lattice distortion becomes energetically favorable when the energy savings due to the new band gaps outweigh the elastic energy cost of rearranging the ions. Of course, this effect will be noticeable only when the electrons are arranged close to their ground state; in other words, thermal excitation should be minimized. Therefore, the Peierls transition is seen at low temperature. This is the basic argument for the occurrence of the Peierls transition, sometimes called dimerization.
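The gap-opening step can be made concrete with a dimerized tight-binding chain, the Su–Schrieffer–Heeger picture listed under "See also". The sketch below is illustrative only; the hopping amplitudes t1 and t2 and the 10% dimerization are assumptions, not values from the text.

```python
import numpy as np

# Dimerized 1-D tight-binding chain (SSH-type model): an illustrative
# sketch of the Peierls mechanism, not a quantitative calculation.
# With alternating hoppings t1, t2 the two bands are
#   E(k) = +/- |t1 + t2 * exp(i*k*2a)|,
# so the gap at the new zone boundary k = pi/(2a) is 2*|t1 - t2|.

a = 1.0                                                   # lattice spacing
k = np.linspace(-np.pi / (2 * a), np.pi / (2 * a), 501)   # folded zone

def bands(t1, t2):
    h = t1 + t2 * np.exp(1j * k * 2 * a)  # off-diagonal Bloch matrix element
    return np.abs(h)                      # upper band E_+(k); E_-(k) = -E_+(k)

undistorted = bands(1.0, 1.0)   # gapless: E = 0 at k = +/- pi/(2a)
distorted = bands(1.1, 0.9)     # 10% Peierls-distorted: gap opens

print("gap, uniform chain   :", 2 * undistorted.min())   # -> ~0.0
print("gap, dimerized chain :", 2 * distorted.min())     # -> ~0.4 = 2|t1-t2|
```

For equal hoppings the two bands touch at $k = \pm\pi/2a$; any dimerization opens a gap of $2|t_1 - t_2|$ there, which is the energy saving that competes with the elastic cost of the distortion.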
Historical background
The earliest written record of the Peierls transition was presented at the 1954 École de physique des Houches. These lecture notes (shown below) contain Rudolf Peierls' handwritten equations and figures, and can be viewed in the library of the Institut Laue–Langevin, in Grenoble, France.
Peierls' discovery gained experimental backing during the effort to find new superconducting materials. In 1964, Dr. William Little of the Stanford University Department of Physics theorized that a certain class of polymer chains may experience a high-Tc superconducting transition. The basis for his assertion was that the lattice distortions that lead to the pairing of electrons in the BCS theory of superconductivity could be replaced by a rearrangement of the electron density in a series of side chains, so that electrons rather than ions would be responsible for creating the Cooper pairs. Because the transition temperature is inversely proportional to the square root of the mass of the charged particle responsible for the distortions, the Tc should be improved by a corresponding factor:

$$ \frac{T_{c,e}}{T_{c,i}} = \sqrt{\frac{M_i}{m_e}}. $$

The subscript i represents "ion", while e represents "electron". The predicted benefit in the superconducting transition temperature was therefore a factor of about 300.
In the 1970s, various organic materials such as TTF-TCNQ were synthesized. What was found is that these materials underwent an insulating transition rather than a superconducting one. Eventually it was realized that these were the first experimental observations of the Peierls transition. With the introduction of new band gaps after the lattice becomes distorted, electrons must overcome this new energy barrier in order to become free to conduct. The simple model of the Peierls distortion as a rearrangement of ions in a 1-D chain could describe why these materials became insulators rather than superconductors.
Related physical consequences
Peierls predicted that the rearrangement of the ion cores in a Peierls transition would produce periodic fluctuations in the electron density. These are commonly called charge density waves, and they are an example of collective charge transport. Several materials systems have verified the existence of these waves. Good candidates are weakly coupled molecular chains, where electrons can move freely along the direction of the chains, but motion is restricted perpendicular to the chains. NbSe3 and K0.3MoO3 are two examples in which charge density waves have been observed at relatively high temperatures of 145 K and 180 K respectively.
Furthermore, the 1-D nature of the material causes a breakdown of the Fermi liquid theory for electron behavior. Therefore, a 1-D conductor should behave as a Luttinger liquid instead. A Luttinger liquid is a paramagnetic one-dimensional metal without Landau quasi-particle excitations.
Research topics
1-D metals have been the subject of much research. Here are a few examples of both theoretical and experimental research efforts to illustrate the broad range of topics:
Theory has shown that polymer chains that have been looped and formed into rings undergo a Peierls transition. These rings demonstrate a persistent current, and the Peierls distortion can be modified by modulating the magnetic flux through the loop.
Density functional theory has been used to calculate the bond length alterations predicted in increasingly long chains of organic oligomers. The selection of which hybrid functional to use is paramount in obtaining an accurate estimate of the bond length alteration caused by Peierls distortions, as some functionals have been shown to overestimate the oscillation, while others underestimate it.
Gold deposited on a stepped Si(553) surface has shown evidence of two simultaneous Peierls transitions. The lattice period is distorted by factors of 2 and 3, and energy gaps open for nearly 1/2-filled and 1/3–1/4 filled bands. The distortions have been studied and imaged using LEED and STM, while the energy bands were studied with ARP.
Luttinger liquids have a power-law dependence of resistance on temperature. This has been shown for purple bronze (Li0.9Mo6O17). Purple bronze may prove to be a very interesting material, since it has shown an anomalous renormalization exponent for the near-Fermi-energy density of states.
The dependence of resonant tunneling through island barriers in a 1-D wire has been studied, and is also found to be a power-law dependence. This offers additional evidence of Luttinger liquid behavior.
See also
Landau–Peierls instability
Charge density wave
Luttinger liquid
Su–Schrieffer–Heeger model
References
Superconductivity
Phase transitions | Peierls transition | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,380 | [
"Physical phenomena",
"Phase transitions",
"Matter",
"Physical quantities",
"Superconductivity",
"Critical phenomena",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Electrical resistance and conductance"
] |
19,486,387 | https://en.wikipedia.org/wiki/Jordan%27s%20totient%20function | In number theory, Jordan's totient function, denoted as , where is a positive integer, is a function of a positive integer, , that equals the number of -tuples of positive integers that are less than or equal to and that together with form a coprime set of integers.
Jordan's totient function is a generalization of Euler's totient function, which is the same as $J_1$. The function is named after Camille Jordan.
Definition
For each positive integer $k$, Jordan's totient function $J_k$ is multiplicative and may be evaluated as

$$ J_k(n) = n^k \prod_{p \mid n} \left(1 - \frac{1}{p^k}\right), $$

where $p$ ranges through the prime divisors of $n$.
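A direct computational sketch of this product formula (illustrative; the trial-division helper is an ad hoc assumption, adequate only for small $n$):

```python
def prime_divisors(n: int) -> list[int]:
    """Distinct prime divisors of n by trial division (fine for small n)."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def jordan_totient(n: int, k: int) -> int:
    """J_k(n) = n^k * prod over primes p | n of (1 - 1/p^k), in exact integers."""
    result = n ** k
    for p in prime_divisors(n):
        result = result // p ** k * (p ** k - 1)  # exact: p^k divides result here
    return result

# J_1 is Euler's totient; J_2(10) = 100 * (1 - 1/4) * (1 - 1/25) = 72.
assert jordan_totient(10, 1) == 4
assert jordan_totient(10, 2) == 72
```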
Properties
$$ \sum_{d \mid n} J_k(d) = n^k, $$

which may be written in the language of Dirichlet convolutions as

$$ J_k \star 1 = \mathrm{Id}_k $$

and via Möbius inversion as

$$ J_k = \mu \star \mathrm{Id}_k. $$

Since the Dirichlet generating function of $\mu$ is $1/\zeta(s)$ and the Dirichlet generating function of $\mathrm{Id}_k$ is $\zeta(s-k)$, the series for $J_k$ becomes

$$ \sum_{n \ge 1} \frac{J_k(n)}{n^s} = \frac{\zeta(s-k)}{\zeta(s)}. $$

An average order of $J_k(n)$ is

$$ \frac{n^k}{\zeta(k+1)}. $$

The Dedekind psi function is

$$ \psi(n) = \frac{J_2(n)}{J_1(n)}, $$

and by inspection of the definition (recognizing that each factor in the product over the primes is a cyclotomic polynomial of $p^{-k}$), the arithmetic functions defined by $\frac{J_k(n)}{J_1(n)}$ or $\frac{J_{2k}(n)}{J_k(n)}$ can also be shown to be integer-valued multiplicative functions. Moreover,

$$ \sum_{\delta \mid n} \delta^s \, J_r(\delta) \, J_s\!\left(\frac{n}{\delta}\right) = J_{r+s}(n). $$
Order of matrix groups
The general linear group of matrices of order $m$ over $\mathbb{Z}_n$ has order

$$ |\mathrm{GL}(m, \mathbb{Z}_n)| = n^{\frac{m(m-1)}{2}} \prod_{k=1}^{m} J_k(n). $$

The special linear group of matrices of order $m$ over $\mathbb{Z}_n$ has order

$$ |\mathrm{SL}(m, \mathbb{Z}_n)| = n^{\frac{m(m-1)}{2}} \prod_{k=2}^{m} J_k(n). $$

The symplectic group of matrices of order $2m$ over $\mathbb{Z}_n$ has order

$$ |\mathrm{Sp}(2m, \mathbb{Z}_n)| = n^{m^2} \prod_{k=1}^{m} J_{2k}(n). $$
The first two formulas were discovered by Jordan.
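These orders can be sanity-checked by brute force for small moduli; the sketch below is illustrative only and reuses jordan_totient from the sketch in the Definition section:

```python
from itertools import product
from math import gcd

def gl2_order_bruteforce(n: int) -> int:
    """Count 2x2 matrices over Z_n whose determinant is a unit mod n."""
    return sum(
        1
        for a, b, c, d in product(range(n), repeat=4)
        if gcd((a * d - b * c) % n, n) == 1
    )

# For m = 2 the formula reads |GL(2, Z_n)| = n * J_1(n) * J_2(n).
for n in (2, 3, 4, 5, 6):
    assert gl2_order_bruteforce(n) == n * jordan_totient(n, 1) * jordan_totient(n, 2)
```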
Examples
Explicit lists in the OEIS are J2 in , J3 in , J4 in , J5 in , J6 up to J10 in up to .
Multiplicative functions defined by ratios are J2(n)/J1(n) in , J3(n)/J1(n) in , J4(n)/J1(n) in , J5(n)/J1(n) in , J6(n)/J1(n) in , J7(n)/J1(n) in , J8(n)/J1(n) in , J9(n)/J1(n) in , J10(n)/J1(n) in , J11(n)/J1(n) in .
Examples of the ratios J2k(n)/Jk(n) are J4(n)/J2(n) in , J6(n)/J3(n) in , and J8(n)/J4(n) in .
Notes
References
External links
Modular arithmetic
Multiplicative functions | Jordan's totient function | [
"Mathematics"
] | 528 | [
"Multiplicative functions",
"Arithmetic",
"Modular arithmetic",
"Number theory"
] |
1,827,146 | https://en.wikipedia.org/wiki/End%20group | End groups are an important aspect of polymer synthesis and characterization. In polymer chemistry, they are functional groups that are at the very ends of a macromolecule or oligomer (IUPAC). In polymer synthesis, like condensation polymerization and free-radical types of polymerization, end-groups are commonly used and can be analyzed by nuclear magnetic resonance (NMR) to determine the average length of the polymer. Other methods for characterization of polymers where end-groups are used are mass spectrometry and vibrational spectrometry, like infrared and raman spectroscopy. These groups are important for the analysis of polymers and for grafting to and from a polymer chain to create a new copolymer. One example of an end group is in the polymer poly(ethylene glycol) diacrylate where the end-groups are circled.
End groups in polymer synthesis
End groups are seen on all polymers and the functionality of those end groups can be important in determining the application of polymers. Each type of polymerization (free radical, condensation or etc.) has end groups that are typical for the polymerization, and knowledge of these can help to identify the type of polymerization method used to form the polymer.
Step-growth polymerization
Step-growth polymerization involves two monomers with bi- or multifunctionality to form polymer chains. Many polymers are synthesized via step-growth polymerization and include polyesters, polyamides, and polyurethanes. A sub-class of step-growth polymerization is condensation polymerization.
Condensation polymerization
Condensation polymerization is an important class of step-growth polymerization in which two monomers react with the release of a water molecule. Since these polymers are typically made up of two or more monomers, the resulting end groups derive from the monomer functionality. Examples of condensation polymers include polyamides, polyacetals and polyesters. An example of a polyester is polyethylene terephthalate (PET), which is made from the monomers terephthalic acid and ethylene glycol. If one of the components in the polymerization is in excess, then that component's functional group (a carboxylic acid or alcohol group, respectively) will be at the ends of the polymer chains.
Free radical polymerization
The end groups that are found on polymers formed through free radical polymerization are a result from the initiators and termination method used. There are many types of initiators used in modern free radical polymerizations, and below are examples of some well-known ones. For example, azobisisobutyronitrile or AIBN forms radicals that can be used as the end groups for new starting polymer chains with styrene to form polystyrene. Once the polymer chain has formed and the reaction is terminated, the end group opposite from the initiator is a result of the terminating agent or the chain transfer agent used.
End groups in graft polymers
Graft copolymers are generated by attaching chains of one monomer to the main chain of another polymer; a branched block copolymer is formed. Furthermore, end groups play an important role in the process of initiation, propagation and termination of graft polymers. Graft polymers can be achieved by either "grafting from" or "grafting to"; these different methods are able to produce a vast array of different polymer structures, which can be tailored to the application in question. The "grafting from" approach involves, for example, generation of radicals along a polymer chain, which can then be reacted with monomers to grow a new polymer from the backbone of another. In "grafting from," the initiation sites on the backbone of the first polymer can be part of the backbone structure originally or generated in situ. The "grafting to" approach involves the reaction of functionalized monomers to a polymer backbone. In graft polymers, end groups play an important role, for example, in the "grafting to" technique the generation of the reactive functionalized monomers occurs at the end group, which is then tethered to the polymer chain. There are various methods to synthesize graft polymers some of the more common include redox reaction to produce free radicals, by free radical polymerization techniques avoiding chain termination (ATRP, RAFT, nitroxide mediated, for example) and step-growth polymerization. A schematic of "grafting from" and "grafting to" is illustrated in the figure below.
The "grafting from" technique involves the generation of radicals along the polymer backbone from an abstraction of a halogen, from either the backbone or a functional group along the backbone. Monomers are reacted with the radicals along the backbone and subsequently generate polymers which are grafted from the backbone of the first polymer. The schematic for "grafting to" shows an example using anionic polymerizations, the polymer containing the carbonyl functionalities gets attacked by the activated polymer chain and generates a polymer attached to the associated carbon along with an alcohol group, in this example. These examples show us the potential of fine tuning end groups of polymer chains to target certain copolymer structures.
Analysis of polymers using end groups
Because of the importance of end groups, there have been many analytical techniques developed for the identification of the groups. The three main methods for analyzing the identity of the end group are by NMR, mass spectrometry (MS) or vibrational spectroscopy (IR or Raman). Each technique has its advantages and disadvantages, which are details below.
NMR spectroscopy
The advantage of NMR for end groups is that it allows not only the identification of the end-group units, but also the quantification of the number-average length of the polymer. End-group analysis with NMR requires that the polymer be soluble in organic or aqueous solvents. Additionally, the signal of the end group must be visible as a distinct spectral frequency, i.e. it must not overlap with other signals. As molecular weight increases, the width of the spectral peaks also increases. As a result, methods which rely on resolution of the end-group signal are mostly used for polymers of low molecular weight (roughly less than 20,000 g/mol number-average molecular weight). By using the information obtained from the integration of a 1H NMR spectrum, the degree of polymerization (Xn) can be calculated. With knowledge of the identity of the end groups/repeat unit and the number of protons contained on each, Xn can then be calculated. In the example above, once the 1H NMR has been integrated and the values have been normalized, the degree of polymerization is calculated by simply dividing the normalized value for the repeat unit by the number of protons contained in the repeat unit. In this case Xn = 100/2, and therefore Xn = 50: there are 50 repeat units in this polymer.
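The same arithmetic as a short sketch; the integral values are hypothetical placeholders, not data from a real spectrum:

```python
# Degree of polymerization from 1H NMR integrals, following the worked
# example above; the integral values are hypothetical, not measured data.

repeat_unit_integral = 100.0     # repeat-unit signal, normalized against
                                 # the end-group signal (set to 1)
protons_per_repeat_unit = 2      # protons contributing to that signal

X_n = repeat_unit_integral / protons_per_repeat_unit
print(X_n)                       # 50.0 -> about 50 repeat units per chain

# With a known repeat-unit molar mass M0 (g/mol), a number-average molecular
# weight estimate follows as M_n = X_n * M0 plus the end-group masses.
```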
Mass spectrometry
Mass spectrometry (MS) is helpful for the determination of the molecular weight of the polymer, structure of the polymer, etc. Although chemists utilize many kinds of MS, the two that are used most typically are matrix-assisted laser desorption ionization/time of flight (MALDI-TOF) and electrospray ionization-mass spectroscopy (ESI-MS). One of the biggest disadvantages of this technique is that much like NMR spectroscopy the polymers have to be soluble in some organic solvent. An advantage of using MALDI is that it provides the simpler data to interpret for end group identification compared with ESI, but a disadvantage is that the ionization can be rather hard and as a result some end groups do not remain intact for analysis. Because of the harsh ionization in MALDI, one of the biggest advantages of using ESI is for its "softer" ionization methods. The disadvantage of using ESI is that the data obtained can be very complex due to the mechanism of the ionization and thus can be difficult to interpret.
Vibrational spectroscopy
The vibrational spectroscopy methods used to analyze the end groups of a polymer are infrared (IR) and Raman spectroscopy. These methods are useful in that the polymers do not need to be soluble in a solvent, and spectra can be obtained directly from solid material. A disadvantage of the technique is that typically only qualitative data are obtained on the identity of the end groups.
End group removal
Controlled radical polymerization, namely reversible addition−fragmentation chain-transfer polymerization (RAFT), is a common method for the polymerization of acrylates, methacrylates and acrylamides. Usually, a thiocarbonate is used in combination with an effective initiator for RAFT. The thiocarbonate moiety can be functionalized at the R-group for end group analysis. The end group is a result of the propagation of chain-transfer agents during the free-radical polymerization process. The end groups can subsequently be modified by the reaction of the thiocarbonylthio compounds with nucleophiles and ionic reducing agents.
The method for removal of thiocarbonyl-containing end groups involves reacting the polymers containing the end groups with an excess of radicals, which add to the reactive C=S bond of the end group to form an intermediate radical. The remaining radical on the polymer chain can be hydrogenated by what is referred to as a trapping group and terminate; this results in a polymer free of end groups at the α and ω positions.
Another method of end-group removal for the thiocarbonyl-containing end groups of RAFT polymers is the application of heat to the polymer, referred to as thermolysis. One method of monitoring the thermolysis of RAFT polymers is thermogravimetric analysis, which registers the weight loss of the end group. An advantage of this technique is that no additional chemicals are required to remove the end group; however, the polymer must be thermally stable at high temperature, so the method may not be effective for some polymers. Depending on the polymer's sensitivity to ultraviolet (UV) radiation, decomposition of end groups by UV has been reported in recent years to be effective, but preliminary data suggest that decomposition by UV leads to a change in the molecular weight distribution of the polymer.
Surface modification using RAFT
Surface modification has gained a lot of interest in recent years for a variety of applications. An example of the application of free radical polymerizations to forming new architectures is through RAFT polymerizations which result in dithioester end groups. These dithioesters can be reduced to the thiol which can be immobilized on a metal surface; this is important for applications in electronics, sensing and catalysis. The schematic below demonstrates the immobilization of copolymers onto a gold surface as reported for poly(sodium 4-styrenesulfonate) by the McCormick group at the University of Southern Mississippi.
References
Polymer chemistry | End group | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,301 | [
"Materials science",
"Polymer chemistry"
] |
1,827,303 | https://en.wikipedia.org/wiki/Striking%20clock | A striking clock is a clock that sounds the hours audibly on a bell, gong, or other audible device. In 12-hour striking, used most commonly in striking clocks today, the clock strikes once at 1:00 am, twice at 2:00 am, continuing in this way up to twelve times at 12:00 mid-day, then starts again, striking once at 1:00 pm, twice at 2:00 pm, and the pattern continues up to twelve times at 12:00 midnight.
The striking feature of clocks was originally more important than their clock faces; the earliest clocks struck the hours, but had no dials to enable the time to be read. The development of mechanical clocks in 12th century Europe was motivated by the need to ring bells upon the canonical hours to call the community to prayer. The earliest known mechanical clocks were large striking clocks installed in towers in monasteries or public squares, so that their bells could be heard far away. Though an early striking clock in Syria was a 12-hour clock, many early clocks struck up to 24 strokes, particularly in Italy, where the 24-hour clock, keeping Italian hours, was widely used in the 14th and 15th centuries. As the modern 12-hour clock became more widespread, particularly in Great Britain and Northern Europe, 12-hour striking became more widespread and eventually became the standard. In addition to striking on the hour, many striking clocks play sequences of chimes on the quarter-hours. The most common sequence is Westminster Quarters.
Today the time-disseminating function of clock striking is almost no longer needed, and striking clocks are kept for historical, traditional, and aesthetic reasons. Historic clock towers in towns, universities, and religious institutions worldwide still strike the hours, famous examples being Big Ben in London, the Peace Tower in Ottawa, and the Kremlin Clock in Moscow. Home striking clocks, such as mantel clocks, cuckoo clocks, grandfather clocks and bracket clocks are also very common.
A typical striking clock will have two gear trains, because a striking clock must add a striking train that operates the mechanism that rings the bell in addition to the timekeeping train that measures the passage of time.
Passing strike
The most basic sort of striking clock simply sounds a bell once every hour; this is called a passing strike clock. Passing strike was simple to implement mechanically; all that must be done is to attach a cam to a shaft that rotates once per hour; the cam raises and then lets a hammer fall that strikes the bell. The first tower clocks, mounted in towers in cathedrals, abbeys, and monasteries to call the community to prayer, which originated in Medieval Europe before the invention of the mechanical clock in the 13th century, were water clocks which used the passing strike mechanism; they rang once for each canonical hour.
Before European clocks, China developed a water-driven astronomical clockwork technology, starting with the first-century AD scientist Zhang Heng (78–139). The Tang dynasty Chinese Buddhist monk and inventor Yi Xing (683–727) created a rotating celestial globe turned by a water-clock mechanism driven by a waterwheel. This featured two wooden gear jacks on its horizon surface with a drum and a bell, the bell being struck automatically every hour and the drum every quarter-hour. It is recorded that Confucian students in the year 730 were required to write an essay on this device in order to pass the Imperial examinations. Clock jacks that sounded the hours were also used in later clock towers of Song dynasty China, such as those designed by Zhang Sixun and Su Song in the 10th and 11th centuries, respectively.
An early striking clock outside China was the clock tower near the Umayyad Mosque in Damascus, Syria, which struck once every hour. It is the subject of a book, On the Construction of Clocks and their Use (1203), by Riḍwān ibn al-Sāʿātī, the son of a clockmaker. The Florentine writer Dante Alighieri made a reference to the gear works of striking clocks in 1319. One of the older clock towers still standing is St Mark's Clocktower in St Mark's Square, Venice. The St Mark's Clock was commissioned in 1493 from the famous clockmaker Gian Carlo Rainieri of Reggio Emilia, whose father Gian Paolo Rainieri had already constructed another famous device in 1481. In 1497, Simone Campanato cast the great bell, which was put on the top of the tower, where it is alternately struck by the Due Mori (Two Moors), two bronze statues wielding hammers.
Counting the hours
During the great wave of tower clock building in 14th-century Europe, around the time of the invention of the mechanical clock itself, striking clocks were built that struck the bell multiple times, to count out the hours. The clock of the Beata Vergine (later San Gottardo) in Milan, built around 1330, was one of the earliest recorded that struck the hours. In 1335, Galvano Fiamma wrote:
The astronomical clock designed by Richard of Wallingford in 1327 and built around 1354, also struck 24 hours.
Some rare clocks use a form of striking known as "Roman Striking" invented by Joseph Knibb, in which a large bell or lower tone is sounded to represent "five", and a small bell or high tone is sounded to represent "one". For example, four o'clock would be sounded as a high tone followed by a low tone, whereas the hour of eleven o'clock would be sounded by two low tones followed by a high tone. The purpose is to conserve the power of the striking train. For example, "VII" would be a total of three strikes instead of seven, and "XII" would be four strikes instead of twelve. Clocks using this type of striking usually represent four o'clock on the dial with an "IV" rather than the more common "IIII", so that the Roman numerals correspond with the sequence of strikes on the high and low bells.
One small table clock of this type sold from the George Daniels collection at Sotheby's on 6 November 2012 for £1,273,250.
Countwheel
Two mechanisms have been devised by clockmakers to enable striking clocks to correctly count out the hours. The earlier, which appeared in the first striking clocks in the 14th century, is called "countwheel striking". This uses a wheel that contains notches on its side, spaced by unequal, increasing arc segments. This countwheel governs the rotation of the striking train. When the striking train is released by the timekeeping train, a lever is lifted from a notch on the countwheel; the uneven notches allow the striking train to move only far enough to sound the correct number of times, after which the lever falls back into the next notch and stops the striking train from turning further.
The countwheel has the disadvantage of being entirely independent of the timekeeping train; if the striking train winds down, or for some other reason the clock fails to strike, the countwheel will become out of synch with the time shown by the hands, and must be resynchronized by manually releasing the striking train until it moves around to the correct position.
Rack striking
In the late seventeenth century, rack striking was invented. Rack striking is so called because it is regulated by a rack and snail mechanism. The distance a rack is allowed to fall is determined by a snail-shaped cam, thereby regulating the number of times the bell is allowed to sound. There was a misconception during the 20th century that the rack and snail mechanism was invented by British clergyman Edward Barlow in 1675–6. In fact, the inventor is unknown.
The snail-shaped cam is a part of the timekeeping train that revolves every twelve hours; often the snail is attached to the same pipe on which the hour hand is mounted. The diameter of the cam is largest at the one o'clock position, permitting the rack to move only a short distance, after which the striking train is stopped; it is smallest at the 12 o'clock position, which allows the rack to move the farthest. Striking stops when the last tooth of the rack has been taken up by the gathering pallet.
Because the number of strikes on the hour is determined by the position of the snail which rotates in tandem with the hour hand, rack striking seldom becomes desynchronized. Rack striking also made possible the repeating clock, which can be made to repeat the last hour struck by pressing a button. Rack striking became the standard mechanism used in striking clocks down to the present.
Parts of mechanism
All hour striking mechanisms have these parts. The letters below refer to the diagram.
Power source – This is usually identical to the device that powers the clock's timekeeping mechanism: in weight driven clocks it is a second weight on a cord (P), in spring driven clocks it is another mainspring. Although older one-day (30-hour) clocks often used a single weight or mainspring to drive both the timekeeping and striking trains, better clocks used a separate power source, because the striking mechanism consumes a lot of power and often has to be wound more frequently, and also to isolate the delicate timekeeping train from the large movements that occur in the striking train. Winding a striking clock requires winding both the timing and striking parts separately.
Striking train – This is a gear train (G,H) that scales down the force of the power source and transmits it to the hammer mechanism which rings the gong. In antique clocks, to reduce the manufacturing cost, it was often exactly the same as the timing train that ran the clock's timekeeping part, and installed parallel to it, on the left side as one faces the clock.
Regulator – A device to prevent the striking train from running too fast, and control the speed of striking. If it wasn't present, the striking train when released would run out of control under the force of the spring or weight. In most clocks it is a simple fly fan (or fan fly) (K), a flat piece of sheet metal mounted on the fastest turning gear shaft. When the striking train turns, this beats the air, and the air friction limits the speed of the train. Striking watches and some modern clocks use a centrifugal governor instead.
Count mechanism – This is the critical part mentioned above, that releases the striking train at the proper time and counts out the proper number of strikes. It is the only part of the striking mechanism that is attached to the clock's timekeeping works. Virtually all modern clocks use the rack and snail. The snail (N) is usually mounted on the clock's center wheel shaft, which turns once every 12 hours. There is also a release lever (L) which on the hour releases the rack and allows the timing train to turn.
Hammer and gong – The hammer lever (F) is actuated by pins or teeth (G) on one of the striking train wheels. As the wheel turns the pin lifts the hammer lever, until the lever slips off the pin, allowing the hammer to drop, hitting the gong (E). Early house clocks used traditional hemispherical shaped bells. Later house clocks used gongs made of long steel tubes or bars, which have a sound more like large church bells. Mantel and other small clocks use thick hardened steel wires, which are coiled into a spiral to save space.
Clocks that have more elaborate functions than just striking the hours, such as chiming the quarter hours, or playing tunes, are called "chiming clocks" by clockmakers. The additional functions are usually run by a second complete striking mechanism separate from the (hour) striking train, called the "chiming train". These clocks have three weights or mainsprings, to power the timing train, striking train, and chiming train.
How it works
This describes how the rack and snail striking mechanism works. The labels refer to the drawing above.
The release lever (L) holds the rack (M) up when the clock is not striking. On the shaft of the minute hand (not shown), which rotates once per hour, there is a projection. As the change of the hour approaches, this projection slowly lifts the release lever, allowing the rack to fall until its point rests on the snail (N). The amount the rack can fall, and thus the number of strikes, is determined by the position of the snail. Exactly on the hour the striking train (G, H, K) is released and begins to turn. As it turns, the pins (G) repeatedly lift the hammer (F) and allow it to drop, ringing the gong (E). The gear ratios are arranged so that the gear wheel (H) makes one revolution each strike. A small pin (S) on this wheel engages the rack teeth, lifting the rack up by one tooth each turn. When the rack reaches the end of its teeth it stops the striking train from turning (using a mechanism not shown in the diagram, in such a way that gear (H) is held stationary with the pin (S) not engaging the rack, so that the rack is able to fall freely again on the next hour). So the number of strikes is equal to the number of teeth of the rack which are used, which depends on the position of the snail.
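The counting logic of the rack and snail can be summarized schematically in a few lines of Python; this is an illustrative model of the interaction described above, not a description of any particular movement:

```python
# Schematic model of rack-and-snail striking (illustrative only; not a
# description of any particular clock movement).

def rack_fall(hour_on_dial: int) -> int:
    """Teeth the rack falls when released: the snail's 12 steps allow a
    fall of 1 tooth at one o'clock up to 12 teeth at twelve o'clock."""
    return hour_on_dial

def strike(hour_on_dial: int) -> int:
    teeth = rack_fall(hour_on_dial)   # rack drops until it rests on the snail
    strikes = 0
    while teeth > 0:                  # gear H makes one revolution per strike:
        strikes += 1                  # pin S gathers up one rack tooth and the
        teeth -= 1                    # pin wheel trips the hammer once
    return strikes                    # rack fully gathered: striking train locks

assert [strike(h) for h in range(1, 13)] == list(range(1, 13))
```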
Types of striking clocks
Specialized types of striking clocks:
Chiming clock – Strikes on the hours and chimes on the quarter hours, often playing fragments of a tune such as Westminster Quarters.
Repeater – a striking clock which can repeat the strikes at the push of a lever, for telling the time in the dark.
Musical clock – plays tunes on a music box in addition to counting the time
Automaton clock – with mechanically animated figures that periodically perform various displays, usually as a part of the clock striking the hours.
Cuckoo clock – a specific type of automaton clock which originated in Germany, which displays an animated bird and plays imitation birdcalls.
Ship's bell clock – strikes the ship's bells instead of the hours.
Some quartz clocks also contain speakers and sound chips that electronically imitate the sounds of a chiming or striking clock. Other quartz striking clocks use electrical power to strike bells or gongs.
See also
Alarm clock
Repeater watches (horology)
St. Michael's chimes
Thirteenth stroke of the clock
Westminster Chimes
Whittington chimes
Water clock
Notes
Sources and further reading
Antique clockwork marvels from China's Forbidden City (YouTube video)
Articles containing video clips
Campanology
Clocks
Time signals | Striking clock | [
"Physics",
"Technology",
"Engineering"
] | 2,992 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
1,827,851 | https://en.wikipedia.org/wiki/Marine%20debris | Marine debris, also known as marine litter, is human-created solid material that has deliberately or accidentally been released in seas or the ocean. Floating oceanic debris tends to accumulate at the center of gyres and on coastlines, frequently washing aground, when it is known as beach litter or tidewrack. Deliberate disposal of wastes at sea is called ocean dumping. Naturally occurring debris, such as driftwood and drift seeds, are also present. With the increasing use of plastic, human influence has become an issue as many types of (petrochemical) plastics do not biodegrade quickly, as would natural or organic materials. The largest single type of plastic pollution (~10%) and majority of large plastic in the oceans is discarded and lost nets from the fishing industry. Waterborne plastic poses a serious threat to fish, seabirds, marine reptiles, and marine mammals, as well as to boats and coasts.
Dumping, container spillages, litter washed into storm drains and waterways and wind-blown landfill waste all contribute to this problem. This increased water pollution has caused serious negative effects such as discarded fishing nets capturing animals, concentration of plastic debris in massive marine garbage patches, and increasing concentrations of contaminants in the food chain.
In efforts to prevent and mediate marine debris and pollutants, laws and policies have been adopted internationally, with the UN including reduced marine pollution in Sustainable Development Goal 14 "Life Below Water". Depending on relevance to the issues and various levels of contribution, some countries have introduced more specified protection policies. Moreover, some non-profits, NGOs, and government organizations are developing programs to collect and remove plastics from the ocean. However, in 2017 the UN estimated that by 2050 there will be more plastic than fish in the oceans if substantial measures are not taken.
Types
Researchers classify debris as either land- or ocean-based; in 1991, the United Nations Joint Group of Experts on the Scientific Aspects of Marine Pollution estimated that up to 80% of the pollution was land-based, with the remaining 20% originating from catastrophic events or maritime sources. More recent studies have found that more than half of plastic debris found on Korean shores is ocean-based.
A wide variety of man-made objects can become marine debris; plastic bags, balloons, buoys, rope, medical waste, glass and plastic bottles, cigarette stubs, cigarette lighters, beverage cans, polystyrene, lost fishing line and nets, and various wastes from cruise ships and oil rigs are among the items commonly found to have washed ashore. Six-pack rings, in particular, are considered emblematic of the problem.
The U.S. military used ocean dumping for unused weapons and bombs, including ordinary bombs, Unexploded ordnance (UXO), landmines and chemical weapons from at least 1919 until 1970. Millions of pounds of ordnance were disposed of in the Gulf of Mexico and off the coasts of at least 16 states, from New Jersey to Hawaii (although these, of course, do not wash up onshore, and the U.S. is not the only country who has practiced this).
Eighty percent of marine debris is plastic. Plastics accumulate because they typically do not biodegrade as many other substances do. They photodegrade on exposure to sunlight, although they do so only under dry conditions, as water inhibits photolysis. In a 2014 study using computer models, scientists from the group 5 Gyres estimated that 5.25 trillion pieces of plastic weighing 269,000 tons were dispersed in the oceans, in similar amounts in the Northern and Southern Hemispheres.
Persistent industrial marine debris
Some materials used in industrial activities do not readily degrade; they persist in the environment and tend to accumulate over time. The activities concerned include fishing, boating, and aquaculture operations that harvest or use resources in the marine environment and may lose or discard gear, materials, machinery or solid wastes from industrial processes into the water or onto shorelines. The debris can be anything as large as a fishing boat or as small as a particle from a Styrofoam lobster float. In 2003, a study was conducted to identify the types, amounts, sources, and effects of persistent industrial marine debris in the coastal waters and along the shores of Charlotte County, New Brunswick, and to examine any relationship between the amount and types of persistent industrial marine debris and the types and numbers of industrial operations nearby. Materials like plastic or foam can break down into smaller particles that may look like small sea creatures to wildlife such as birds, cetaceans, and fish, which may eat them. Indigestible material may accumulate in the gut, creating blockages or a false sense of fullness and eventually causing death from lack of appropriate nutrient intake.
Ghost nets
Macroplastic
Microplastics
Deep-sea debris
Marine debris is found on the floor of the Arctic Ocean. Although an increasing number of studies have focused on plastic debris accumulating on coasts, in off-shore surface waters, and ingested by marine organisms living in the upper levels of the water column, there is limited information on debris in the mesopelagic and deeper layers. Studies to date have relied on bottom sampling and on video observation via remotely operated vehicles (ROVs) and submersibles, and they are mostly limited to one-off projects that do not extend long enough to show significant effects of deep-sea debris over time. The research so far has shown that debris on the deep-ocean floor is indeed a product of anthropogenic activities, and plastic has frequently been observed in the deep sea, especially in areas offshore of heavily populated regions, such as the Mediterranean.
Litter made from diverse materials that are denser than surface water (such as glass, metals and some plastics) has been found to spread over the floor of seas and open oceans, where it can become entangled in corals and interfere with other sea-floor life, or even become buried under sediment, making clean-up extremely difficult, especially given the wide area of its dispersal compared with shipwrecks. Plastics that are normally buoyant can become negatively buoyant and sink with the adherence of phytoplankton and the aggregation of other organic particles. Other oceanic processes that affect circulation, such as coastal storms and offshore convection, play a part in transferring large volumes of particles and debris. Submarine topographic features can also augment downwelling currents, leading to the retention of microplastics at certain locations.
A Deep-sea Debris database by the Global Oceanographic Data Center of the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), showing thirty years of photos and samples of marine debris since 1983, was made public in 2017. From the 5,010 dives in the database, using both ROVs and deep-sea submersibles, 3,425 man-made debris items were counted. The two most significant types of debris were macro-plastic, making up 33% of the debris found – 89% of which was single-use – and metal, making up 26%. Plastic debris was found at the bottom of the Mariana Trench, at a depth of 10,898m, and plastic bags were found entangled in hydrothermal vent and cold seep communities.
Garbage patches (gyres)
Sources
The 10 largest emitters of oceanic plastic pollution worldwide are, from the most to the least, China, Indonesia, Philippines, Vietnam, Sri Lanka, Thailand, Egypt, Malaysia, Nigeria, and Bangladesh, largely through the rivers Yangtze, Indus, Yellow, Hai, Nile, Ganges, Pearl, Amur, Niger, and the Mekong, and accounting for "90 percent of all the plastic that reaches the world's oceans."
An estimated 10,000 containers at sea each year are lost by container ships, usually during storms. One spillage occurred in the Pacific Ocean in 1992, when thousands of rubber ducks and other toys (now known as the "Friendly Floatees") went overboard during a storm. The toys have since been found all over the world, providing a better understanding of ocean currents. Similar incidents have happened before, such as when Hansa Carrier dropped 21 containers (with one notably containing buoyant Nike shoes).
In 2007, MSC Napoli beached in the English Channel, dropping hundreds of containers, most of which washed up on the Jurassic Coast, a World Heritage Site. A 2021 study following a 2014 loss of a container carrying printer cartridges calculated that some cartridges had dispersed at an average speed of between 6 cm and 13 cm per second. A 1997 accident involving the ship Tokio Express off the British coast resulted in the loss of a cargo container holding 5 million Lego pieces. Some of the pieces became valued among collectors, who searched the beaches for Lego dragons. The spill also provided valuable insight into marine plastic degradation.
In Halifax Harbour, Nova Scotia, 52% of items were generated by recreational use of an urban park, 14% from sewage disposal, and only 7% from shipping and fishing activities. Around four-fifths of oceanic debris is rubbish blown onto the water from landfills and urban runoff.
Some studies show that marine debris may be dominant in particular locations. For example, a 2016 study of Aruba found that debris on the windward side of the island was predominantly marine debris from distant sources. In 2013, debris from six beaches in Korea was collected and analyzed: 56% was found to be "ocean-based" and 44% "land-based".
In the 1987 Syringe Tide, medical waste washed ashore in New Jersey after having been blown from Fresh Kills Landfill. On the remote sub-Antarctic island of South Georgia, fishing-related debris, approximately 80% plastics, are responsible for the entanglement of large numbers of Antarctic fur seals.
Thirteen companies have an individual contribution of 1% or more of the total branded plastic observed in the audit events: The Coca-Cola Company, PepsiCo, Nestlé, Danone, Altria, Bakhresa Group, Wings, Unilever, Mayora Indah, Mondelez International, Mars, Incorporated, Salim Group, and British American Tobacco. All 13 companies produce food, beverage, or tobacco products. The top company, The Coca-Cola Company, was responsible for 11% (CI95% = 10 to 12%), significantly greater than any other company. The top 5 companies were responsible for 24% of the branded plastic; 56 companies were responsible for greater than 50% of the branded plastic; and 19,586 companies were responsible for all of the branded plastic. The contributions of the top companies may be an underestimation because there were brands that were not attributed to a company, and there were many unbranded objects.
Environmental impacts
Not all anthropogenic artifacts placed in the oceans are harmful. Iron and concrete structures typically do little damage to the environment because they generally sink to the bottom and become immobile, and at shallow depths they can even provide scaffolding for artificial reefs. Ships and subway cars have been deliberately sunk for that purpose.
Additionally, hermit crabs have been known to use pieces of beach litter as a shell when they cannot find an actual seashell of the size they need.
Impacts from plastic pollution
Many animals that live on or in the sea consume flotsam by mistake, as it often looks similar to their natural prey. Overall, 1288 marine species are known to ingest plastic debris, with fish making up the largest fraction. Bulky plastic debris may become permanently lodged in the digestive tracts of these animals, blocking the passage of food and causing death through starvation or infection. Tiny floating plastic particles also resemble zooplankton, which can lead filter feeders to consume them and cause them to enter the ocean food chain. In addition, plastic in the marine environment that contaminates the food chain can have repercussions on the viability of fish and shellfish species.
COVID-19 pandemic impacts
In Kenya, the COVID-19 pandemic affected the amount of marine debris found on beaches, with around 55% being pandemic-related trash items. Although pandemic-related trash has shown up along the beaches of Kenya, it has not made its way into the water. The reduction of litter in the ocean could be a result of the closing of beaches and the lack of movement during the pandemic, meaning less trash was likely to end up in the ocean. Additional impacts of the COVID-19 pandemic have been seen in Hong Kong, where disposable masks have ended up along the beaches of the Soko Islands. This may be attributed to the increased production of medical products (masks and gloves) during the pandemic, leading to a rise in unconventional disposal of these products.
Removal
Coastal and river clean ups
Techniques for collecting and removing marine (or riverine) debris include the use of debris skimmer boats. Devices such as these can be used where floating debris presents a danger to navigation. For example, the US Army Corps of Engineers removes 90 tons of "drifting material" from San Francisco Bay every month. The Corps has been doing this work since 1942, when a seaplane carrying Admiral Chester W. Nimitz collided with a piece of floating debris and sank, costing the life of its pilot. The Ocean Cleanup has also created a vessel for cleaning up riverine debris, called the Interceptor. Once debris becomes "beach litter", it is gathered by hand and by specialized beach-cleaning machines.
There are also projects that encourage fishing boats to bring ashore any litter they accidentally haul up while fishing.
Elsewhere, "trash traps" are installed on small rivers to capture waterborne debris before it reaches the sea. For example, South Australia's Adelaide operates a number of such traps, known as "trash racks" or "gross pollutant traps" on the Torrens River, which flows (during the wet season) into Gulf St Vincent.
In lakes or near the coast, debris can also be removed manually. Project AWARE, for example, promotes the idea of dive clubs cleaning up litter, such as during diving exercises.
Once a year there is a diving marine debris removal operation in Scapa Flow in Orkney, run by Ghost Fishing UK, funded by World Animal Protection and Fat Face Foundation.
Cleanup of marine debris can be stymied by inadequate collaboration across levels of government, and a patchwork of regulatory authorities (responsibility often differs for the ocean surface, the seabed, and the shore). For example, there are an estimated 1600 abandoned and derelict boats in the waters of British Columbia. In 2019 Canada's federal government passed legislation to make it illegal to abandon a vessel but enforcement is hampered because it is often difficult to determine who owns an abandoned boat since owners are not required to have a license – licensing is a provincial government responsibility. The Victoria-based non-profit Dead Boats Disposal Society notes that lack of enforcement means abandoned boats are often left to sink, which increases the cleanup cost and compounds the environmental hazard (due to seepage of fuel, oil, plastics, and other pollutants).
Mid-ocean clean ups
On the sea, the removal of artificial debris (i.e. plastics) is still in its infancy. However, some projects have been started which used ships with nets (Ocean Voyages Institute/Kaisei 2009 & 2010 and New Horizon 2009) to catch some plastics, primarily for research purposes. There is also Bluebird Marine System's SeaVax which was solar- and wind-powered and had an onboard shredder and cargo hold. The Sea Cleaners' Manta ship is similar in concept.
Another method to gather artificial litter has been proposed by The Ocean Cleanup's Boyan Slat. He suggested using platforms with arms to gather the debris, situated inside the current of gyres. The SAS Ocean Phoenix ship is somewhat similar in design.
In June 2019, Ocean Voyages Institute conducted a cleanup in the North Pacific Subtropical Convergence Zone utilizing GPS trackers and existing maritime equipment, setting the record for the largest mid-ocean cleanup accomplished in the North Pacific Gyre and removing over 84,000 pounds of polymer nets and consumer plastic trash from the ocean.
In May/June 2020, Ocean Voyages Institute conducted a cleanup expedition in the gyre and set a new record for the largest mid-ocean cleanup accomplished in the North Pacific Gyre, removing over 170 tons (340,000 pounds) of consumer plastics and ghost nets from the ocean. Utilizing custom-designed GPS satellite trackers deployed by vessels of opportunity, Ocean Voyages Institute is able to accurately track ghost nets and send cleanup vessels to remove them. The tracker technology is being combined with satellite imagery to locate plastic trash and ghost nets in real time, which will greatly increase cleanup capacity and efficiency.
Another issue is that removing marine debris from the ocean can potentially cause more harm than good. Cleaning up microplastics could also accidentally remove plankton, the main lower-level food group of the marine food chain and the source of over half of the photosynthesis on Earth. Some of the most efficient and cost-effective ways to reduce the amount of plastic entering the oceans are to avoid single-use plastics and plastic-bottled drinks such as bottled water, to use reusable shopping bags, and to buy products with reusable packaging.
Laws and treaties
The ocean is a global common, so negative externalities of marine debris are not usually experienced by the producer. In the 1950s, the importance of government intervention with marine pollution protocol was recognized at the First Conference on the Law of the Sea.
Ocean dumping is controlled by international law, including:
The London Convention (1972) – a United Nations agreement to control ocean dumping. This Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter consisted of twenty-two articles addressing the expectations of contracting parties. The three annexes defined many compounds, substances, and materials that are unacceptable to deposit into the ocean; examples of such matter include mercury compounds, lead, cyanides, and radioactive wastes.
MARPOL 73/78 – a convention designed to minimize pollution of the seas, including dumping, oil, and exhaust pollution. The original MARPOL convention did not consider dumping from ships, but it was revised in 1978 to include restrictions on marine vessels.
UNCLOS – signed in 1982, but effective in 1994, United Nations Convention on the Law of the Sea emphasized the importance of protecting the entire ocean and not only specified coastal regions. UNCLOS enforced restrictions on pollution, including a stress on land-based sources.
Australian law
One of the earliest anti-dumping laws was Australia's Beaches, Fishing Grounds and Sea Routes Protection Act 1932, which prohibited the discharge of "garbage, rubbish, ashes or organic refuse" from "any vessel in Australian waters" without prior written permission from the federal government. It also required permission for scuttling. The act was passed in response to large amounts of garbage washing up on the beaches of Sydney and Newcastle from vessels outside the reach of local governments and the New South Wales government. It was repealed and replaced by the Environment Protection (Sea Dumping) Act 1981, which gave effect to the London Convention.
European law
In 1972 and 1974, conventions were held in Oslo and Paris respectively, and resulted in the passing of the OSPAR Convention, an international treaty controlling marine pollution in the north-east Atlantic Ocean. The Barcelona Convention protects the Mediterranean Sea. The Water Framework Directive of 2000 is a European Union directive committing EU member states to free inland and coastal waters from human influence. In the United Kingdom, the Marine and Coastal Access Act 2009 is designed to "ensure clean healthy, safe, productive and biologically diverse oceans and seas, by putting in place better systems for delivering sustainable development of marine and coastal environment".
In 2019, the EU parliament voted for an EU-wide ban on single-use plastic products such as plastic straws, cutlery, plates, and drink containers, polystyrene food and drink containers, plastic drink stirrers and plastic carrier bags and cotton buds. The law will take effect in 2021.
United States law
In the waters of the United States, there have been many observed consequences of pollution, including hypoxic zones, harmful algal blooms, and threatened species. In 1972, the United States Congress passed the Ocean Dumping Act, giving the Environmental Protection Agency power to monitor and regulate the dumping of sewage sludge, industrial waste, radioactive waste, and biohazardous materials into the nation's territorial waters. The Act was amended sixteen years later to include medical wastes. It is illegal to dispose of any plastic in US waters.
Ownership
Property law, admiralty law and the law of the sea may be of relevance when lost, mislaid, and abandoned property is found at sea. Salvage law rewards salvors for risking life and property to rescue the property of another from peril. On land the distinction between deliberate and accidental loss led to the concept of a "treasure trove". In the United Kingdom, shipwrecked goods should be reported to a Receiver of Wreck, and if identifiable, they should be returned to their rightful owner.
Activism
A large number of groups and individuals are active in preventing or educating about marine debris. For example, 5 Gyres is an organization aimed at reducing plastics pollution in the oceans, and was one of two organizations that recently researched the Great Pacific Garbage Patch. Heal the Bay is another organization, focusing on protecting California's Santa Monica Bay, by sponsoring beach cleanup programs along with other activities. Marina DeBris is an artist focusing most of her recent work on educating people about beach trash.
Interactive sites like Adrift demonstrate where marine plastic is carried, over time, on the world's ocean currents.
On 11 April 2013, in order to create awareness, artist Maria Cristina Finucci founded the Garbage Patch State at UNESCO in Paris, in front of Director-General Irina Bokova. It was the first of a series of events under the patronage of UNESCO and of the Italian Ministry of the Environment.
Forty-eight plastics manufacturers from 25 countries, members of the Global Plastics Associations for Solutions on Marine Litter, have pledged to help prevent marine debris and to encourage recycling.
Mitigation
Marine debris is a widespread problem and is not only the result of activities in coastal regions.
Plastic debris from inland states comes from two main sources: ordinary litter and materials from open dumps and landfills that blow or wash into inland waterways and wastewater outflows. The refuse finds its way from inland waterways, rivers, streams, and lakes to the ocean. Though ocean and coastal area cleanups are important, it is crucial to address plastic waste that originates from inland and landlocked states.
At the systems level, there are various ways to reduce the amount of debris entering our waterways:
Improve waste transportation to and from sites by utilizing closed container storage and shipping
Restrict open waste facilities near waterways
Promote the use of refuse-derived fuels. Used plastic with low residual value often does not get recycled and is more likely to leak into the ocean. However, turning these unwanted plastics that would otherwise stay in landfills into refuse-derived fuels allows for further use; they can be used as supplemental fuels at power plants
Improve recovery rates for plastic (in 2012, the United States generated 11.46 million tons of plastic waste, of which only 6.7% was recovered)
Adapt Extended Producer Responsibility strategies to make producers responsible for product management when products and their packaging become waste; encourage reusable product design to minimize negative impacts on the environment.
Ban the use of cigarette filters and establish a deposit-system for e-cigarettes (similar to the one used for propane canisters)
Consumers can help to reduce the amount of plastic entering waterways by reducing their usage of single-use plastics, avoiding products containing microbeads, and participating in river, lake, or beach cleanups.
See also
Citizen Science
Flotsam and jetsam
Kamilo Beach
Marine pollution
National Cleanup Day
Plastic pollution
Plastic-eating organisms
Prevented Ocean Plastic
Project Kaisei
Waste management
World Cleanup Day
References
External links
United Nations Environment Programme Marine Litter Publications
UNEP Year Book 2011: Emerging Issues in Our Global Environment, "Plastic debris", pp. 21–34.
NOAA Marine Debris Program – US National Oceanic and Atmospheric Administration
Marine Research, Education and Restoration – Algalita Marine Research Foundation
UK Marine Conservation Society
Harmful Marine Debris – Australian Government
High Seas GhostNet Survey – US National Oceanic and Atmospheric Administration
Social & Economic Costs of Marine Debris – NOAA Economics
Tiny Plastic Bits Too Small To See Are Destroying The Oceans, Business Insider
Ghost net remediation program – NASA, NOAA and ATI collaborating to detect ghost nets
Aquatic ecology
Ecotoxicology
Environmental impact of fishing
Oceanographical terminology
Waste
Water pollution | Marine debris | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 5,025 | [
"Ocean pollution",
"Water pollution",
"Materials",
"Ecosystems",
"Aquatic ecology",
"Waste",
"Matter"
] |
1,827,917 | https://en.wikipedia.org/wiki/Callippic%20cycle | The Callippic cycle (or Calippic) is a particular approximate common multiple of the tropical year and the synodic month, proposed by Callippus in 330 BC. It is a period of 76 years, as an improvement of the 19-year Metonic cycle.
Description
A century before Callippus, Meton had described a cycle in which 19 years equals 235 lunations, and judged it to be 6,940 days. This exceeds 235 lunations by almost a third of a day, and 19 tropical years by four tenths of a day. It implicitly gave the solar year a duration of 6,940/19 = 365 + 5/19 = 365 + 1/4 + 1/76 days = 365 d 6 h 18 min 56 s.
Callippus accepted the 19-year cycle, but held that the duration of the year was more closely 365 + 1/4 days (= 365 d 6 h), so he multiplied the 19-year cycle by 4 to obtain an integer number of days, and then omitted 1 day from the last 19-year cycle. Thus, he computed a cycle of 76 years that consists of 940 lunations and 27,759 days, which has been named the Callippic cycle after him. The cycle's error has been computed as one full day in 553 years, or 4.95 parts per million.
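A worked check of the figures, using only numbers quoted in this article (the implied month length on the last line is computed here, not stated above):

```latex
4 \times 6940 - 1 = 27\,759 \ \text{days}, \qquad 4 \times 235 = 940 \ \text{lunations},
\frac{27\,759}{76} = 365\tfrac{1}{4} \ \text{days per year}, \qquad
\frac{27\,759}{940} \approx 29.53085 \ \text{days per lunation}.
```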
The first year of the first Callippic cycle began at the summer solstice of 330 BC (28 June in the proleptic Julian calendar), and was subsequently used by later astronomers. In Ptolemy's Almagest, for example, he cites (Almagest VII 3, H25) observations by Timocharis during the 47th year of the first Callippic cycle (283 BC), when on the eighth of Anthesterion, the Pleiades star cluster was occulted by the Moon.
The Callippic calendar originally used the names of months from the Attic calendar. Later astronomers, such as Hipparchus, preferred other calendars, including the ancient Egyptian calendar. Also Hipparchus invented his own Hipparchic calendar cycle as an improvement upon the Callippic cycle. Ptolemy's Almagest provided some conversions between the Callippic and Egyptian calendars, such as that Anthesterion 8, 47th year of the first Callippic period was equivalent to day 29 of the month of Athyr, during year 465 of Nabonassar. However, the original, complete form of the Callippic calendar is no longer known.
Equivalents
One Callippic cycle corresponds to:
940 synodic months
1,020.084 draconic months
80.084 eclipse years (160 eclipse seasons)
1,007.410 anomalistic months
The 80 eclipse years means that if there is a solar eclipse (or lunar eclipse), then after one Callippic cycle a New Moon (resp. Full Moon) will take place at the same node of the orbit of the Moon, and under these circumstances another eclipse can occur.
References
Further reading
Jean Meeus, Mathematical Astronomy Morsels, Willmann-Bell, Inc., 1997 (Chapter 9, p. 51, Table 9A: Some eclipse periodicities)
External links
Online Callippic calendar converter as used in Ptolemy's Almagest
Ancient Greek astronomy
Calendars | Callippic cycle | [
"Physics"
] | 676 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
1,827,918 | https://en.wikipedia.org/wiki/Hipparchic%20cycle | The Greek astronomer Hipparchus introduced three cycles that have been named after him in later literature.
Calendar cycle
Hipparchus proposed a correction to the 76-year-long Callippic cycle, which itself was proposed as a correction to the 19-year-long Metonic cycle. He may have published it in the book "On the Length of the Year" (Περὶ ἐνιαυσίου μεγέθους), which has since been lost.
From solstice observations, Hipparchus found that the tropical year is about 1/300 of a day shorter than the 365 + 1/4 days that Callippus used (see Almagest III.1). So he proposed to make a 1-day correction after 4 Callippic cycles, i.e. 304 years = 3,760 lunations = 111,035 days.
Error implicit in the cycle
This is a very close approximation for an integer number of lunations in an integer number of days (with an error of only 0.014 days). However, it is in fact 1.37 days longer than 304 tropical years. The mean tropical year is actually about 1/128 day (11 minutes 15 seconds) shorter than the Julian calendar year of 365 + 1/4 days. These differences cannot be corrected with any cycle that is a multiple of the 19-year cycle of 235 lunations; it is an accumulation of the mismatch between years and months in the basic Metonic cycle, and the lunar months need to be shifted systematically by a day with respect to the solar year (i.e. the Metonic cycle itself needs to be corrected) after every 228 years.
Indeed, from the values of the tropical year (365.2421896698 days) and the synodic month (29.530588853) cited in the respective articles of Wikipedia, it follows that the length of 228=12×19 tropical years is about 83,275.22 days, shorter than the length of 12×235 synodic months—namely about 83,276.26 days—by one day plus about one hour. In fact, an even better correction would be two days every 437 years, rather than one day every 228 years. The length of 437=23×19 tropical years (about 159,610.837 days) is shorter than that of 23×235 synodic months (about 159,612.833 days) by almost exactly two days, up to only six minutes.
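A minimal Python check of this arithmetic, using the tropical-year and synodic-month values quoted above (the function and constant names are mine):

```python
# Check of the correction intervals quoted above, using the cited mean values.
TROPICAL_YEAR = 365.2421896698  # days
SYNODIC_MONTH = 29.530588853    # days

def mismatch(n_metonic: int) -> float:
    """Days by which n*235 lunations exceed n*19 tropical years."""
    return n_metonic * 235 * SYNODIC_MONTH - n_metonic * 19 * TROPICAL_YEAR

print(mismatch(12))  # 228 years: ~1.04 days (one day plus about an hour)
print(mismatch(23))  # 437 years: ~2.00 days (two days, to within ~6 minutes)
```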
The durations between the equinoxes (and solstices) are not equal, and will cycle around each other over millennia. There are additional subtle and some imperfectly understood rates of change in both the lunar and solar cycles. The values above (such as the tropical year) depend upon the chosen zero point of the tropical year (such as the March equinox or some other astronomical date); year lengths measured from different zero points deviate from each other by minutes.
Eclipse cycles
An eclipse cycle constructed by Hipparchus is described in Ptolemy's Almagest IV.2.
Actually, dividing 126,007 days and one hour by 4,267 would give 29;31,50,8,9 in sexagesimal, whereas 29;31,50,8,20 was already used in Babylonian astronomy, possibly found by Kidinnu in the fourth century BC. This period is a multiple of a Babylonian unit of time equal to one eighteenth of a minute (3 1/3 seconds), which in sexagesimal is 0;0,0,8,20 days. (The true length of the month, 29.53058885 days, comes to 29;31,50,7,12 in sexagesimal, so the Babylonian value was correct to the nearest 3 1/3-second unit.)
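A short Python sketch reproducing this sexagesimal division (the function name is mine):

```python
# Reproduce the sexagesimal month length quoted above: 126,007 days and one
# hour, divided over 4,267 lunations, expanded in base 60.
def sexagesimal(x: float, places: int = 4) -> list[int]:
    """Integer part followed by base-60 fractional digits (truncated)."""
    digits = [int(x)]
    frac = x - int(x)
    for _ in range(places):
        frac *= 60
        digits.append(int(frac))
        frac -= int(frac)
    return digits

month = (126007 + 1 / 24) / 4267
print(sexagesimal(month))  # [29, 31, 50, 8, 9], i.e. 29;31,50,8,9
```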
Ptolemy points out that if one divides this cycle by 17, one obtains a whole number of synodic months (251) and a whole number of anomalistic months (269).
Franz Xaver Kugler in his Die Babylonische Mondrechnung claimed that the Chaldaeans could have known about this cycle of 251 months, because it falls out of their system of calculating the speed of the moon, seen in a tablet from around 100 BC. In their system, the speed of the moon at new moon varies in a zigzag with a period of one full moon cycle, changing by 36 arc minutes each month over a span of 251 arc minutes, and this implies that after 251 months the pattern repeats, and 269 anomalistic months will have gone by.
So it is possible that Hipparchus constructed his 345-year cycle by multiplying this roughly 20-year (251-month) cycle by 17 so as to closely match an integer number of synodic months (4,267), anomalistic months (4,573), years (345), and days (a bit over 126,007). It is also close to a half-integer number of draconic months (4,630.53...), making it an eclipse period. By comparing his own eclipse observations with Babylonian records from 345 years earlier, he could verify the accuracy of the various periods that the Chaldean astronomers used.
The Hipparchic eclipse cycle is made up of 25 inex minus 21 saros periods. There are only three or four eclipses in a series of eclipses separated by Hipparchic cycles. For example, the solar eclipse of August 21, 2017 was preceded by one in 1672 and will be followed by one in 2362, but there are none before or after these.
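The composition can be checked numerically; the inex and saros durations below are standard approximate mean values (358 and 223 synodic months respectively), supplied here as assumptions rather than taken from this article:

```python
# Numerical check that 25 inex minus 21 saros reproduces the 345-year cycle.
INEX_DAYS = 10571.95   # mean inex, 358 synodic months (approximate)
SAROS_DAYS = 6585.32   # mean saros, 223 synodic months (approximate)

print(25 * INEX_DAYS - 21 * SAROS_DAYS)  # ~126007 days, the Hipparchic cycle
print(25 * 358 - 21 * 223)               # 4267 synodic months
```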
It corresponds to:
4,267 synodic months
4,611.98 sidereal months
4,630.531 draconic months
4,573.002 anomalistic months
727 eclipse seasons
There are other eclipse intervals that also have the properties desired by Hipparchus, for example an interval of 81.2 years (four of the 251-month cycles, or 19 inex minus 26 saros) which is even closer to a whole number of anomalistic months (1076.00056), and almost equally close to a half-integer number of draconic months (1089.5366). The "tritrix" eclipse cycle, consisting of 1743 synodic months, 1891.496 draconic months, or 1867.9970 anomalistic months (140.925 years, equivalent to 3 inex plus 3 saros) is about as accurate as the interval of Hipparchus in terms of anomalistic months, but repeats many more times, around 20. An exceptionally accurate eclipse cycle from this point of view is one of 1154.5 years (43 inex minus 5 saros), which is much closer to a whole number of anomalistic months (15303.00005) than the interval of Hipparchus. At the solar eclipse of October 17, 1781, the moon had an anomaly of 0°, and similar eclipses have occurred every 1154.5 years for more than 4000 years and will continue at least 13,000 more years.
The period of Hipparchus is also accurate in the sense of always having the same length to within an hour. This is due to the fact that it is close to a whole number of anomalistic years as well as to a whole number of anomalistic months. Its average length is actually 126,007.023 days, half an hour less than what Ptolemy says. This is equivalent to 345 Julian years minus 4.227 days (implying that in the Gregorian calendar the date usually goes back by just one or two days, sometimes by three), which is only about 8 days less than 345 anomalistic years. Few eclipse periods are so constant; the semester, for example (six synodic months), can vary by a day in each direction.
Ptolemy says that Hipparchus also came up with a period of 5458 synodic months, equal to 5923 draconic months (441.3 years). This is called the Hipparchian Period, and more recently the Babylonian Period, but the latter is a misnomer as there is no evidence that the Babylonians were aware of it. It is equivalent to 14 inex plus 2 saros periods and therefore repeats many more times than the 345-year cycle. The solar eclipse of July 11, 2010, for example, is the latest in a series that has been going for more than 13,000 years and will continue for more than 8000 more.
References
Ancient Greek astronomy
Time in astronomy
Calendars
"Physics",
"Astronomy"
] | 1,766 | [
"Time in astronomy",
"Calendars",
"Physical quantities",
"Time",
"Spacetime"
] |
1,828,131 | https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer%20experiment | The Davisson–Germer experiment was a 1923–1927 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis, advanced by Louis de Broglie in 1924, of wave-particle duality, and also the wave mechanics approach of the Schrödinger equation. It was an experimental milestone in the creation of quantum mechanics.
History and overview
According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons), which won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy E of the particle was related to the frequency ν of its associated wave by the Planck relation:

E = hν

And the momentum p of the particle was related to its wavelength λ by what is now known as the de Broglie relation:

λ = h / p

where h is the Planck constant.
An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through Barkla's X-ray scattering experiments on crystalline solids.
This suggestion of Elsasser was then communicated by his senior colleague (and later Nobel Prize recipient) Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results of the experiment were explained by Elsasser's proposition. However the initial intention of the Davisson and Germer experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel.
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow moving electrons at a crystalline nickel target. The angular dependence of the reflected electron intensity was measured and was determined to have a similar diffraction pattern as those predicted by Bragg for X-rays; some small, but significant differences were due to the average potential which Hans Bethe showed in his more complete analysis. At the same time George Paget Thomson and his student Alexander Reid independently demonstrated the same effect firing electrons through celluloid films to produce a diffraction pattern, and Davisson and Thomson shared the Nobel Prize in Physics in 1937. The exclusion of Germer from sharing the prize has puzzled physicists ever since. The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton (who won the Nobel Prize for Physics in 1927), established the wave–particle duality hypothesis which was a fundamental step in quantum theory.
Early experiments
Davisson began work in 1921 to study electron bombardment and secondary electron emissions. A series of experiments continued through 1925.
Prior to 1923, Davisson had been working with Charles H. Kunsman on detecting the effects of electron bombardment on tungsten when they noticed that 1% of the electrons bounced straight back to the electron gun in elastic scattering. This small but unexpected result led Davisson to theorize that he could examine the electron configuration of the atom in an analogous manner to how the Rutherford alpha particle scattering had examined the nucleus. They changed to a high vacuum and used nickel along with various other metals with unimpressive results.
In October 1924, when Germer joined the experiment, Davisson’s actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that, because of the small size of electrons, even the smoothest crystal surface would be too rough and thus the electron beam would experience diffuse reflection.
The experiment consisted of firing an electron beam (from an electron gun, an electrostatic particle accelerator) at a nickel crystal, perpendicular to the surface of the crystal, and measuring how the number of reflected electrons varied as the angle between the detector and the nickel surface varied. The electron gun was a heated tungsten filament that released thermally excited electrons, which were then accelerated through an electric potential difference, giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other atoms on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a Faraday cup electron detector that could be moved on an arc path about the crystal was used. The detector was designed to accept only elastically scattered electrons.
During the experiment, air accidentally entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high temperature oven, not knowing that this caused the formerly polycrystalline structure of the nickel to form large single crystal areas with crystal planes continuous over the width of the electron beam.
When they started the experiment again and the electrons hit the surface, they were scattered by nickel atoms in crystal planes (so the atoms were regularly spaced) of the crystal. This, in 1925, generated a diffraction pattern with unexpected and uncorrelated peaks, because the heating had created an area of ten crystal facets. They changed the experiment to a single crystal and started again.
Breakthrough
On his second honeymoon, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in summer 1926. At this meeting, he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used the uncorrelated diffraction curves from Davisson's 1923 research on platinum with Kunsman, using the data as confirmation of the de Broglie hypothesis of which Davisson was unaware.
Davisson then learned that in prior years, other scientists – Walter Elsasser, E. G. Dymond, and Blackett, James Chadwick, and Charles Ellis – had attempted similar diffraction experiments, but were unable to generate low enough vacuums or detect the low-intensity beams needed.
Returning to the United States, Davisson made modifications to the tube design and detector mounting, adding azimuth in addition to colatitude. The experiments that followed generated a strong signal peak at 65 V and an angle θ = 45°. He published a note to Nature titled "The Scattering of Electrons by a Single Crystal of Nickel".
Questions still needed to be answered, and experimentation continued through 1927, because Davisson was now familiar with the de Broglie formula and had designed the test to see if any effect could be discerned for a changed electron wavelength λ, which, according to the de Broglie relationship, they knew should create a peak at 78 V and not the 65 V their paper had shown. Because of their failure to correlate with the de Broglie formula, their paper introduced an ad hoc contraction factor of 0.7, which, however, could only explain eight of the thirteen beams.
By varying the applied voltage to the electron gun, the maximum intensity of electrons diffracted by the atomic surface was found at different angles. The highest intensity was observed at an angle θ = 50° with a voltage of 54 V, giving the electrons a kinetic energy of 54 eV.
As Max von Laue proved in 1912, the periodic crystal structure serves as a type of three-dimensional diffraction grating. The angles of maximum reflection are given by Bragg's condition for constructive interference from an array, Bragg's law

nλ = 2d sin(θ)

for n = 1, θ = 65°, and d = 0.091 nm, the spacing of the crystalline planes of nickel obtained from previous X-ray scattering experiments on crystalline nickel.
According to the de Broglie relation, electrons with kinetic energy of 54 eV have a wavelength of 0.167 nm. The experimental outcome was 0.165 nm via Bragg's law, which closely matched the predictions. As Davisson and Germer state in their 1928 follow-up paper to their Nobel prize winning paper, "These results, including the failure of the data to satisfy the Bragg formula, are in accord with those previously obtained in our experiments on electron diffraction. The reflection data fail to satisfy the Bragg relation for the same reason that the electron diffraction beams fail to coincide with their Laue beam analogues." However, they add, "The calculated wave-lengths are in excellent agreement with the theoretical values of h/mv as shown in the accompanying table." So although electron energy diffraction does not follow the Bragg law, it did confirm de Broglie's theory that particles behave like waves. The full explanation was provided by Hans Bethe, who solved the Schrödinger equation for the case of electron diffraction.
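A short Python check of these two wavelength figures (the physical constants are standard CODATA values, not taken from the article):

```python
# Check of the two wavelengths quoted above: de Broglie for 54 eV electrons
# versus Bragg's law with n = 1, theta = 65 degrees, d = 0.091 nm.
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per electronvolt

E = 54 * eV
lam_db = h / math.sqrt(2 * m_e * E)                    # lambda = h/p, p = sqrt(2mE)
lam_bragg = 2 * 0.091e-9 * math.sin(math.radians(65))  # n*lambda = 2 d sin(theta)

print(lam_db)     # ~1.67e-10 m = 0.167 nm
print(lam_bragg)  # ~1.65e-10 m = 0.165 nm
```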
Davisson and Germer's accidental discovery of the diffraction of electrons was the first direct evidence confirming de Broglie's hypothesis that particles can have wave properties as well.
Davisson's attention to detail, his resources for conducting basic research, the expertise of colleagues, and luck all contributed to the experimental success.
Practical applications
The specific approach used by Davisson and Germer used low energy electrons, what is now called low-energy electron diffraction (LEED). It wasn't until much later that development of experimental methods exploiting ultra-high vacuum technologies (e.g. the approach described by Alpert in 1953) enabled the extensive use of LEED diffraction to explore the surfaces of crystallized elements and the spacing between atoms. Methods where higher energy electrons are used for diffraction in many different ways developed much earlier.
References
External links
Foundational quantum physics
Physics experiments
1927 in science | Davisson–Germer experiment | [
"Physics"
] | 2,043 | [
"Physics experiments",
"Foundational quantum physics",
"Experimental physics",
"Quantum mechanics"
] |
1,828,474 | https://en.wikipedia.org/wiki/Subclass%20%28set%20theory%29 | In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set. One may also call this "inclusion of classes".
That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B. In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set.
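In symbols (a compact restatement of the definition and the specification argument above, with φ the formula assumed to define A):

```latex
A \subseteq B \;\iff\; \forall x\, (x \in A \rightarrow x \in B)
% When A = \{ x : \varphi(x) \} and B is a set, the axiom of specification
% applied to B and \varphi yields the set
A = \{\, x \in B : \varphi(x) \,\}
```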
As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!)
References
Set theory | Subclass (set theory) | [
"Mathematics"
] | 238 | [
"Mathematical logic",
"Set theory"
] |
1,828,681 | https://en.wikipedia.org/wiki/Colatitude | In a spherical coordinate system, a colatitude is the complementary angle of a given latitude, i.e. the difference between a right angle and the latitude. In geography, Southern latitudes are defined to be negative, and as a result the colatitude is a non-negative quantity, ranging from zero at the North pole to 180° at the South pole.
The colatitude corresponds to the conventional 3D polar angle in spherical coordinates, as opposed to the latitude as used in cartography.
Examples
Latitude and colatitude sum up to 90°.
Astronomical use
The colatitude is most useful in astronomy because it refers to the zenith distance of the celestial poles. For example, at latitude 42°N, for Polaris (approximately on the North celestial pole), the distance from the zenith (overhead point) to Polaris is 90° − 42° = 48°.
Adding the declination of a star to the observer's colatitude gives the maximum altitude of that star (its angle from the horizon at culmination or upper transit). For example, if Alpha Centauri is seen with an upper culmination altitude of 72° north (or 108° south) with respect to the observer and its declination is known (60°S), then it can be determined that the observer's colatitude is 72° − 60° = 12° (i.e. the observer's latitude is 90° − 12° = 78°S).
Stars whose declination absolute value exceeds the observer's colatitude in the corresponding hemisphere (see culmination) are called circumpolar because they will never set as seen from that latitude. If the star's declination absolute value exceeds the observer's colatitude in the opposite hemisphere, then it will never be seen from that location. For example, Alpha Centauri will always be visible at night from Perth, Western Australia (32°S), because the colatitude of Perth is 90° − 32° = 58°, and the declination of Alpha Centauri (−60°) has an absolute value of 60, which is greater than 58, in the corresponding hemisphere; on the other hand, the star never rises in Juneau, Alaska (58°N), because Alpha Centauri's declination absolute value of 60 is more than the observer's colatitude (32°) in the opposite hemisphere. Additionally, colatitude is used as part of the Schwarzschild metric in general relativity.
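The visibility rules in this paragraph translate directly into code; this is a minimal sketch with my own function names, using signed degrees (north positive):

```python
# Circumpolarity tests based on the rules stated above.
def colatitude(latitude: float) -> float:
    return 90 - abs(latitude)

def always_visible(declination: float, latitude: float) -> bool:
    """Star never sets: |declination| exceeds the observer's colatitude,
    star and observer in the same hemisphere."""
    same_hemisphere = (declination >= 0) == (latitude >= 0)
    return same_hemisphere and abs(declination) > colatitude(latitude)

def never_visible(declination: float, latitude: float) -> bool:
    """Star never rises: |declination| exceeds the colatitude, opposite hemisphere."""
    opposite = (declination >= 0) != (latitude >= 0)
    return opposite and abs(declination) > colatitude(latitude)

print(always_visible(-60, -32))  # True: Alpha Centauri from Perth (32 S)
print(never_visible(-60, 58))    # True: Alpha Centauri from Juneau (58 N)
```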
References
Spherical geometry
Orientation (geometry) | Colatitude | [
"Physics",
"Mathematics"
] | 513 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
1,828,922 | https://en.wikipedia.org/wiki/Prismatic%20surface | In solid geometry, a prismatic surface is a polyhedral surface
generated by all the lines that are parallel to a given line and that intersect a polygonal chain in a plane that is not parallel to the given line. The polygonal chain is the directrix of the surface; the parallel lines are its generators (or elements). If the directrix is a convex polygon, then the surface is a closed prismatic surface. The part of a closed prismatic surface between two parallel copies of the directrix is a prism.
References
Surfaces
Crystallography | Prismatic surface | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 114 | [
"Materials science",
"Crystallography",
"Condensed matter physics",
"Geometry",
"Geometry stubs"
] |
1,829,571 | https://en.wikipedia.org/wiki/FFC%20Cambridge%20process | The FFC Cambridge process is an electrochemical method for producing Titanium (Ti) from titanium oxide by electrolysis in molten calcium salts.
History
A process for the electrochemical production of titanium through the reduction of titanium oxide in a calcium chloride solution was first described in a 1904 German patent; in 1954, a patent was awarded to Carl Marcus Olson for the production of metals like titanium by reduction of the metal oxide by a molten salt reducing agent in a specific gravity apparatus.
The FFC Cambridge process was developed by George Chen, Derek Fray, and Thomas Farthing between 1996 and 1997 at the University of Cambridge. (The name FFC derives from the first letters of the last names of the inventors). The intellectual property relating to the technology has been acquired by Metalysis, (Sheffield, UK).
Process
The process typically takes place between 900 and 1100 °C, with an anode (typically carbon) and a cathode (the oxide being reduced) in a bath of molten CaCl2. Depending on the nature of the oxide, it will exist at a particular potential relative to the anode, which depends on the quantity of CaO present in the CaCl2.
Cathode reaction mechanism
The electrocalciothermic reduction mechanism may be represented by the following sequence of reactions, where "M" represents a metal to be reduced (typically titanium).

(1) MxOy + y Ca → x M + y CaO

When this reaction takes place on its own, it is referred to as the "calciothermic reduction" (or, more generally, an example of metallothermic reduction). For example, if the cathode was primarily made from TiO then calciothermic reduction would appear as:

TiO + Ca → Ti + CaO

Whilst the cathode reaction can be written as above, it is in fact a gradual removal of oxygen from the oxide. For example, it has been shown that TiO2 does not simply reduce to Ti. It, in fact, reduces through the lower oxides (Ti3O5, Ti2O3, TiO etc.) to Ti.

The calcium oxide produced is then electrolyzed:

(2a) CaO → Ca2+ + O2−

(2b) Ca2+ + 2 e− → Ca (at the cathode)

and

(2c) O2− → 1/2 O2 + 2 e− (discharge at the anode)

Reaction (2b) describes the production of Ca metal from Ca2+ ions within the salt, at the cathode. The Ca would then proceed to reduce the cathode.

The net result of reactions (1) and (2) is simply the reduction of the oxide into metal plus oxygen:

(3) MxOy → x M + (y/2) O2
Anode reaction mechanism
The use of molten CaCl2 is important because this molten salt can dissolve and transport the "O2−" ions to the anode to be discharged. The anode reaction depends on the material of the anode. Depending on the system, it is possible to produce either CO or CO2 or a mixture at the carbon anode:

C + O2− → CO + 2 e−

C + 2 O2− → CO2 + 4 e−
However, if an inert anode is used, such as that of high density SnO2, the discharge of the O2− ions leads to the evolution of oxygen gas. However the use of an inert anode has disadvantages. Firstly, when the concentration of CaO is low, Cl2 evolution at the anode becomes more favourable. In addition, when compared to a carbon anode, more energy is required to achieve the same reduced phase at the cathode. Inert anodes suffer from stability issues.
References
Further reading
External links
YouTube video:Metalysis FFC process
Metalysis Ltd. website
Chemical processes
Electrochemistry
Titanium processes | FFC Cambridge process | [
"Chemistry"
] | 700 | [
"Metallurgical processes",
"Titanium processes",
"Chemical processes",
"Electrochemistry",
"nan",
"Chemical process engineering"
] |
1,829,616 | https://en.wikipedia.org/wiki/Silazane | A silazane is a family of compounds with Si-N bonds. Usually the Si and N have organic substituents. They are analogous to siloxanes, with -NR- (R = alkyl, aryl) replacing -O-.
Examples
One illustrative family of silazanes are derived from tert-butylamine, including (CH3)3SiN(H)tBu and (CH3)2Si(N(H)tBu)2.
More structurally complex is [CH3SiN(H)tBu]2(μ-N(H)tBu)2 with bridging amides.
Reactions
The majority of silazanes are moisture sensitive. With water they convert to silanols or siloxanes.
See also
Phosphazene
Paraformaldehyde
References
Nitrogen(−III) compounds
Silicon compounds | Silazane | [
"Chemistry"
] | 188 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
1,830,354 | https://en.wikipedia.org/wiki/Heat-affected%20zone | In fusion welding, the heat-affected zone (HAZ) is the area of base material, either a metal or a thermoplastic, which is not melted but has had its microstructure and properties altered by welding or heat intensive cutting operations. The heat from the welding process and subsequent re-cooling causes this change from the weld interface to the termination of the sensitizing temperature in the base metal. The extent and magnitude of property change depends primarily on the base material, the weld filler metal, and the amount and concentration of heat input by the welding process.
The thermal diffusivity of the base material plays a large role—if the diffusivity is high, the material cooling rate is high and the HAZ is relatively small. Alternatively, a low diffusivity leads to slower cooling and a larger HAZ. The amount of heat input during the welding process also plays an important role, as processes like oxyfuel welding use high heat input and increase the size of the HAZ. Processes like laser beam welding and electron beam welding give a highly concentrated, limited amount of heat, resulting in a small HAZ. Arc welding falls between these two extremes, with the individual processes varying somewhat in heat input. To calculate the heat input for arc welding procedures, the following formula is used:

Q = (k × V × I × 60) / (S × 1000)

where Q = heat input (kJ/mm), k = efficiency, V = voltage (V), I = current (A), and S = welding speed (mm/min). The efficiency k is dependent on the welding process used, with gas tungsten arc welding having a value of 0.6, shielded metal arc welding and gas metal arc welding having a value of 0.8, and submerged arc welding 1.0.
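A direct translation of the formula into Python; the example numbers are illustrative assumptions, not values from the text:

```python
# Arc-welding heat input per the formula above (units as given in the text).
def heat_input_kj_per_mm(voltage_v: float, current_a: float,
                         speed_mm_per_min: float, efficiency: float) -> float:
    return (60 * voltage_v * current_a) / (1000 * speed_mm_per_min) * efficiency

# Example: gas metal arc welding at 24 V and 200 A, travelling at 300 mm/min,
# with the 0.8 efficiency quoted above (illustrative numbers).
print(heat_input_kj_per_mm(24, 200, 300, 0.8))  # ~0.77 kJ/mm
```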
References
Weman, Klas (2003). Welding processes handbook. New York: CRC Press LLC. .
Welding | Heat-affected zone | [
"Engineering"
] | 382 | [
"Welding",
"Mechanical engineering"
] |
1,830,570 | https://en.wikipedia.org/wiki/Lead%28II%2CIV%29%20oxide | Lead(II,IV) oxide, also called red lead or minium, is the inorganic compound with the formula . A bright red or orange solid, it is used as pigment, in the manufacture of batteries, and rustproof primer paints. It is an example of a mixed valence compound, being composed of both Pb(II) and Pb(IV) in the ratio of two to one.
Structure
Lead(II,IV) oxide is lead(II) orthoplumbate(IV), Pb2[PbO4]. It has a tetragonal crystal structure at room temperature, which then transforms to an orthorhombic (Pearson symbol oP28, space group Pbam, No. 55) form at low temperature (about 170 K). This phase transition only changes the symmetry of the crystal and slightly modifies the interatomic distances and angles.
Preparation
Lead(II,IV) oxide is prepared by calcination of lead(II) oxide (PbO; also called litharge) in air at about 450–480 °C:

6 PbO + O2 → 2 Pb3O4
The resulting material is contaminated with PbO. If a pure compound is desired, the PbO can be removed by treatment with a potassium hydroxide solution, in which PbO dissolves as potassium plumbite while Pb3O4 does not.
Another method of preparation relies on annealing of lead(II) carbonate (cerussite) in air:

6 PbCO3 + O2 → 2 Pb3O4 + 6 CO2
Yet another method is oxidative annealing of white lead:

2 Pb3(CO3)2(OH)2 + O2 → 2 Pb3O4 + 4 CO2 + 2 H2O
In solution, lead(II,IV) oxide can be prepared by the reaction of potassium plumbate with lead(II) acetate, yielding yellow insoluble lead(II,IV) oxide monohydrate, Pb3O4·H2O, which can be turned into the anhydrous form by gentle heating.
Natural minium is uncommon, forming only in extreme oxidizing conditions of lead ore bodies. The best known natural specimens come from Broken Hill, New South Wales, Australia, where they formed as the result of a mine fire.
Reactions
Red lead is virtually insoluble in water and in ethanol. However, it is soluble in hydrochloric acid present in the stomach, and is therefore toxic when ingested. It also dissolves in glacial acetic acid and a diluted mixture of nitric acid and hydrogen peroxide.
When heated to 500 °C, it decomposes to lead(II) oxide and oxygen (2 Pb3O4 → 6 PbO + O2). At 580 °C, the reaction is complete.
Nitric acid dissolves the lead(II) oxide component, leaving behind the insoluble lead(IV) oxide:

Pb3O4 + 4 HNO3 → 2 Pb(NO3)2 + PbO2 + 2 H2O
With iron oxides and with elemental iron, lead(II,IV) oxide forms insoluble iron(II) and iron(III) plumbates, which is the basis of the anticorrosive properties of lead-based paints applied to iron objects.
Use
Red lead has been used as a pigment for primer paints for iron objects. Due to its toxicity, its use is being limited. It finds limited use in some amateur pyrotechnics as a delay charge and was used in the past in the manufacture of dragon's egg pyrotechnic stars.
Red lead is used as a curing agent in some polychloroprene rubber compounds. It is used in place of magnesium oxide to provide better water resistance properties.
Red lead was used for engineer's scraping before being supplanted by engineer's blue, although red lead still offers more accurate markings since it does not flow as readily as engineer's blue under pressure.
It is also used as an adulterating agent in turmeric powder.
Physiological effects
When inhaled, lead(II,IV) oxide irritates the lungs. In case of high dose, the victim experiences a metallic taste, chest pain, and abdominal pain. When ingested, it is dissolved in the gastric acid and absorbed, leading to lead poisoning. High concentrations can be absorbed through skin as well, and it is important to follow safety precautions when working with lead-based paint.
Long-term contact with lead(II,IV) oxide may lead to accumulation of lead compounds in organisms, with development of symptoms of acute lead poisoning. Chronic poisoning displays as agitation, irritability, vision disorders, hypertension, and a grayish facial hue.
Lead(II,IV) oxide was shown to be carcinogenic for laboratory animals. Its carcinogenicity for humans has not been proven.
History
This compound's Latin name minium originates from the Minius, a river in northwest Iberia where it was first mined.
Lead(II,IV) oxide was used as a red pigment in ancient Rome, where it was prepared by calcination of white lead. In the ancient and medieval periods it was used as a pigment in the production of illuminated manuscripts, and gave its name to the minium or miniature, a style of picture painted with the colour.
Made into a paint with linseed oil, red lead was used as a durable paint to protect exterior ironwork. In 1504 the portcullis at Stirling Castle in Scotland was painted with red lead, as were cannons including Mons Meg.
As a finely divided powder, it was also sprinkled on dielectric surfaces to study Lichtenberg figures.
In traditional Chinese medicine, red lead is used to treat ringworms and ulcerations, though the practice is limited due to its toxicity. Also, azarcón, a Mexican folk remedy for gastrointestinal disorders, contains up to 95% lead(II,IV) oxide.
It was also used before the 18th century as medicine.
See also
Lead paint
Lead(II) oxide, PbO
Lead(IV) oxide,
List of inorganic pigments
Minium (mineral)
Minium (pigment)
References
External links
National Pollutant Inventory - Lead and Lead Compounds Fact Sheet
Minium mineral data
Corrosion inhibitors
Inorganic pigments
Mixed valence compounds
Oxides
Pyrotechnic oxidizers | Lead(II,IV) oxide | [
"Chemistry"
] | 1,189 | [
"Mixed valence compounds",
"Inorganic compounds",
"Oxides",
"Salts",
"Inorganic pigments",
"Corrosion inhibitors",
"Process chemicals"
] |
1,830,803 | https://en.wikipedia.org/wiki/Motorola%2068881 | The Motorola 68881 and Motorola 68882 are floating-point units (FPUs) used in some computer systems in conjunction with Motorola's 32-bit 68020 or 68030 microprocessors. These coprocessors are external chips, designed before floating point math became standard on CPUs. The Motorola 68881 was introduced in 1984. The 68882 is a higher performance version produced later.
Overview
The 68020 and 68030 CPUs were designed with the separate 68881 chip in mind. Their instruction sets reserved the "F-line" instructions – that is, all opcodes beginning with the hexadecimal digit "F" could either be forwarded to an external coprocessor or be used as "traps" which would throw an exception, handing control to the computer's operating system. If an FPU is not present in the system, the OS would then either call an FPU emulator to execute the instruction's equivalent using 68020 integer-based software code, return an error to the program, terminate the program, or crash and require a reboot.
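The dispatch rule described above can be sketched in a few lines; this is an illustrative model, not actual hardware behavior, and the opcode value in the demo is just an example F-line pattern:

```python
# Illustrative model of 68020 F-line dispatch: opcodes whose top four bits
# are 1111 (0xF) are coprocessor instructions. With an FPU fitted they go to
# the 68881/68882; without one they raise the "Line 1111" exception, which
# the OS can service with a software FPU emulator, an error, or termination.
def dispatch(opcode: int, fpu_present: bool) -> str:
    if (opcode & 0xF000) == 0xF000:  # F-line instruction
        if fpu_present:
            return "forward to 68881/68882 coprocessor interface"
        return "raise Line-1111 exception (emulate in software or fail)"
    return "execute on the integer core"

print(dispatch(0xF200, fpu_present=False))  # example F-line opcode, no FPU
```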
Architecture
The 68881 has eight 80-bit data registers (a 64-bit mantissa plus a sign bit, and a 15-bit signed exponent). It allows seven different modes of numeric representation, including single-precision floating point, double-precision floating point, extended-precision floating point, integers as 8-, 16- and 32-bit quantities and a floating-point Binary-coded decimal format. The binary floating point formats are as defined by the IEEE 754 floating-point standard. It was designed specifically for floating-point math and is not a general-purpose CPU. For example, when an instruction requires any address calculations, the main CPU handles them before the 68881 takes control.
The CPU/FPU pair are designed such that both can run at the same time. When the CPU encounters a 68881 instruction, it hands the FPU all operands needed for that instruction, and then the FPU releases the CPU to go on and execute the next instruction.
68882
The 68882 is an improved version of the 68881, with better pipelining, and eventually available at higher clock speeds. Its instruction set is exactly the same. Motorola claimed in some marketing literature that it executes some instructions 40% faster than a 68881 at the same clock speed, though this did not reflect typical performance, as seen by its more modest improvement in the table below. The 68882 is pin compatible with the 68881 and can be used as a direct replacement in most systems. The most important software incompatibility is that the 68882 uses a larger FSAVE state frame, which affects UNIX and other preemptive multitasking OSes that had to be modified to allocate more space for it.
Usage
The 68881 or 68882 were used in the Sun Microsystems Sun-3 workstations, IBM RT PC workstations, Apple Computer Macintosh II family, NeXT Computer, Sharp X68000, Amiga 3000, Convergent Technologies MightyFrame, Atari Mega STE, TT, and Falcon. Some third-party Amiga and Atari products used the 68881 or 68882 as a memory-mapped peripheral to the 68000.
Versions
68881
155,000 transistors on-chip
12 MHz version
16 MHz version ran at 160 kFLOPS
20 MHz version ran at 192 kFLOPS
25 MHz version ran at 240 kFLOPS
68882
176,000 transistors on-chip
25 MHz version ran at 264 kFLOPS
33 MHz version ran at 352 kFLOPS
40 MHz version ran at 422 kFLOPS
50 MHz version ran at 528 kFLOPS
These statistics came from the comp.sys.m68k FAQ. No statistics are listed for the 16 MHz and 20 MHz 68882, though these chips were indeed produced.
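An illustrative calculation from the figures above puts the 68882's per-clock advantage in perspective:

# kFLOPS figures from the comp.sys.m68k FAQ quoted above
mc68881_25mhz = 240e3   # FLOPS
mc68882_25mhz = 264e3   # FLOPS
clock = 25e6            # Hz

print(mc68881_25mhz / clock)   # 0.00960 FLOP per cycle
print(mc68882_25mhz / clock)   # 0.01056 FLOP per cycle, i.e. ~10% faster per clock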
Legacy
Starting with the Motorola 68040, floating point support was included in the CPU itself.
References
Notes
freescale.com - Motorola MC68000 Family Programmer's Reference Manual
Coprocessors
Floating point
Integrated circuits
68k architecture | Motorola 68881 | [
"Technology",
"Engineering"
] | 889 | [
"Computer engineering",
"Integrated circuits"
] |
17,278,619 | https://en.wikipedia.org/wiki/Aluminium%20monochloride | Aluminium monochloride, or chloridoaluminium, is the metal halide with the formula AlCl. Aluminium monochloride as a molecule is thermodynamically stable only at high temperature and low pressure. This compound is produced as a step in the Alcan process to smelt aluminium from an aluminium-rich alloy. When the alloy is placed in a reactor that is heated to 1,300 °C and mixed with aluminium trichloride, a gas of aluminium monochloride is produced:

AlCl3 + 2 Al → 3 AlCl

Upon cooling to 900 °C it then disproportionates back into aluminium melt and aluminium trichloride:

3 AlCl → AlCl3 + 2 Al
This molecule has been detected in the interstellar medium, where molecules are so dilute that intermolecular collisions are unimportant.
See also
Aluminium monofluoride
Aluminium monobromide
Aluminium monoiodide
References
Aluminium(I) compounds
Chlorides
Metal halides
Diatomic molecules | Aluminium monochloride | [
"Physics",
"Chemistry"
] | 192 | [
"Chlorides",
"Inorganic compounds",
"Molecules",
"Salts",
"Inorganic compound stubs",
"Metal halides",
"Diatomic molecules",
"Matter"
] |
17,279,558 | https://en.wikipedia.org/wiki/Carbon%20nanotubes%20in%20photovoltaics | Organic photovoltaic devices (OPVs) are fabricated from thin films of organic semiconductors, such as polymers and small-molecule compounds, and are typically on the order of 100 nm thick. Because polymer based OPVs can be made using a coating process such as spin coating or inkjet printing, they are an attractive option for inexpensively covering large areas as well as flexible plastic surfaces. A promising low cost alternative to conventional solar cells made of crystalline silicon, there is a large amount of research being dedicated throughout industry and academia towards developing OPVs and increasing their power conversion efficiency.
Single wall carbon nanotubes as light harvesting media
Single wall carbon nanotubes possess a wide range of direct bandgaps matching the solar spectrum, strong photoabsorption from the infrared to the ultraviolet, and high carrier mobility with reduced carrier-transport scattering, which together make them an ideal photovoltaic material. A photovoltaic effect can be achieved in ideal single wall carbon nanotube (SWNT) diodes. Individual SWNTs can form ideal p-n junction diodes. Ideal behavior is the theoretical limit of performance for any diode, a highly sought-after goal in all electronic materials development. Under illumination, SWNT diodes show significant power conversion efficiencies owing to the enhanced properties of an ideal diode.
Recently, SWNTs were directly configured as energy conversion materials to fabricate thin-film solar cells, with nanotubes serving as both photogeneration sites and a charge-carrier collecting/transport layer. The solar cells consist of a semitransparent thin film of nanotubes conformally coated on an n-type crystalline silicon substrate to create high-density p-n heterojunctions between the nanotubes and n-Si, favoring charge separation and extracting electrons (through n-Si) and holes (through the nanotubes). Initial tests showed a power conversion efficiency of >1%, demonstrating that CNTs-on-Si is a potentially suitable configuration for making solar cells. Zhongrui Li first demonstrated that SOCl2 treatment of SWNTs boosts the power conversion efficiency of SWNT/n-Si heterojunction solar cells by more than 60%; this acid-doping approach was subsequently adopted widely in later published CNT/Si work.
Even higher efficiency can be achieved if acid liquid is kept inside the void space of nanotube network. Acid infiltration of nanotube networks significantly boosts the cell efficiency to 13.8%, as reported by Yi Jia, by reducing the internal resistance that improves fill factor, and by forming photoelectrochemical units that enhance charge separation and transport.
The problems induced by wet acid can be avoided by using aligned CNT films. In an aligned CNT film, the transport distance is shortened and the exciton quenching rate is reduced. An aligned nanotube film also has much smaller void space and better contact with the substrate. Thus, combined with strong acid doping, using aligned single wall carbon nanotube films can further improve the power conversion efficiency (a record-high power conversion efficiency of >11% was achieved by Yeonwoong Jung).
Zhongrui Li also made the first n-SWNT/p-Si photovoltaic device by tuning SWNTs from p-type to n-type through polyethylene imine functionalization.
Carbon nanotube composites in the photoactive layer
Combining the physical and chemical characteristics of conjugated polymers with the high conductivity along the tube axis of carbon nanotubes (CNTs) provides a great deal of incentive to disperse CNTs into the photoactive layer in order to obtain more efficient OPV devices. The interpenetrating bulk donor–acceptor heterojunction in these devices can achieve charge separation and collection because of the existence of a bicontinuous network. Along this network, electrons and holes can travel toward their respective contacts through the electron acceptor and the polymer hole donor. Photovoltaic efficiency enhancement is proposed to be due to the introduction of internal polymer/nanotube junctions within the polymer matrix. The high electric field at these junctions can split up the excitons, while the single-walled carbon nanotube (SWCNT) can act as a pathway for the electrons.
The dispersion of CNTs in a solution of an electron donating conjugated polymer is perhaps the most common strategy to implement CNT materials into OPVs. Generally poly(3-hexylthiophene) (P3HT) or poly(3-octylthiophene) (P3OT) are used for this purpose. These blends are then spin coated onto a transparent conductive electrode with thicknesses that vary from 60 to 120 nm. These conductive electrodes are usually glass covered with indium tin oxide (ITO) and a 40 nm sublayer of poly(3,4-ethylenedioxythiophene) (PEDOT) and poly(styrenesulfonate) (PSS). PEDOT and PSS help to smooth the ITO surface, decreasing the density of pinholes and stifling current leakage that occurs along shunting paths. Through thermal evaporation or sputter coating, a 20 to 70 nm thick layer of aluminum and sometimes an intermediate layer of lithium fluoride are then applied onto the photoactive material. Multiple research investigations with both multi-walled carbon nanotubes (MWCNTs) and single-walled carbon nanotubes (SWCNTs) integrated into the photoactive material have been completed.
Enhancements of more than two orders of magnitude have been observed in the photocurrent from adding SWCNTs to the P3OT matrix. Improvements were speculated to be due to charge separation at polymer–SWCNT connections and more efficient electron transport through the SWCNTs. However, a rather low power conversion efficiency of 0.04% under 100 mW/cm2 white illumination was observed for the device suggesting incomplete exciton dissociation at low CNT concentrations of 1.0% wt. Because the lengths of the SWCNTs were similar to the thickness of photovoltaic films, doping a higher percentage of SWCNTs into the polymer matrix was believed to cause short circuits. To supply additional dissociation sites, other researchers have physically blended functionalized MWCNTs into P3HT polymer to create a P3HT-MWCNT with fullerene C60 double-layered device. However, the power efficiency was still relatively low at 0.01% under 100 mW/cm2 white illumination. Weak exciton diffusion toward the donor–acceptor interface in the bilayer structure may have been the cause in addition to the fullerene C60 layer possibly experiencing poor electron transport.
More recently, a polymer photovoltaic device from C60-modified SWCNTs and P3HT has been fabricated. Microwave irradiating a mixture of aqueous SWCNT solution and C60 solution in toluene was the first step in making these polymer-SWCNT composites. Conjugated polymer P3HT was then added resulting in a power conversion efficiency of 0.57% under simulated solar irradiation (95 mW/cm2). It was concluded that improved short circuit current density was a direct result of the addition of SWCNTs into the composite causing faster electron transport via the network of SWCNTs. It was also concluded that the morphology change led to an improved fill factor. Overall, the main result was improved power conversion efficiency with the addition of SWCNTs, compared to cells without SWCNTs; however, further optimization was thought to be possible.
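The efficiencies quoted throughout this section follow from the standard photovoltaic relation between short-circuit current density, open-circuit voltage, fill factor and incident power. A minimal sketch, with hypothetical cell parameters chosen only to reproduce an efficiency of the quoted magnitude:

def power_conversion_efficiency(j_sc, v_oc, fill_factor, p_in):
    """PCE = (J_sc * V_oc * FF) / P_in.
    j_sc in mA/cm^2, v_oc in V, p_in in mW/cm^2."""
    return j_sc * v_oc * fill_factor / p_in

# Hypothetical values under the 95 mW/cm^2 illumination mentioned above
print(power_conversion_efficiency(j_sc=1.9, v_oc=0.55, fill_factor=0.52, p_in=95))
# ~0.0057, i.e. about 0.57%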
Additionally, it has been found that heating to the point beyond the glass transition temperature of either P3HT or P3OT after construction can be beneficial for manipulating the phase separation of the blend. This heating also affects the ordering of the polymeric chains because the polymers are microcrystalline systems and it improves charge transfer, charge transport, and charge collection throughout the OPV device. The hole mobility and power efficiency of the polymer-CNT device also increased significantly as a result of this ordering.
Emerging as another valuable approach for deposition, the use of tetraoctylammonium bromide in tetrahydrofuran has also been the subject of investigation to assist in suspension by exposing SWCNTs to an electrophoretic field. In fact, photoconversion efficiencies of 1.5% and 1.3% were achieved when SWCNTs were deposited in combination with light harvesting cadmium sulfide (CdS) quantum dots and porphyrins, respectively.
Among the best power conversions achieved to date using CNTs are those obtained by depositing a SWCNT layer between the ITO and the PEDOT:PSS or between the PEDOT:PSS and the photoactive blend in a modified ITO/PEDOT:PSS/P3HT:(6,6)-phenyl-C61-butyric acid methyl ester (PCBM)/Al solar cell. By dip-coating from a hydrophilic suspension, SWCNTs were deposited after initially exposing the surface to an argon plasma, achieving a power conversion efficiency of 4.9%, compared to 4% without CNTs.
However, even though CNTs have shown potential in the photoactive layer, they have not resulted in a solar cell with a power conversion efficiency greater than the best tandem organic cells (6.5% efficiency). But, it has been shown in most of the previous investigations that the control over a uniform blending of the electron donating conjugated polymer and the electron accepting CNT is one of the most difficult as well as crucial aspects in creating efficient photocurrent collection in CNT-based OPV devices. Therefore, using CNTs in the photoactive layer of OPV devices is still in the initial research stages and there is still room for novel methods to better take advantage of the beneficial properties of CNTs.
One issue with utilizing SWCNTs for the photoactive layer of PV devices is the mixed purity when synthesized (about 1/3 metallic and 2/3 semiconducting). Metallic SWCNTs (m-SWCNTs) can cause exciton recombination between the electron and hole pairs, and the junction between metallic and semiconducting SWCNTs (s-SWCNTs) form Schottky barriers that reduce the hole transmission probability. The discrepancy in electronic structure of synthesized CNTs requires electronic sorting to separate and remove the m-SWCNTs to optimize the semiconducting performance. This may be accomplished through diameter and electronic sorting of CNTs through a density gradient ultracentrifugation (DGU) process, involving a gradient of surfactants that can separate the CNTs by diameter, chirality, and electronic type. This sorting method enables the separation of m-SWCNTs and the precise collection of multiple chiralities of s-SWCNTs, each chirality able to absorb a unique wavelength of light.
The multiple chiralities of s-SWCNTs are used as the hole transport material along with the fullerene component PC71BM to fabricate heterojunctions for the PV active layer. The polychiral s-SWCNTs enable a wide range optical absorption from visible to near-infrared (NIR) light, increasing the photo current relative to using single chirality nanotubes. To maximize light absorption, the inverted device structure was used with a zinc oxide nanowire layer penetrating the active layer to minimize collection length. Molybdenum oxide (MoOx) was utilized as a high work function hole transport layer to maximize voltage.
Cells fabricated with this architecture have achieved record power conversion efficiencies of 3.1%, higher than any other solar cell materials that utilize CNTs in the active layer. This design also has exceptional stability, with the PCE remaining at around 90% over a period of 30 days. The exceptional chemical stability of carbon nanomaterials enables excellent environmental stability compared to most organic photovoltaics that must be encapsulated to reduce degradation.
Relative to the best polymer-fullerene heterojunction solar cells, which have PCEs of about 10%, polychiral nanotube and fullerene solar cells still fall far short. Nevertheless, these findings push the achievable limits of CNT technology in solar cells. The ability of polychiral nanotubes to absorb in the NIR regime can be utilized to improve the efficiencies of future multi-junction tandem solar cells, along with increasing the lifetime and durability of future noncrystalline solar cells.
Carbon nanotubes as a transparent electrode
ITO is currently the most popular material used for the transparent electrodes in OPV devices; however, it has a number of deficiencies. For one, it is not very compatible with polymeric substrates due to its high deposition temperature of around 600 °C. Traditional ITO also has unfavorable mechanical properties such as being relatively fragile. In addition, the combination of costly layer deposition in vacuum and a limited supply of indium results in high quality ITO transparent electrodes being very expensive. Therefore, developing and commercializing a replacement for ITO is a major focus of OPV research and development.
Conductive CNT coatings have recently become a prospective substitute based on a wide range of methods including spraying, spin coating, casting, layer-by-layer, and Langmuir–Blodgett deposition. The transfer from a filter membrane to the transparent support using a solvent or in the form of an adhesive film is another method for attaining flexible and optically transparent CNT films. Other research efforts have shown that films made of arc-discharge CNTs can result in high conductivity and transparency. Furthermore, the work function of SWCNT networks is in the 4.8 to 4.9 eV range (compared to ITO, which has a lower work function of 4.7 eV), leading to the expectation that the SWCNT work function should be high enough to assure efficient hole collection. Another benefit is that SWCNT films exhibit a high optical transparency in a broad spectral range from the UV-visible to the near-infrared range. Only a few materials retain reasonable transparency in the infrared spectrum while maintaining transparency in the visible part of the spectrum as well as acceptable overall electrical conductivity. SWCNT films are highly flexible, do not creep, do not crack after bending, theoretically have high thermal conductivities to tolerate heat dissipation, and have high radiation resistance. However, the electrical sheet resistance of ITO is an order of magnitude less than the sheet resistance measured for SWCNT films. Nonetheless, initial research studies demonstrate SWCNT thin films can be used as conducting, transparent electrodes for hole collection in OPV devices with efficiencies between 1% and 2.5%, confirming that they are comparable to devices fabricated using ITO. Thus, possibilities exist for advancing this research to develop CNT-based transparent electrodes that exceed the performance of traditional ITO materials.
CNTs in dye-sensitized solar cells
Due to the simple fabrication process, low production cost, and high efficiency, there is significant interest in dye-sensitized solar cells (DSSCs). Thus, improving DSSC efficiency has been the subject of a variety of research investigations because it has the potential to be manufactured economically enough to compete with other solar cell technologies. Titanium dioxide nanoparticles have been widely used as a working electrode for DSSCs because they provide a high efficiency, more than any other metal oxide semiconductor investigated. Yet the highest conversion efficiency under air mass (AM) 1.5 (100 mW/cm2) irradiation reported for this device to date is about 11%. Despite this initial success, the effort to further enhance efficiency has not produced any major results. The transport of electrons across the particle network has been a key problem in achieving higher photoconversion efficiency in nanostructured electrodes. Because electrons encounter many grain boundaries during the transit and experience a random path, the probability of their recombination with oxidized sensitizer is increased. Therefore, it is not adequate to enlarge the oxide electrode surface area to increase efficiency because photo-generated charge recombination should be prevented. Promoting electron transfer through film electrodes and blocking interface states lying below the edge of the conduction band are some of the non-CNT based strategies to enhance efficiency that have been employed.
With recent progress in CNT development and fabrication, there is promise to use various CNT based nanocomposites and nanostructures to direct the flow of photogenerated electrons and assist in charge injection and extraction. To assist the electron transport to the collecting electrode surface in a DSSC, a popular concept is to utilize CNT networks as support to anchor light harvesting semiconductor particles. Research efforts along these lines include organizing CdS quantum dots on SWCNTs. Charge injection from excited CdS into SWCNTs was documented upon excitation of CdS nanoparticles. Other varieties of semiconductor particles including CdSe and CdTe can induce charge-transfer processes under visible light irradiation when attached to CNTs. Including porphyrin and C60 fullerene, organization of photoactive donor polymer and acceptor fullerene on electrode surfaces has also been shown to offer considerable improvement in the photoconversion efficiency of solar cells. Therefore, there is an opportunity to facilitate electron transport and increase the photoconversion efficiency of DSSCs utilizing the electron-accepting ability of semiconducting SWCNTs.
Other researchers fabricated DSSCs using the sol-gel method to obtain titanium dioxide-coated MWCNTs for use as an electrode. Because pristine MWCNTs have a hydrophobic surface and poor dispersion stability, pretreatment was necessary for this application. A relatively low-destruction method for removing impurities, H2O2 treatment was used to generate carboxylic acid groups by oxidation of MWCNTs. Another positive aspect was the fact that the reaction gases, including CO2 and H2O, were non-toxic and could be released safely during the oxidation process. As a result of treatment, H2O2-exposed MWCNTs have a hydrophilic surface and the carboxylic acid groups on the surface have polar covalent bonding. Also, the negatively charged surface of the MWCNTs improved the stability of dispersion. By then entirely surrounding the MWCNTs with titanium dioxide nanoparticles using the sol-gel method, an increase in the conversion efficiency of about 50% compared to a conventional titanium dioxide cell was achieved. The enhanced interconnectivity between the titanium dioxide particles and the MWCNTs in the porous titanium dioxide film was concluded to be the cause of the improvement in short circuit current density. Here again, the addition of MWCNTs was thought to provide more efficient electron transfer through the film in the DSSC.
See also
Allotropes of carbon
Carbon nanotube
Optical properties of carbon nanotubes
Selective chemistry of single-walled nanotubes
References
Carbon nanotubes
Solar cells
Nanoelectronics
Thin-film cells
Photovoltaics | Carbon nanotubes in photovoltaics | [
"Materials_science",
"Mathematics"
] | 4,529 | [
"Thin-film cells",
"Nanoelectronics",
"Nanotechnology",
"Planes (geometry)",
"Thin films"
] |
17,287,701 | https://en.wikipedia.org/wiki/WIEN2k | The WIEN2k package is a computer program written in Fortran that performs quantum mechanical calculations on periodic solids. It uses the full-potential (linearized) augmented plane-wave and local-orbitals [FP-(L)APW+lo] basis set to solve the Kohn–Sham equations of density functional theory.
WIEN2k was originally developed by Peter Blaha and Karlheinz Schwarz from the Institute of Materials Chemistry of the Vienna University of Technology. The first public release of the code was done in 1990. Subsequent releases were WIEN93, WIEN97, and WIEN2k. The latest version WIEN2k_24.1 was released in August 2024. It has been licensed by more than 3400 user groups and has about 16000 citations on Google Scholar (Blaha WIEN2k).
WIEN2k uses density functional theory to calculate the electronic structure of a solid. It is based on one of the most accurate schemes for band structure calculations: the full-potential (linearized) augmented plane wave ((L)APW) + local orbitals (lo) method. WIEN2k uses an all-electron solution, including relativistic terms.
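Schematically, and in a standard textbook form rather than one taken from the WIEN2k documentation, an (L)APW basis function is a plane wave in the interstitial region that is augmented by atomic-like functions inside each muffin-tin sphere:

$\phi_{\mathbf{k}+\mathbf{G}}(\mathbf{r}) = \begin{cases} \frac{1}{\sqrt{\Omega}} \, e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}} & \mathbf{r} \in \text{interstitial region} \\ \sum_{lm} \left[ A_{lm} u_l(r, E_l) + B_{lm} \dot{u}_l(r, E_l) \right] Y_{lm}(\hat{\mathbf{r}}) & \mathbf{r} \in \text{muffin-tin sphere} \end{cases}$

where $u_l$ solves the radial Schrödinger equation at the linearization energy $E_l$, $\dot{u}_l$ is its energy derivative, and the coefficients $A_{lm}$, $B_{lm}$ are fixed by matching value and slope at the sphere boundary.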
Features and calculated properties
WIEN2k works with both centrosymmetric and non-centrosymmetric lattices, with 230 built-in space groups. It supports a variety of functionals including local-density approximation (LDA), many different generalized gradient approximations (GGA), Hubbard models, on-site hybrids, meta-GGA and full hybrids, and can also include spin-orbit coupling and Van der Waals terms. It can be used for structure optimization, both unit cell dimensions and internal atomic positions. For the latter an adaptive fixed-point iteration is used which simultaneously solves for atomic positions and the electron density. The code supports both OpenMP and MPI parallelization, which can be used efficiently in combination. It also supports parallelization by dispatching parts of the calculations to different computers.
A number of different properties can be calculated using the densities, many of these in packages which have been contributed by users over the years. WIEN2k can be used to calculate:
Density of states
Electron and spin density
Bader charges and critical points
Wannier functions
Total energies and energy differences
Fermi surfaces
Optical properties
X-ray structure factors
Atomic forces, from which phonon and elastic properties can be extracted
Electric field gradients
Nuclear magnetic resonance spectra
X-ray emission and X-ray absorption spectra
Electron energy loss spectra
Berry phase and related topological properties.
See also
List of quantum chemistry and solid state physics software
References
External links
WIEN2k homepage
Computational chemistry software
Density functional theory software
Physics software | WIEN2k | [
"Physics",
"Chemistry"
] | 554 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
4,692,179 | https://en.wikipedia.org/wiki/Low-energy%20electron%20diffraction | Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV) and observation of diffracted electrons as spots on a fluorescent screen.
LEED may be used in one of two ways:
Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I–V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface at hand.
Historical perspective
An electron-diffraction experiment similar to modern LEED was the first to observe the wavelike properties of electrons, but LEED was established as a ubiquitous tool in surface science only with the advances in vacuum generation and electron detection techniques.
Davisson and Germer's discovery of electron diffraction
The theoretical possibility of the occurrence of electron diffraction first emerged in 1924, when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In his Nobel Prize-winning work, de Broglie postulated that the wavelength of a particle with linear momentum p is given by λ = h/p, where h is the Planck constant.
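Numerically, for non-relativistic electrons of kinetic energy $E$ this gives the standard worked form

$\lambda = \frac{h}{\sqrt{2 m_e E}} \approx \sqrt{\frac{150.4}{E/\mathrm{eV}}} \; \text{Å},$

so a 100 eV electron has $\lambda \approx 1.23$ Å, comparable to interatomic spacings in crystals.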
The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927, when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays developed by Bragg and Laue earlier. Before the acceptance of the de Broglie hypothesis, diffraction was believed to be an exclusive property of waves.
Davisson and Germer published notes of their electron-diffraction experiment result in Nature and in Physical Review in 1927. One month after Davisson and Germer's work appeared, Thomson and Reid published their electron-diffraction work with higher kinetic energy (a thousand times higher than the energy used by Davisson and Germer) in the same journal. Those experiments revealed the wave property of electrons and opened up an era of electron-diffraction study.
Development of LEED as a tool in surface science
Though discovered in 1927, low-energy electron diffraction did not become a popular tool for surface analysis until the early 1960s. The main reasons were that monitoring directions and intensities of diffracted beams was a difficult experimental process due to inadequate vacuum techniques and slow detection methods such as a Faraday cup. Also, since LEED is a surface-sensitive method, it required well-ordered surface structures. Techniques for the preparation of clean metal surfaces first became available much later.
Nonetheless, H. E. Farnsworth and coworkers at Brown University pioneered the use of LEED as a method for characterizing the adsorption of gases onto clean metal surfaces and the associated regular adsorption phases, starting shortly after the Davisson and Germer discovery and continuing into the 1970s.
In the early 1960s LEED experienced a renaissance, as ultra-high vacuum became widely available, and the post-acceleration detection method was introduced by Germer and his coworkers at Bell Labs using a flat phosphor screen. Using this technique, diffracted electrons were accelerated to high energies to produce clear and visible diffraction patterns on the screen. Ironically the post-acceleration method had already been proposed by Ehrenberg in 1934. In 1962 Lander and colleagues introduced the modern hemispherical screen with associated hemispherical grids. In the mid-1960s, modern LEED systems became commercially available as part of the ultra-high-vacuum instrumentation suite by Varian Associates and triggered an enormous boost of activities in surface science. Notably, future Nobel prize winner Gerhard Ertl started his studies of surface chemistry and catalysis on such a Varian system.
It soon became clear that the kinematic (single-scattering) theory, which had been successfully used to explain X-ray diffraction experiments, was inadequate for the quantitative interpretation of experimental data obtained from LEED. At this stage a detailed determination of surface structures, including adsorption sites, bond angles and bond lengths was not possible.
A dynamical electron-diffraction theory, which took into account the possibility of multiple scattering, was established in the late 1960s. With this theory, it later became possible to reproduce experimental data with high precision.
Experimental setup
In order to keep the studied sample clean and free from unwanted adsorbates, LEED experiments are performed in an ultra-high vacuum environment (residual gas pressure <10⁻⁷ Pa).
LEED optics
The main components of a LEED instrument are:
An electron gun from which monochromatic electrons are emitted by a cathode filament that is at a negative potential, typically 10–600 V, with respect to the sample. The electrons are accelerated and focused into a beam, typically about 0.1 to 0.5 mm wide, by a series of electrodes serving as electron lenses. Some of the electrons incident on the sample surface are backscattered elastically, and diffraction can be detected if sufficient order exists on the surface. This typically requires a region of single crystal surface as wide as the electron beam, although sometimes polycrystalline surfaces such as highly oriented pyrolytic graphite (HOPG) are sufficient.
A high-pass filter for scattered electrons in the form of a retarding field analyzer, which blocks all but elastically scattered electrons. It usually contains three or four hemispherical concentric grids. Because only radial fields around the sampled point would be allowed and the geometry of the sample and the surrounding area is not spherical, the space between the sample and the analyzer has to be field-free. The first grid, therefore, separates the space above the sample from the retarding field. The next grid is at a negative potential to block low energy electrons, and is called the suppressor or the gate. To make the retarding field homogeneous and mechanically more stable another grid at the same potential is added behind the second grid. The fourth grid is only necessary when the LEED is used like a tetrode and the current at the screen is measured, when it serves as screen between the gate and the anode.
A hemispherical positively-biased fluorescent screen on which the diffraction pattern can be directly observed, or a position-sensitive electron detector. Most new LEED systems use a reverse view scheme, which has a minimized electron gun, and the pattern is viewed from behind through a transmission screen and a viewport. Recently, a new digitized position sensitive detector called a delay-line detector with better dynamic range and resolution has been developed.
Sample
The sample of the desired surface crystallographic orientation is initially cut and prepared outside the vacuum chamber. The correct alignment of the crystal can be achieved with the help of X-ray diffraction methods such as Laue diffraction. After being mounted in the UHV chamber the sample is cleaned and flattened. Unwanted surface contaminants are removed by ion sputtering or by chemical processes such as oxidation and reduction cycles. The surface is flattened by annealing at high temperatures.
Once a clean and well-defined surface is prepared, monolayers can be adsorbed on the surface by exposing it to a gas consisting of the desired adsorbate atoms or molecules.
Often the annealing process will let bulk impurities diffuse to the surface and therefore give rise to a re-contamination after each cleaning cycle. The problem is that impurities that adsorb without changing the basic symmetry of the surface, cannot easily be identified in the diffraction pattern. Therefore, in many LEED experiments Auger electron spectroscopy is used to accurately determine the purity of the sample.
Using the detector for Auger electron spectroscopy
In some instruments, the LEED optics are also used for Auger electron spectroscopy. To improve the measured signal, the gate voltage is scanned in a linear ramp. An RC circuit serves to derive the second derivative, which is then amplified and digitized. To reduce the noise, multiple passes are summed. The first derivative is very large due to the residual capacitive coupling between the gate and the anode and may degrade the performance of the circuit; this can be compensated by applying a negative ramp to the screen. It is also possible to add a small sine wave to the gate and tune a high-Q RLC circuit to the second harmonic to detect the second derivative.
Data acquisition
A modern data acquisition system usually contains a CCD/CMOS camera pointed to the screen for diffraction pattern visualization and a computer for data recording and further analysis. More expensive instruments have in-vacuum position sensitive electron detectors that measure the current directly, which helps in the quantitative I–V analysis of the diffraction spots.
Theory
Surface sensitivity
The basic reason for the high surface sensitivity of LEED is that for low-energy electrons the interaction between the solid and electrons is especially strong. Upon penetrating the crystal, primary electrons will lose kinetic energy due to inelastic scattering processes such as plasmon and phonon excitations, as well as electron–electron interactions.
In cases where the detailed nature of the inelastic processes is unimportant, they are commonly treated by assuming an exponential decay of the primary electron-beam intensity $I_0$ in the direction of propagation:

$I(d) = I_0 \, e^{-d/\Lambda}$

Here $d$ is the penetration depth, and $\Lambda$ denotes the inelastic mean free path, defined as the distance an electron can travel before its intensity has decreased by the factor $1/e$. While the inelastic scattering processes and consequently the electronic mean free path depend on the energy, the latter is relatively independent of the material. The mean free path turns out to be minimal (5–10 Å) in the energy range of low-energy electrons (20–200 eV). This effective attenuation means that only a few atomic layers are sampled by the electron beam, and, as a consequence, the contribution of deeper atoms to the diffraction progressively decreases.
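A rough numerical sketch of this attenuation (the mean free path and interlayer spacing below are illustrative assumptions):

import math

mfp = 7.0     # assumed inelastic mean free path in Å (within the 5-10 Å range)
layer = 2.0   # assumed interlayer spacing in Å

for n in range(1, 6):
    frac = math.exp(-n * layer / mfp)
    print(f"after layer {n}: {frac:.2f} of the beam is still unscattered")
# After ~5 layers only about 24% of the primary intensity survives,
# which is why LEED samples just the topmost atomic layers.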
Kinematic theory: single scattering
Kinematic diffraction is defined as the situation where electrons impinging on a well-ordered crystal surface are elastically scattered only once by that surface. In the theory the electron beam is represented by a plane wave with a wavelength $\lambda$ given by the de Broglie hypothesis:

$\lambda = \frac{h}{p}$
The interaction between the scatterers present in the surface and the incident electrons is most conveniently described in reciprocal space. In three dimensions the primitive reciprocal lattice vectors $\{\mathbf{a}^*, \mathbf{b}^*, \mathbf{c}^*\}$ are related to the real space lattice $\{\mathbf{a}, \mathbf{b}, \mathbf{c}\}$ in the following way:

$\mathbf{a}^* = 2\pi \, \frac{\mathbf{b} \times \mathbf{c}}{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}, \qquad \mathbf{b}^* = 2\pi \, \frac{\mathbf{c} \times \mathbf{a}}{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}, \qquad \mathbf{c}^* = 2\pi \, \frac{\mathbf{a} \times \mathbf{b}}{\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})}$
For an incident electron with wave vector $\mathbf{k}$ and scattered wave vector $\mathbf{k}'$, the condition for constructive interference and hence diffraction of scattered electron waves is given by the Laue condition:

$\mathbf{k}' - \mathbf{k} = \mathbf{G}_{hkl}$

where $(h, k, l)$ is a set of integers, and

$\mathbf{G}_{hkl} = h\,\mathbf{a}^* + k\,\mathbf{b}^* + l\,\mathbf{c}^*$

is a vector of the reciprocal lattice. Note that these vectors specify the Fourier components of charge density in the reciprocal (momentum) space, and that the incoming electrons scatter at these density modulations within the crystal lattice. The magnitudes of the wave vectors are unchanged, i.e. $|\mathbf{k}'| = |\mathbf{k}|$, because only elastic scattering is considered.
Since the mean free path of low-energy electrons in a crystal is only a few angstroms, only the first few atomic layers contribute to the diffraction. This means that there are no diffraction conditions in the direction perpendicular to the sample surface. As a consequence, the reciprocal lattice of a surface is a 2D lattice with rods extending perpendicular from each lattice point. The rods can be pictured as regions where the reciprocal lattice points are infinitely dense.
Therefore, in the case of diffraction from a surface the Laue condition reduces to the 2D form:

$\mathbf{k}'_{\parallel} - \mathbf{k}_{\parallel} = \mathbf{G}_{hk} = h\,\mathbf{a}^* + k\,\mathbf{b}^*$

where $\mathbf{a}^*$ and $\mathbf{b}^*$ are the primitive translation vectors of the 2D reciprocal lattice of the surface and $\mathbf{k}'_{\parallel}$, $\mathbf{k}_{\parallel}$ denote the component of respectively the reflected and incident wave vector parallel to the sample surface. $\mathbf{a}^*$ and $\mathbf{b}^*$ are related to the real space surface lattice $\{\mathbf{a}, \mathbf{b}\}$, with $\hat{\mathbf{n}}$ as the surface normal, in the following way:

$\mathbf{a}^* = 2\pi \, \frac{\mathbf{b} \times \hat{\mathbf{n}}}{|\mathbf{a} \times \mathbf{b}|}, \qquad \mathbf{b}^* = 2\pi \, \frac{\hat{\mathbf{n}} \times \mathbf{a}}{|\mathbf{a} \times \mathbf{b}|}$
The Laue-condition equation can readily be visualized using the Ewald's sphere construction.
Figures 3 and 4 show a simple illustration of this principle: The wave vector of the incident electron beam is drawn such that it terminates at a reciprocal lattice point. The Ewald's sphere is then the sphere with radius and origin at the center of the incident wave vector. By construction, every wave vector centered at the origin and terminating at an intersection between a rod and the sphere will then satisfy the 2D Laue condition and thus represent an allowed diffracted beam.
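For normal incidence this construction can be evaluated directly: a beam $(h, k)$ leaves the surface at the polar angle $\theta$ with $\sin\theta = |\mathbf{G}_{hk}|/|\mathbf{k}|$. A minimal sketch for a square surface lattice (the lattice constant and energy are illustrative assumptions):

import math

a = 4.0     # assumed surface lattice constant in Å
E = 100.0   # electron energy in eV

k = 2 * math.pi / math.sqrt(150.4 / E)   # |k| in 1/Å from the de Broglie relation
for h, kk in [(1, 0), (1, 1), (2, 0)]:
    G = 2 * math.pi * math.hypot(h, kk) / a   # |G_hk| of a square lattice
    s = G / k
    if s <= 1:   # only rods intersecting the Ewald sphere yield beams
        print(h, kk, round(math.degrees(math.asin(s)), 1))   # exit angle in degrees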
Interpretation of LEED patterns
Figure 4 shows the Ewald's sphere for the case of normal incidence of the primary electron beam, as would be the case in an actual LEED setup. It is apparent that the pattern observed on the fluorescent screen is a direct picture of the reciprocal lattice of the surface. The spots are indexed according to the values of h and k. The size of the Ewald's sphere and hence the number of diffraction spots on the screen is controlled by the incident electron energy. From the knowledge of the reciprocal lattice, models for the real space lattice can be constructed and the surface can be characterized at least qualitatively in terms of the surface periodicity and the point group. Figure 7 shows a model of an unreconstructed (100) face of a simple cubic crystal and the expected LEED pattern. Since these patterns can be inferred from the crystal structure of the bulk crystal, known from other more quantitative diffraction techniques, LEED is more interesting in the cases where the surface layers of a material reconstruct, or where surface adsorbates form their own superstructures.
Superstructures
Overlaying superstructures on a substrate surface may introduce additional spots in the known (1×1) arrangement. These are known as extra spots or super spots. Figure 6 shows many such spots appearing after a simple hexagonal surface of a metal has been covered with a layer of graphene. Figure 7 shows a schematic of real and reciprocal space lattices for a simple (1×2) superstructure on a square lattice.
For a commensurate superstructure the symmetry and the rotational alignment with respect to the adsorbent surface can be determined from the LEED pattern. This is easiest shown by using a matrix notation, where the primitive translation vectors of the superlattice $\{\mathbf{a}_s, \mathbf{b}_s\}$ are linked to the primitive translation vectors of the underlying (1×1) lattice $\{\mathbf{a}, \mathbf{b}\}$ in the following way

$\mathbf{a}_s = G_{11}\,\mathbf{a} + G_{12}\,\mathbf{b}$
$\mathbf{b}_s = G_{21}\,\mathbf{a} + G_{22}\,\mathbf{b}$

The matrix for the superstructure then is

$G = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}$

Similarly, the primitive translation vectors of the lattice describing the extra spots $\{\mathbf{a}^*_s, \mathbf{b}^*_s\}$ are linked to the primitive translation vectors of the reciprocal lattice $\{\mathbf{a}^*, \mathbf{b}^*\}$

$\mathbf{a}^*_s = G^*_{11}\,\mathbf{a}^* + G^*_{12}\,\mathbf{b}^*$
$\mathbf{b}^*_s = G^*_{21}\,\mathbf{a}^* + G^*_{22}\,\mathbf{b}^*$

$G^*$ is related to $G$ in the following way

$G^* = \left( G^{-1} \right)^{\mathrm{T}}$
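A small numerical sketch of this relation, using the (1×2) superstructure of Figure 7 as input:

import numpy as np

G = np.array([[1, 0],
              [0, 2]])         # matrix of the (1x2) superstructure

G_star = np.linalg.inv(G).T    # matrix of the reciprocal superlattice
print(G_star)                  # [[1. 0.] [0. 0.5]]
# The half-order entry reproduces the extra spots at half-integer
# positions seen in the corresponding LEED pattern.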
Domains
An essential problem when considering LEED patterns is the existence of symmetrically equivalent domains. Domains may lead to diffraction patterns that have higher symmetry than the actual surface at hand. The reason is that usually the cross sectional area of the primary electron beam (~1 mm2) is large compared to the average domain size on the surface and hence the LEED pattern might be a superposition of diffraction beams from domains oriented along different axes of the substrate lattice.
However, since the average domain size is generally larger than the coherence length of the probing electrons, interference between electrons scattered from different domains can be neglected. Therefore, the total LEED pattern emerges as the incoherent sum of the diffraction patterns associated with the individual domains.
Figure 8 shows the superposition of the diffraction patterns for the two orthogonal domains (2×1) and (1×2) on a square lattice, i.e. for the case where one structure is just rotated by 90° with respect to the other. The (1×2) structure and the respective LEED pattern are shown in Figure 7. It is apparent that the local symmetry of the surface structure is twofold while the LEED pattern exhibits a fourfold symmetry.
Figure 1 shows a real diffraction pattern of the same situation for the case of a Si(100) surface. However, here the (2×1) structure is formed due to surface reconstruction.
Dynamical theory: multiple scattering
The inspection of the LEED pattern gives a qualitative picture of the surface periodicity i.e. the size of the surface unit cell and to a certain degree of surface symmetries. However it will give no information about the atomic arrangement within a surface unit cell or the sites of adsorbed atoms. For instance, when the whole superstructure in Figure 7 is shifted such that the atoms adsorb in bridge sites instead of on-top sites the LEED pattern stays the same, although the individual spot intensities may somewhat differ.
A more quantitative analysis of LEED experimental data can be achieved by analysis of so-called I–V curves, which are measurements of the intensity versus incident electron energy. The I–V curves can be recorded by using a camera connected to computer controlled data handling or by direct measurement with a movable Faraday cup. The experimental curves are then compared to computer calculations based on the assumption of a particular model system. The model is changed in an iterative process until a satisfactory agreement between experimental and theoretical curves is achieved. A quantitative measure for this agreement is the so-called reliability- or R-factor. A commonly used reliability factor is the one proposed by Pendry. It is expressed in terms of the logarithmic derivative of the intensity:

$L(E) = I'(E)/I(E)$

The R-factor is then given by:

$R_P = \frac{\sum_g \int \left( Y_{g,\mathrm{th}}(E) - Y_{g,\mathrm{exp}}(E) \right)^2 dE}{\sum_g \int \left( Y_{g,\mathrm{th}}^2(E) + Y_{g,\mathrm{exp}}^2(E) \right) dE}$

where $Y(E) = L^{-1}/(L^{-2} + V_{0i}^2)$ and $V_{0i}$ is the imaginary part of the electron self-energy. In general, $R_P < 0.2$ is considered a good agreement, $R_P \approx 0.3$ is considered mediocre, and $R_P \approx 0.5$ is considered a bad agreement. Figure 9 shows examples of the comparison between experimental I–V spectra and theoretical calculations.
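A schematic implementation of this comparison (a sketch assuming both curves are sampled on a common energy grid and are strictly positive):

import numpy as np

def pendry_r_factor(E, I_exp, I_th, V0i=4.0):
    """Pendry R-factor between an experimental and a theoretical I-V curve."""
    def Y(I):
        L = np.gradient(I, E) / I            # logarithmic derivative L = I'/I
        return L / (1.0 + (V0i * L) ** 2)    # Pendry Y-function
    y_e, y_t = Y(I_exp), Y(I_th)
    return np.trapz((y_e - y_t) ** 2, E) / np.trapz(y_e ** 2 + y_t ** 2, E)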
Dynamical LEED calculations
The term dynamical stems from the studies of X-ray diffraction and describes the situation where the response of the crystal to an incident wave is included self-consistently and multiple scattering can occur. The aim of any dynamical LEED theory is to calculate the intensities of diffraction of an electron beam impinging on a surface as accurately as possible.
A common method to achieve this is the self-consistent multiple scattering approach. One essential point in this approach is the assumption that the scattering properties of the surface, i.e. of the individual atoms, are known in detail. The main task then reduces to the determination of the effective wave field incident on the individual scatters present in the surface, where the effective field is the sum of the primary field and the field emitted from all the other atoms. This must be done in a self-consistent way, since the emitted field of an atom depends on the incident effective field upon it. Once the effective field incident on each atom is determined, the total field emitted from all atoms can be found and its asymptotic value far from the crystal then gives the desired intensities.
A common approach in LEED calculations is to describe the scattering potential of the crystal by a "muffin tin" model, where the crystal potential can be imagined being divided up by non-overlapping spheres centered at each atom such that the potential has a spherically symmetric form inside the spheres and is constant everywhere else. The choice of this potential reduces the problem to scattering from spherical potentials, which can be dealt with effectively. The task is then to solve the Schrödinger equation for an incident electron wave in that "muffin tin" potential.
Related techniques
Tensor LEED
In LEED the exact atomic configuration of a surface is determined by a trial and error process where measured I–V curves are compared to computer-calculated spectra under the assumption of a model structure. From an initial reference structure a set of trial structures is created by varying the model parameters. The parameters are changed until an optimal agreement between theory and experiment is achieved. However, for each trial structure a full LEED calculation with multiple scattering corrections must be conducted. For systems with a large parameter space the need for computational time might become significant. This is the case for complex surfaces structures or when considering large molecules as adsorbates.
Tensor LEED is an attempt to reduce the computational effort needed by avoiding full LEED calculations for each trial structure. The scheme is as follows: One first defines a reference surface structure for which the I–V spectrum is calculated. Next a trial structure is created by displacing some of the atoms. If the displacements are small the trial structure can be considered as a small perturbation of the reference structure and first-order perturbation theory can be used to determine the I–V curves of a large set of trial structures.
Spot profile analysis low-energy electron diffraction (SPA-LEED)
A real surface is not perfectly periodic but has many imperfections in the form of dislocations, atomic steps, terraces and the presence of unwanted adsorbed atoms. This departure from a perfect surface leads to a broadening of the diffraction spots and adds to the background intensity in the LEED pattern.
SPA-LEED is a technique where the profile and shape of the intensity of diffraction beam spots is measured. The spots are sensitive to the irregularities in the surface structure and their examination therefore permits more-detailed conclusions about some surface characteristics. Using SPA-LEED may for instance permit a quantitative determination of the surface roughness, terrace sizes, dislocation arrays, surface steps and adsorbates.
Although some degree of spot profile analysis can be performed in regular LEED and even LEEM setups, dedicated SPA-LEED setups, which scan the profile of the diffraction spot over a dedicated channeltron detector allow for much higher dynamic range and profile resolution.
Other
Spin-polarized low energy electron diffraction
Inelastic low energy electron diffraction
Very low-energy electron diffraction (VLEED)
Reflection high-energy electron diffraction (RHEED)
Ultrafast low-energy electron diffraction (ULEED)
See also
List of surface analysis methods
External links
LEED program packages
LEED pattern analyzer (LEEDpat)
References
Laboratory techniques in condensed matter physics
Electron beam
Diffraction
Materials science
Scientific techniques | Low-energy electron diffraction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,628 | [
"Electron",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Electron beam",
"Materials science",
"Laboratory techniques in condensed matter physics",
"Diffraction",
"Crystallography",
"Condensed matter physics",
"nan",
"Spectroscopy"
] |
8,085,896 | https://en.wikipedia.org/wiki/Organocatalysis | In organic chemistry, organocatalysis is a form of catalysis in which the rate of a chemical reaction is increased by an organic catalyst. This "organocatalyst" consists of carbon, hydrogen, sulfur and other nonmetal elements found in organic compounds. Because of their similarity in composition and mode of action, organocatalysts are sometimes mistaken for enzymes, with which they share comparable effects on reaction rates and forms of catalysis.
Organocatalysts which display secondary amine functionality can be described as performing either enamine catalysis (by forming catalytic quantities of an active enamine nucleophile) or iminium catalysis (by forming catalytic quantities of an activated iminium electrophile). This mechanism is typical for covalent organocatalysis. Covalent binding of substrate normally requires high catalyst loading (for proline-catalysis typically 20–30 mol%).
Noncovalent interactions such as hydrogen-bonding facilitates low catalyst loadings (down to 0.001 mol%).
Organocatalysis offers several advantages. There is no need for metal-based catalysis, thus making a contribution to green chemistry. In this context, simple organic acids have been used as catalysts for the modification of cellulose in water on a multi-ton scale. When the organocatalyst is chiral, an avenue is opened to asymmetric catalysis; for example, the use of proline in aldol reactions is an example of chirality and green chemistry. Organic chemists David MacMillan and Benjamin List were both awarded the 2021 Nobel Prize in chemistry for their work on asymmetric organocatalysis.
Introduction
Regular achiral organocatalysts are based on nitrogen, such as piperidine, used in the Knoevenagel condensation; DMAP, used in esterifications; and DABCO, used in the Baylis-Hillman reaction. Thiazolium salts are employed in the Stetter reaction. These catalysts and reactions have a long history, but current interest in organocatalysis is focused on asymmetric catalysis with chiral catalysts, called asymmetric organocatalysis or enantioselective organocatalysis. A pioneering reaction developed in the 1970s is called the Hajos–Parrish–Eder–Sauer–Wiechert reaction. Between 1968 and 1997, there were only a few reports of the use of small organic molecules as catalysts for asymmetric reactions (the Hajos–Parrish reaction probably being the most famous), but these chemical studies were viewed more as unique chemical reactions than as integral parts of a larger, interconnected field.
In this reaction, naturally occurring chiral proline is the chiral catalyst in an aldol reaction. The starting material is an achiral triketone and it requires just 3% of proline to obtain the reaction product, a ketol in 93% enantiomeric excess. This is the first example of an amino acid-catalyzed asymmetric aldol reaction.
The asymmetric synthesis of the Wieland-Miescher ketone (1985) is also based on proline and another early application was one of the transformations in the total synthesis of Erythromycin by Robert B. Woodward (1981). A mini-review digest article focuses on selected recent examples of total synthesis of natural and pharmaceutical products using organocatalytic reactions.
Many chiral organocatalysts are an adaptation of chiral ligands (which together with a metal center also catalyze asymmetric reactions) and both concepts overlap to some degree.
A breakthrough in the field of organocatalysis came in 1997 when Yian Shi reported the first general, highly enantioselective organocatalytic reaction with the catalytic asymmetric epoxidation of trans- and trisubstituted olefins with chiral dioxiranes. Since that time, several different types of reactions have been developed.
Organocatalyst classes
Organocatalysts for asymmetric synthesis can be grouped in several classes:
Biomolecules: proline, phenylalanine. Secondary amines in general. The cinchona alkaloids, certain oligopeptides.
Synthetic catalysts derived from biomolecules.
Hydrogen bonding catalysts, including TADDOLS, derivatives of BINOL such as NOBIN, and organocatalysts based on thioureas
Triazolium salts as next-generation Stetter reaction catalysts
Examples of asymmetric reactions involving organocatalysts are:
Asymmetric Diels-Alder reactions
Asymmetric Michael reactions
Asymmetric Mannich reactions
Shi epoxidation
Organocatalytic transfer hydrogenation
Proline
Proline catalysis has been reviewed.
Imidazolidinone organocatalysis
Imidazolidinones are catalysts for many transformations such as asymmetric Diels-Alder reactions and Michael additions. Chiral catalysts induce asymmetric reactions, often with high enantioselectivities. These catalysts work by forming an iminium ion with carbonyl groups of α,β-unsaturated aldehydes (enals) and enones in a rapid chemical equilibrium. This iminium activation is similar to activation of carbonyl groups by a Lewis acid and both catalysts lower the substrate's LUMO:
The transient iminium intermediate is chiral which is transferred to the reaction product via chiral induction. The catalysts have been used in Diels-Alder reactions, Michael additions, Friedel-Crafts alkylations, transfer hydrogenations and epoxidations.
One example is the asymmetric synthesis of the drug warfarin (in equilibrium with the hemiketal) in a Michael addition of 4-hydroxycoumarin and benzylideneacetone:
A recent exploit is the vinyl alkylation of crotonaldehyde with an organotrifluoroborate salt:
For other examples of its use: see organocatalytic transfer hydrogenation and asymmetric Diels-Alder reactions.
Thiourea organocatalysis
A large group of organocatalysts incorporate the urea or the thiourea moiety. These catalytically effective (thio)urea derivatives termed (thio)urea organocatalysts provide explicit double hydrogen-bonding interactions to coordinate and activate H-bond accepting substrates.
Their current uses are restricted to asymmetric multicomponent reactions, including those involving Michael addition, asymmetric multicomponent reactions for the synthesis of spirocycles, asymmetric multicomponent reactions involving acyl Strecker reactions, asymmetric Petasis reactions, asymmetric Biginelli reactions, asymmetric Mannich reactions, asymmetric aza-Henry reactions, and asymmetric reductive coupling reactions.
References
External links
Catalysis
Organic chemistry | Organocatalysis | [
"Chemistry"
] | 1,445 | [
"Catalysis",
"Chemical kinetics",
"nan"
] |
8,088,302 | https://en.wikipedia.org/wiki/Residual%20chemical%20shift%20anisotropy | Residual chemical shift anisotropy (RCSA) is the difference between the chemical shift anisotropy (CSA) of aligned and non-aligned molecules. It is normally three orders of magnitude smaller than the static CSA, with values on the order of parts-per-billion (ppb). RCSA is useful for structural determination and it is among the new developments in NMR spectroscopy.
See also
Residual dipolar coupling
References
Further reading
Nuclear magnetic resonance spectroscopy
Nuclear chemistry
Nuclear physics
Asymmetry | Residual chemical shift anisotropy | [
"Physics",
"Chemistry",
"Astronomy"
] | 103 | [
"Spectroscopy stubs",
"Nuclear magnetic resonance",
"Spectrum (physical sciences)",
"Asymmetry",
"Nuclear magnetic resonance spectroscopy",
"Nuclear chemistry",
"Astronomy stubs",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Nuclear physics",
"nan",
"Molecular physics stubs"... |
8,088,814 | https://en.wikipedia.org/wiki/Isotropic%20line | In the geometry of quadratic forms, an isotropic line or null line is a line for which the quadratic form applied to the displacement vector between any pair of its points is zero. An isotropic line occurs only with an isotropic quadratic form, and never with a definite quadratic form.
Using complex geometry, Edmond Laguerre first suggested the existence of two isotropic lines through the point (α, β) that depend on the imaginary unit i:
First system: (y − β) = (x − α)i
Second system: (y − β) = −(x − α)i
Laguerre then interpreted these lines as geodesics:
An essential property of isotropic lines, and which can be used to define them, is the following: the distance between any two points of an isotropic line situated at a finite distance in the plane is zero. In other terms, these lines satisfy the differential equation ds² = 0. On an arbitrary surface one can study curves that satisfy this differential equation; these curves are the geodesic lines of the surface, and we also call them isotropic lines.
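To make the zero-distance property concrete, here is a short worked check (an illustrative sketch, not part of Laguerre's text) for the isotropic line y = ix through the origin:

% Squared distance between P_1 = (0, 0) and P_2 = (t, it) on the line y = ix:
\[
  d(P_1, P_2)^2 = (t - 0)^2 + (it - 0)^2 = t^2 + i^2 t^2 = t^2 - t^2 = 0,
\]
% so any two such points lie at zero distance from each other, as claimed above.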
In the complex projective plane, points are represented by homogeneous coordinates and lines by homogeneous coordinates . An isotropic line in the complex projective plane satisfies the equation:
In terms of the affine subspace z = 1, an isotropic line through the origin is y = ±ix.
In projective geometry, the isotropic lines are the ones passing through the circular points at infinity.
In the real orthogonal geometry of Emil Artin, isotropic lines occur in pairs:
A non-singular plane which contains an isotropic vector shall be called a hyperbolic plane. It can always be spanned by a pair N, M of vectors which satisfy N² = M² = 0, N·M = 1.
We shall call any such ordered pair N, M a hyperbolic pair. If P is a non-singular plane with orthogonal geometry and N ≠ 0 is an isotropic vector of P, then there exists precisely one M in P such that N, M is a hyperbolic pair. The vectors of the form aN and aM are then the only isotropic vectors of P.
Relativity
Isotropic lines have been used in cosmological writing to carry light. For example, in a mathematical encyclopedia, light consists of photons: "The worldline of a zero rest mass (such as a non-quantum model of a photon and other elementary particles of mass zero) is an isotropic line."
For isotropic lines through the origin, a particular point is a null vector, and the collection of all such isotropic lines forms the light cone at the origin.
Élie Cartan expanded the concept of isotropic lines to multivectors in his book on spinors in three dimensions.
References
Pete L. Clark, Quadratic forms chapter I: Witts theory from University of Miami in Coral Gables, Florida.
O. Timothy O'Meara (1963, 2000) Introduction to Quadratic Forms, page 94
Quadratic forms
Theory of relativity | Isotropic line | [
"Physics",
"Mathematics"
] | 566 | [
"Quadratic forms",
"Number theory",
"Theory of relativity"
] |
8,088,939 | https://en.wikipedia.org/wiki/Real%20point | In geometry, a real point is a point in the complex projective plane with homogeneous coordinates (x, y, z) for which there exists a nonzero complex number λ such that λx, λy, and λz are all real numbers.
This definition can be widened to a complex projective space of arbitrary finite dimension as follows:
(x₁, x₂, ..., x_{n+1}) are the homogeneous coordinates of a real point if there exists a nonzero complex number λ such that the coordinates of
(λx₁, λx₂, ..., λx_{n+1}) are all real.
A point which is not real is called an imaginary point.
Context
Geometries that are specializations of real projective geometry, such as Euclidean geometry, elliptic geometry or conformal geometry may be complexified, thus embedding the points of the geometry in a complex projective space, but retaining the identity of the original real space as special. Lines, planes etc. are expanded to the lines, etc. of the complex projective space. As with the inclusion of points at infinity and complexification of real polynomials, this allows some theorems to be stated more simply without exceptions and for a more regular algebraic analysis of the geometry.
Viewed in terms of homogeneous coordinates, a real vector space of homogeneous coordinates of the original geometry is complexified. A point of the original geometric space is defined by an equivalence class of homogeneous vectors of the form λu, where λ is a nonzero complex value and u is a real vector. A point of this form (which hence belongs to the original real space) is called a real point, whereas a point that has been added through the complexification, and thus does not have this form, is called an imaginary point.
Real subspace
A subspace of a projective space is real if it is spanned by real points.
Every imaginary point belongs to exactly one real line, the line through the point and its complex conjugate.
See also
Rational point
References
Projective geometry
Point (geometry) | Real point | [
"Mathematics"
] | 361 | [
"Point (geometry)",
"Geometry",
"Geometry stubs"
] |
8,089,191 | https://en.wikipedia.org/wiki/Quellung%20reaction | The quellung reaction, also called the Neufeld reaction, is a biochemical reaction in which antibodies bind to the bacterial capsule of Streptococcus pneumoniae, Klebsiella pneumoniae, Neisseria meningitidis, Bacillus anthracis, Haemophilus influenzae, Escherichia coli, and Salmonella. The antibody reaction allows these species to be visualized under a microscope. If the reaction is positive, the capsule becomes opaque and appears to enlarge.
Quellung is the German word for "swelling" and describes the microscopic appearance of pneumococcal or other bacterial capsules after their polysaccharide antigen has combined with a specific antibody. The antibody usually comes from serum taken from an immunized laboratory animal. As a result of this combination, and precipitation of the large, complex molecule formed, the capsule appears to swell, because of increased surface tension, and its outlines become demarcated.
The pneumococcal quellung reaction was first described in 1902 by the scientist Fred Neufeld, and applied only to Streptococcus pneumoniae, both as microscopic capsular swelling and macroscopic agglutination (clumping visible with the naked eye). It was initially an intellectual curiosity more than anything else, and could distinguish only the three pneumococcal serotypes known at that time. However, it acquired an important practical use with the advent of serum therapy to treat certain types of pneumococcal pneumonia in the 1920s because selection of the proper antiserum to treat an individual patient required correct identification of the infecting pneumococcal serotype, and the quellung reaction was the only method available to do this. Dr. Albert Sabin made modifications to Neufeld's technique so that it could be done more rapidly, and other scientists expanded the technique to identify 29 additional serotypes.
Application of Neufeld’s discoveries to other important areas of research came when Fred Griffith showed that pneumococci could transfer information to transform one serotype into another. Oswald Avery, Colin MacLeod, and Maclyn McCarty later showed that the transforming factor was deoxyribonucleic acid, or DNA.
Serum therapy for infectious diseases was displaced by antibiotics in the 1940s, but identification of specific serotypes remained important as the understanding of the epidemiology of pneumococcal infections still required their identification to determine where different serotypes spread, as well as the variable invasiveness of different serotypes. Understanding the prevalence of various serotypes was also critical to the development of pneumococcal vaccines to prevent invasive infections.
The quellung reaction has been used in diagnostic settings to identify the known capsular serotypes of Streptococcus pneumoniae (93 at the time, with 100 now described), but in recent years it has been challenged by the latex agglutination method, and further by molecular typing techniques such as the polymerase chain reaction, which detect DNA and therefore target genetic differences between serotypes.
References
Further reading
Park, I. H., Pritchard, D., Cartee, R., Brandao, A., Brandileone, M. and Nahm, M. Discovery of a new capsular serotype (6C) within serogroup 6 of Streptococcus pneumoniae. Journal of Clinical Microbiology. 2007;45:1225–1233.
Biochemistry methods
Staining
1902 introductions
1902 in biology | Quellung reaction | [
"Chemistry",
"Biology"
] | 745 | [
"Biochemistry methods",
"Biochemistry"
] |
8,089,957 | https://en.wikipedia.org/wiki/List%20of%20%C5%A0koda%20Auto%20engines | This is a list of Škoda Auto engines.
OHV (1964–2003)
Škoda OHV is a family of aluminium-block OHV engines developed by Škoda in 1964 for the 1000MB and manufactured, with various modifications, for a range of models until 2003. All versions were four-cylinder engines with an aluminium block, a three-bearing crankshaft and wet liners, which were made in various bores to allow a variety of displacements. Until 1987, all engines used a cast-iron five-port cylinder head; after that year, the 135 and 136 versions and all subsequent variants used an aluminium eight-port head.
Cast-iron head engines
988 cc (1964-1977)
bore , stroke , compression ratio 8.3:1,
32 kW/4650 RPM, torque 68 Nm/3000 RPM
Used in:
Škoda 1000 MB (1964–1969)
Škoda 100 (1969–1976)
1107 cc (1967-1977)
bore , stroke , compression ratio 9.5:1,
38 kW/4800 RPM, torque 83 Nm/3000 RPM
Used in:
Škoda 1100 MB (1967–1969)
Škoda 1100 MBX
Škoda 110 (1969–1976)
Škoda 110R Coupé (1970–1980)
Škoda 742.10 - 1046 cc (1976-1989)
bore , stroke
Used in:
Škoda 105 (1976–1989)
Škoda 742.12 - 1174 cc (1976-1990)
bore , stroke
Used in:
Škoda 120 (1976–1990)
Škoda 125 (1988–1990)
742.12x (1977-1986)
A more powerful variant of the 1.2-litre 742.12 engine, with a compression ratio of 9.5:1.
40.5 kW/5200 RPM, torque 85.5 Nm/3250 RPM
Used in:
Škoda 120 LS/GLS (1977–1987)
Škoda Garde/Rapid 120 (1982-1986)
Škoda 742-13 - 1289 cc (1984-1988)
bore , stroke , compression ratio 9.5:1
43 kW/5000 RPM, torque 97 Nm/2800 RPM
Used in:
Škoda 130 (1984–1988)
Škoda Rapid 130 (1984-1988)
Aluminium head engines
For the 1987 model year, Škoda made major modifications to the 130 engine to meet new, stricter emission standards. The new engine had bimetallic pistons to lower oil consumption and a new 8-port cylinder head, which improved power output and allowed the engine to run on unleaded fuel. To fit the new cylinder head, the engine block also had to be modified. In 1993, this engine became available with a Bosch single-point injection system and a catalytic converter. In 1996, the engine was further modified to meet Euro 2 emission standards: modifications included a multipoint injection system, larger camshaft bearings, and larger intake/exhaust valves, which further improved the power output and efficiency of the engine.
In 1999, the engine received its last major modification. The engine block was reinforced and modified to fit a larger crankshaft, valvetrain and lubrication system was modified to fit hydraulic tappets. As there was no need for a distributor (MPI using an ignition module mounted to the cylinder head), the valvetrain cover (hitherto unchanged since 1964) could be made narrower, moving the accessory belt closer to engine. This also allowed the engine to be fitted to the smaller VW Lupo. The injection system was upgraded, now using two lambda probes and a drive-by-wire throttle valve. These modifications allowed the engine to meet Euro 4 emission standards, before production ended in the spring of 2003.
997 cc (1999-2001)
bore , stroke
Škoda Fabia
VW Lupo
Seat Arosa
1289 cc (1987-2003)
bore , stroke
Škoda 135/136 (1987–1990)
Škoda Rapid 135/136 (1987–1990)
Škoda Favorit (1987–1995)
Škoda Felicia (1994–2000)
1397 cc (2000-2003)
bore , stroke
Škoda Fabia (2000–2003)
Škoda Octavia (1999–2001)
See also
Škoda Auto
List of Volkswagen Group engines
References
External links
Skoda-Auto.com - official website
Engines
Lists of automobile engines
Engines by maker | List of Škoda Auto engines | [
"Technology"
] | 884 | [
"Engines",
"Engines by maker"
] |
8,090,966 | https://en.wikipedia.org/wiki/Erosion%20corrosion%20of%20copper%20water%20tubes | Erosion corrosion, also known as impingement damage, is the combined effect of corrosion and erosion caused by rapid flowing turbulent water. It is probably the second most common cause of copper tube failures behind Type 1 pitting which is also known as Cold Water Pitting of Copper Tube.
Copper Water Tubes
Copper tubes have been used to distribute drinking water within buildings for many years, and hundreds of miles are installed throughout Europe every year. The long life of copper when exposed to natural waters is a result of its thermodynamic stability, its high resistance to reacting with the environment, and the formation of insoluble corrosion products that insulate the metal from the environment. The corrosion rate of copper in most drinkable waters is less than 2.5 μm/year, at this rate a 15 mm tube with a wall thickness of 0.7 mm would last for about 280 years. In some soft waters the general corrosion rate may increase to 12.5 μm/year, but even at this rate it would take over 50 years to perforate the same tube.
Occurrence
If the general water speed or the degree of local turbulence in an installation is high, the protective film that would normally be formed on a copper tube as a result of slight initial corrosion, may be torn off the surface locally, permitting further corrosion to take place at that point. If this process continues it can produce deep localised attack of the type known as erosion-corrosion or impingement damage. The actual attack on the metal is by the corrosive action of the water to which it is exposed while the erosive factor is the mechanical removal of the corrosion product from the surface.
Impingement attack produces highly characteristic water-swept pits, which are often horseshoe shaped, or it can produce broader areas of attack. The leading edge of the pit is frequently undercut by the swirling action of the water. Usually, the surface of the metal within the pits or areas of attack is smooth and carries no substantial corrosion product. Erosion-corrosion is known to occur in pumped-circulation hot water distribution systems, and even in cold water distribution systems, if the water velocities are too high. The factors influencing the attack include the chemical character of the water passing through the system, the temperature, the average water velocity in the system and the presence of any local features likely to induce turbulence in the water stream.
It is unusual for the general water velocity in a system to be so high that impingement attack occurs throughout the whole of the copper pipework. More commonly, the velocity is just sufficiently low for satisfactory protective films to be formed and to remain in position on most of the system, with impingement damage more likely to occur where there is an abrupt change in the direction of water flow giving rise to a high degree of turbulence, such as at tee pieces and elbow fittings. It is not generally realised how great an effect small obstructions can have on the flow pattern of water in a pipe-work system and the extent to which they can induce turbulence and cause corrosion-erosion. For example, it is most important, as far as possible, to ensure that copper tubes cut with a tube cutter are deburred before making the joint. Also a gap between the tube end and the stop in the fitting, due to the tube not having been cut to the correct length and fully inserted into the socket of the fitting, can also induce turbulence in the water stream.
Recommendations
The rate of impingement attack on copper also depends to some extent on the temperature of the water. The maximum velocities for fresh waters at different temperatures recommended in Sweden are given in the table below. These figures are for aerated waters of pH not less than about 7.
Recommended Maximum Water Velocities at Different Temperatures for Copper (m/s)
§ These velocities give a risk of impingement attack and are acceptable only for small bore connections to taps, flushing cisterns etc., through which water flow is intermittent.
BS 6700 gives the following maximum water velocities although it does note that these are currently under investigation and the velocities specified will be amended if the results of this investigation so require.
The minimum water speed at which copper pipes suffer impingement attack depends also to some extent on water composition. Aggressive waters that tend to be cupro-solvent are the most likely to give rise to impingement attack. Installations in large buildings where flow rates may be high and water is in continuous circulation are much more susceptible to attack than ordinary domestic installations. A high mineral content or a pH below 7 is likely to increase the possibility of corrosion-erosion occurring while a positive Langelier Index and consequent tendency to deposit a calcium carbonate scale is generally beneficial. The presence or absence of colloidal organic matter is also probably of some importance.
Remedial measures for impingement attack include modifications to the system to reduce the average water velocity, e.g. by using larger diameter tubes or, if appropriate, to lower the pump speed, and/or to redesign the part of the installation concerned to eliminate the cause of local turbulence, e.g. by using slow or swept bends and tee fittings rather than elbows and square tees. It is important to minimise the possibility of any local turbulence occurring by ensuring that the ends of tubes cut with a tube cutter are deburred and that the tubes are inserted fully to the stops in the fitting before the joints are made, as referred to earlier in this section. In some cases, where the above approaches are not possible, the length of copper tube affected can sometimes be replaced by materials more resistant to corrosion-erosion, e.g. 90/10 copper-nickel (BS Designation CN102) using appropriate fittings, or stainless steel to BS 4127:1994.
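The velocity check described above lends itself to a small calculation. The following Python sketch is illustrative only: the temperature/velocity limits in it are placeholders standing in for the recommended figures discussed above (which depend on the standard consulted and on the water chemistry), and the helper names are ours, so substitute real values from the applicable guidance (e.g. BS 6700) before any actual use.

import math

# Placeholder limits: (max water temperature in deg C, assumed max velocity in m/s).
ASSUMED_LIMITS = [(10, 4.0), (50, 3.0), (70, 2.5), (90, 2.0)]

def assumed_max_velocity(temperature_c):
    """Return the assumed velocity limit for a given water temperature."""
    for t_max, v_max in ASSUMED_LIMITS:
        if temperature_c <= t_max:
            return v_max
    return ASSUMED_LIMITS[-1][1]

def mean_velocity(flow_m3_per_s, bore_m):
    """Mean water velocity (m/s) for a volumetric flow through a circular bore."""
    return flow_m3_per_s / (math.pi * (bore_m / 2.0) ** 2)

# Example: 0.25 L/s of 60 deg C water in a 15 mm tube with 0.7 mm wall (13.6 mm bore).
v = mean_velocity(0.25e-3, 0.0136)
print(f"mean velocity {v:.2f} m/s vs assumed limit {assumed_max_velocity(60):.1f} m/s")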
See also
Oxygenated treatment
Flow-accelerated corrosion
References
External links
Copper pipe Corrosion Copper Pipe Corrosion Theory and Mechanisms
Corrosion
Copper
Water | Erosion corrosion of copper water tubes | [
"Chemistry",
"Materials_science",
"Environmental_science"
] | 1,211 | [
"Hydrology",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Water",
"Materials degradation"
] |
8,091,561 | https://en.wikipedia.org/wiki/Molecular%20cellular%20cognition | Molecular cellular cognition (MCC) is a branch of neuroscience that involves the study of cognitive processes with approaches that integrate molecular, cellular and behavioral mechanisms. Key goals of MCC studies include the derivation of molecular and cellular explanations of cognitive processes, as well as finding mechanisms and treatments for cognitive disorders.
Although closely connected with behavioral genetics, MCC emphasizes the integration of molecular and cellular explanations of behavior, instead of focusing on the connections between genes and behavior.
Unlike cognitive neuroscience, which historically has focused on the connection between human brain systems and behavior, the field of MCC has used model organisms, such as mice, to study how molecular (i.e. receptor, kinase activation, phosphatase regulation), intra-cellular (i.e. dendritic processes), and inter-cellular processes (i.e. synaptic plasticity; network representations such as place fields) modulate cognitive function.
Methods employed in MCC include (but are not limited to) transgenic organisms (i.e. mice), viral vectors, pharmacology, in vitro and in vivo electrophysiology, optogenetics, in vivo imaging, and behavioral analysis. Modeling has become an essential component of the field because of the complexity of the multilevel data generated.
Scientific roots
The field of MCC has its roots in the pioneering pharmacological studies of the role of NMDA receptor in long-term potentiation and spatial learning and in studies that used knockout mice to look at the role of the alpha calcium calmodulin kinase II and FYN kinase in hippocampal long-term potentiation and spatial learning. The field has since expanded to include a large array of molecules including CREB.
Foundation of the field
MCC became an organized field with the formation of the Molecular Cellular Cognition Society, an organization with no membership fees and meetings that emphasize the participation of junior scientists. Its first meeting took place in Orlando, Florida, on November 1, 2002. Since then, the society has organized numerous meetings in North America, Europe, and Asia, and has grown to include more than 4000 members.
References
External links
Molecular and Cellular Cognition Society
Cognition
Molecular neuroscience | Molecular cellular cognition | [
"Chemistry"
] | 430 | [
"Molecular neuroscience",
"Molecular biology"
] |
8,092,026 | https://en.wikipedia.org/wiki/Excitation%20temperature | In statistical mechanics, the excitation temperature (T_ex) is defined for a population of particles via the Boltzmann factor. It satisfies
n_u / n_l = (g_u / g_l) exp(−ΔE / (k T_ex)),
where
n_u is the number of particles in an upper (e.g. excited) state;
g_u is the statistical weight of those upper-state particles;
n_l is the number of particles in a lower (e.g. ground) state;
g_l is the statistical weight of those lower-state particles;
exp is the exponential function;
k is the Boltzmann constant;
ΔE is the difference in energy between the upper and lower states.
Thus the excitation temperature is the temperature at which we would expect to find a system with this ratio of level populations. However it has no actual physical meaning except when in local thermodynamic equilibrium. The excitation temperature can even be negative for a system with inverted levels (such as a maser).
In observations of the 21 cm line of hydrogen, the apparent value of the excitation temperature is often called the "spin temperature".
References
Temperature | Excitation temperature | [
"Physics",
"Chemistry"
] | 208 | [
"Thermodynamics stubs",
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Physical chemistry stubs"
] |
8,092,698 | https://en.wikipedia.org/wiki/Euclidean%20distance%20matrix | In mathematics, a Euclidean distance matrix is an n×n matrix representing the spacing of a set of n points in Euclidean space.
For points x_1, x_2, ..., x_n in k-dimensional space R^k, the elements of their Euclidean distance matrix A are given by squares of distances between them.
That is
A = (a_ij), where a_ij = d_ij² = ‖x_i − x_j‖² and ‖·‖ denotes the Euclidean norm on R^k.
In the context of (not necessarily Euclidean) distance matrices, the entries are usually defined directly as distances, not their squares.
However, in the Euclidean case, squares of distances are used to avoid computing square roots and to simplify relevant theorems and algorithms.
Euclidean distance matrices are closely related to Gram matrices (matrices of dot products, describing norms of vectors and angles between them).
The latter are easily analyzed using methods of linear algebra.
This allows to characterize Euclidean distance matrices and recover the points that realize it.
A realization, if it exists, is unique up to rigid transformations, i.e. distance-preserving transformations of Euclidean space (rotations, reflections, translations).
In practical applications, distances are noisy measurements or come from arbitrary dissimilarity estimates (not necessarily metric).
The goal may be to visualize such data by points in Euclidean space whose distance matrix approximates a given dissimilarity matrix as well as possible — this is known as multidimensional scaling.
Alternatively, given two sets of data already represented by points in Euclidean space, one may ask how similar they are in shape, that is, how closely can they be related by a distance-preserving transformation — this is Procrustes analysis.
Some of the distances may also be missing or come unlabelled (as an unordered set or multiset instead of a matrix), leading to more complex algorithmic tasks, such as the graph realization problem or the turnpike problem (for points on a line).
Properties
By the fact that Euclidean distance is a metric, the matrix A has the following properties.
All elements on the diagonal of A are zero (i.e. it is a hollow matrix); hence the trace of A is zero.
A is symmetric (i.e. a_ij = a_ji).
√a_ij ≤ √a_ik + √a_kj (by the triangle inequality)
In dimension k, a Euclidean distance matrix has rank less than or equal to k + 2. If the points x_1, x_2, ..., x_n are in general position, the rank is exactly min(n, k + 2).
Distances can be shrunk by any power to obtain another Euclidean distance matrix. That is, if A = (a_ij) is a Euclidean distance matrix, then (a_ij^s) is a Euclidean distance matrix for every 0 < s < 1.
Relation to Gram matrix
The Gram matrix of a sequence of points x_1, x_2, ..., x_n in k-dimensional space R^k
is the n×n matrix G = (g_ij) of their dot products (here a point x_i is thought of as a vector from 0 to that point):
g_ij = x_i · x_j = ‖x_i‖ ‖x_j‖ cos θ, where θ is the angle between the vectors x_i and x_j.
In particular
g_ii = ‖x_i‖²
is the square of the distance of x_i from 0.
Thus the Gram matrix describes norms and angles of vectors (from 0 to) x_1, x_2, ..., x_n.
Let X be the k×n matrix containing x_1, x_2, ..., x_n as columns.
Then
G = XᵀX, because g_ij = x_iᵀ x_j (seeing x_i as a column vector).
Matrices that can be decomposed as XᵀX, that is, Gram matrices of some sequence of vectors (columns of X), are well understood — these are precisely positive semidefinite matrices.
To relate the Euclidean distance matrix to the Gram matrix, observe that
a_ij = ‖x_i − x_j‖² = ‖x_i‖² + ‖x_j‖² − 2 x_i · x_j = g_ii + g_jj − 2 g_ij.
That is, the norms and angles determine the distances.
Note that the Gram matrix contains additional information: distances from 0.
Conversely, distances a_ij between pairs of points determine dot products between vectors x_i − x_1 (2 ≤ i ≤ n):
(x_i − x_1) · (x_j − x_1) = ½ (a_1i + a_1j − a_ij)
(this is known as the polarization identity).
Characterizations
For an n×n matrix A, a sequence of points x_1, x_2, ..., x_n in k-dimensional Euclidean space R^k
is called a realization of A in R^k if A is their Euclidean distance matrix.
One can assume without loss of generality that x_1 = 0 (because translating by −x_1 preserves distances). The theorem then states that A is the Euclidean distance matrix of points x_1 = 0, x_2, ..., x_n in R^k if and only if the matrix G with entries g_ij = ½ (a_1i + a_1j − a_ij), 2 ≤ i, j ≤ n, is positive semidefinite of rank at most k.
This follows from the previous discussion because G is positive semidefinite of rank at most k if and only if it can be decomposed as G = XᵀX for some matrix X with k rows.
Moreover, the columns of X (together with x_1 = 0) give a realization in R^k.
Therefore, any method to decompose G allows to find a realization.
The two main approaches are variants of Cholesky decomposition or using spectral decompositions to find the principal square root of G, see Definite matrix#Decomposition.
The statement of the theorem distinguishes the first point x_1. A more symmetric variant of the same theorem is the following:
Other characterizations involve Cayley–Menger determinants.
In particular, these allow to show that a symmetric hollow n×n matrix is realizable in R^k if and only if every (k + 3)×(k + 3) principal submatrix is.
In other words, a semimetric on finitely many points is embeddable isometrically in R^k if and only if every k + 3 points are.
In practice, the definiteness or rank conditions may fail due to numerical errors, noise in measurements, or due to the data not coming from actual Euclidean distances.
Points that realize optimally similar distances can then be found by semidefinite approximation (and low rank approximation, if desired) using linear algebraic tools such as singular value decomposition or semidefinite programming.
This is known as multidimensional scaling.
Variants of these methods can also deal with incomplete distance data.
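The double-centering recipe behind classical multidimensional scaling is short enough to sketch directly. The following Python/NumPy code is a minimal illustration of the Gram-matrix route described above; the function and variable names are ours, not taken from any particular library:

import numpy as np

def classical_mds(D2, dim):
    """Recover point coordinates from a matrix of *squared* distances D2.

    Double centering yields the Gram matrix of centered points; a spectral
    square root then gives coordinates. Clipping negative eigenvalues is the
    usual semidefinite approximation when D2 is noisy or not exactly Euclidean.
    """
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    G = -0.5 * J @ D2 @ J                        # Gram matrix of centered points
    w, V = np.linalg.eigh(G)                     # eigenvalues in ascending order
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]    # keep the top `dim` components
    return V * np.sqrt(np.clip(w, 0.0, None))   # rows are recovered coordinates

# Round trip on random points (recovery is unique only up to a rigid motion).
X = np.random.default_rng(0).normal(size=(6, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(D2, dim=2)
D2_rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_rec))  # True: same distances, possibly rotated points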
Unlabeled data, that is, a set or multiset of distances not assigned to particular pairs, is much more difficult to deal with.
Such data arises, for example, in DNA sequencing (specifically, genome recovery from partial digest) or phase retrieval.
Two sets of points are called homometric if they have the same multiset of distances (but are not necessarily related by a rigid transformation).
Deciding whether a given multiset of distances can be realized in a given dimension is strongly NP-hard.
In one dimension this is known as the turnpike problem; it is an open question whether it can be solved in polynomial time.
When the multiset of distances is given with error bars, even the one dimensional case is NP-hard.
Nevertheless, practical algorithms exist for many cases, e.g. random points.
Uniqueness of representations
Given a Euclidean distance matrix, the sequence of points that realize it is unique up to rigid transformations – these are isometries of Euclidean space: rotations, reflections, translations, and their compositions.
Rigid transformations preserve distances so one direction is clear.
Suppose the distances ‖x_i − x_j‖ and ‖y_i − y_j‖ are equal for all pairs i, j.
Without loss of generality we can assume x_1 = y_1 = 0 by translating the points by −x_1 and −y_1, respectively.
Then the Gram matrix of the remaining vectors x_i is identical to the Gram matrix of the vectors y_i (2 ≤ i ≤ n).
That is, XᵀX = YᵀY, where X and Y are the k×n matrices containing the respective vectors as columns.
This implies there exists an orthogonal k×k matrix Q such that QX = Y, see Definite symmetric matrix#Uniqueness up to unitary transformations.
Q describes an orthogonal transformation of R^k (a composition of rotations and reflections, without translations) which maps x_i to y_i (and 0 to 0).
The final rigid transformation is described by T(x) = Q(x − x_1) + y_1.
In applications, when distances don't match exactly, Procrustes analysis aims to relate two point sets as close as possible via rigid transformations, usually using singular value decomposition.
The ordinary Euclidean case is known as the orthogonal Procrustes problem or Wahba's problem (when observations are weighted to account for varying uncertainties).
Examples of applications include determining orientations of satellites, comparing molecule structure (in cheminformatics), protein structure (structural alignment in bioinformatics), or bone structure (statistical shape analysis in biology).
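The SVD solution of the orthogonal Procrustes problem (often called the Kabsch algorithm) can be sketched in a few lines of Python/NumPy. This is an illustrative implementation under our own naming, restricted to proper rotations:

import numpy as np

def orthogonal_procrustes(A, B):
    """Find rotation R and translation t minimizing ||R @ A + t - B|| (Kabsch).

    A and B are d x n arrays of corresponding points (columns are points).
    A sketch of the SVD solution; allowing reflections would drop the sign fix.
    """
    a0, b0 = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - b0) @ (A - a0).T)
    S = np.diag([1.0] * (A.shape[0] - 1) + [np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt                       # proper rotation (det R = +1)
    return R, b0 - R @ a0                # translation

# Check: recover a known rigid motion from the point correspondences.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = R_true @ A + np.array([[1.0], [2.0], [3.0]])
R, t = orthogonal_procrustes(A, B)
print(np.allclose(R, R_true), np.allclose(t, [[1.0], [2.0], [3.0]]))  # True True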
See also
Adjacency matrix
Coplanarity
Distance geometry
Hollow matrix
Distance matrix
Euclidean random matrix
Classical multidimensional scaling, a visualization technique that approximates an arbitrary dissimilarity matrix by a Euclidean distance matrix
Cayley–Menger determinant
Semidefinite embedding
Notes
References
Matrices
Distance | Euclidean distance matrix | [
"Physics",
"Mathematics"
] | 1,519 | [
"Physical quantities",
"Distance",
"Quantity",
"Mathematical objects",
"Size",
"Matrices (mathematics)",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
8,093,356 | https://en.wikipedia.org/wiki/Compton%20edge | In gamma-ray spectrometry, the Compton edge is a feature of the measured gamma-ray energy spectrum that results from Compton scattering in the detector material. It corresponds to the highest energy that can be transferred to a weakly bound electron of a detector's atom by an incident photon in a single scattering process, and manifests itself as a ridge in the measured gamma-ray energy spectrum. It is a measurement phenomenon (meaning that the incident radiation does not possess this feature), which is particularly evident in gamma-ray energy spectra of monoenergetic photons.
When a gamma ray scatters within a scintillator or a semiconductor detector and the scattered photon escapes from the detector's volume, only a fraction of the incident energy is deposited in the detector. This fraction depends on the scattering angle of the photon, leading to a spectrum of energies corresponding to the entire range of possible scattering angles. The highest energy that can be deposited, corresponding to full backscatter, is called the Compton edge. In mathematical terms, the Compton edge is the inflection point of the high-energy side of the Compton region.
Background
In a Compton scattering process, an incident photon collides with a weakly bound electron, leading to its release from the atomic shell. The energy of the outgoing photon, E′, is given by the formula:
E′ = E / (1 + (E / (m_e c²)) (1 − cos θ)), where
E is the energy of the incident photon.
m_e is the mass of the electron.
c is the speed of light.
θ is the angle of deflection of the photon.
(note that the above formula does not account for the electron binding energy, which can play a non-negligible role for low-energy gamma rays).
The energy transferred to the electron, E_T = E − E′, varies with the photon's scattering angle. For θ equal to zero there is no energy transfer, while the maximum energy transfer occurs for θ equal to 180 degrees (backscattering).
In a single scattering act, it is impossible for the photon to transfer any more energy via this process; thus, there is a sharp cutoff at this energy, leading to the name Compton edge. If multiple photopeaks are present in the spectrum, each of them will have its own Compton edge. The part of the spectrum between the Compton edge and the photopeak is due to multiple subsequent Compton-scattering processes.
The continuum of energies corresponding to Compton scattered electrons is known as the Compton continuum.
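Setting θ equal to 180 degrees in the formula above gives the Compton edge directly. A small Python sketch follows (illustrative only; the function names are ours and the electron rest energy is hard-coded in keV):

import math

M_E_C2_KEV = 511.0  # electron rest energy m_e c^2, keV

def scattered_photon_energy(e_kev, theta_rad):
    """Compton formula: energy of a photon of energy e_kev scattered through theta."""
    return e_kev / (1.0 + (e_kev / M_E_C2_KEV) * (1.0 - math.cos(theta_rad)))

def compton_edge(e_kev):
    """Maximum energy transferred to the electron: full backscatter, theta = pi."""
    return e_kev - scattered_photon_energy(e_kev, math.pi)

print(compton_edge(661.7))  # Cs-137 gamma line: edge at about 477 keV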
References
See also
Gamma spectroscopy
Compton suppression
Spectroscopy | Compton edge | [
"Physics",
"Chemistry"
] | 495 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
247,261 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem | The Riemann–Roch theorem is an important theorem in mathematics, specifically in complex analysis and algebraic geometry, for the computation of the dimension of the space of meromorphic functions with prescribed zeros and allowed poles. It relates the complex analysis of a connected compact Riemann surface with the surface's purely topological genus g, in a way that can be carried over into purely algebraic settings.
Initially proved as Riemann's inequality by Bernhard Riemann in 1857, the theorem reached its definitive form for Riemann surfaces after the work of Riemann's short-lived student Gustav Roch in 1865. It was later generalized to algebraic curves, to higher-dimensional varieties and beyond.
Preliminary notions
A Riemann surface is a topological space that is locally homeomorphic to an open subset of , the set of complex numbers. In addition, the transition maps between these open subsets are required to be holomorphic. The latter condition allows one to transfer the notions and methods of complex analysis dealing with holomorphic and meromorphic functions on to the surface . For the purposes of the Riemann–Roch theorem, the surface is always assumed to be compact. Colloquially speaking, the genus of a Riemann surface is its number of handles; for example the genus of the Riemann surface shown at the right is three. More precisely, the genus is defined as half of the first Betti number, i.e., half of the -dimension of the first singular homology group with complex coefficients. The genus classifies compact Riemann surfaces up to homeomorphism, i.e., two such surfaces are homeomorphic if and only if their genus is the same. Therefore, the genus is an important topological invariant of a Riemann surface. On the other hand, Hodge theory shows that the genus coincides with the -dimension of the space of holomorphic one-forms on , so the genus also encodes complex-analytic information about the Riemann surface.
A divisor is an element of the free abelian group on the points of the surface. Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients.
Any meromorphic function f gives rise to a divisor denoted (f) defined as
(f) := Σ_{z_ν ∈ R(f)} s_ν z_ν,
where R(f) is the set of all zeroes and poles of f, and s_ν is given by
s_ν := a if z_ν is a zero of order a, and s_ν := −a if z_ν is a pole of order a.
The set is known to be finite; this is a consequence of being compact and the fact that the zeros of a (non-zero) holomorphic function do not have an accumulation point. Therefore, is well-defined. Any divisor of this form is called a principal divisor. Two divisors that differ by a principal divisor are called linearly equivalent. The divisor of a meromorphic 1-form is defined similarly. A divisor of a global meromorphic 1-form is called the canonical divisor (usually denoted ). Any two meromorphic 1-forms will yield linearly equivalent divisors, so the canonical divisor is uniquely determined up to linear equivalence (hence "the" canonical divisor).
The symbol deg(D) denotes the degree (occasionally also called index) of the divisor D, i.e. the sum of the coefficients occurring in D. It can be shown that the divisor of a global meromorphic function always has degree 0, so the degree of a divisor depends only on its linear equivalence class.
The number ℓ(D) is the quantity that is of primary interest: the dimension (over C) of the vector space of meromorphic functions h on the surface, such that all the coefficients of (h) + D are non-negative. Intuitively, we can think of this as being all meromorphic functions whose poles at every point are no worse than the corresponding coefficient in D; if the coefficient in D at z is negative, then we require that h has a zero of at least that multiplicity at z – if the coefficient in D is positive, h can have a pole of at most that order. The vector spaces for linearly equivalent divisors are naturally isomorphic through multiplication with the global meromorphic function (which is well-defined up to a scalar).
Statement of the theorem
The Riemann–Roch theorem for a compact Riemann surface of genus g with canonical divisor K states
ℓ(D) − ℓ(K − D) = deg(D) − g + 1.
Typically, the number ℓ(D) is the one of interest, while ℓ(K − D) is thought of as a correction term (also called index of speciality) so the theorem may be roughly paraphrased by saying
dimension − correction = degree − genus + 1.
Because it is the dimension of a vector space, the correction term ℓ(K − D) is always non-negative, so that
ℓ(D) ≥ deg(D) − g + 1.
This is called Riemann's inequality. Roch's part of the statement is the description of the possible difference between the sides of the inequality. On a general Riemann surface of genus g, K has degree 2g − 2, independently of the meromorphic form chosen to represent the divisor. This follows from putting D = K in the theorem. In particular, as long as D has degree at least 2g − 1, the correction term is 0, so that
ℓ(D) = deg(D) − g + 1.
The theorem will now be illustrated for surfaces of low genus. There are also a number other closely related theorems: an equivalent formulation of this theorem using line bundles and a generalization of the theorem to algebraic curves.
Examples
The theorem will be illustrated by picking a point P on the surface in question and regarding the sequence of numbers
ℓ(n · P), n ≥ 0,
i.e., the dimension of the space of functions that are holomorphic everywhere except at P, where the function is allowed to have a pole of order at most n. For n = 0, the functions are thus required to be entire, i.e., holomorphic on the whole surface X. By Liouville's theorem, such a function is necessarily constant. Therefore, ℓ(0) = 1. In general, the sequence ℓ(n · P) is an increasing sequence.
Genus zero
The Riemann sphere (also called complex projective line) is simply connected and hence its first singular homology is zero. In particular its genus is zero. The sphere can be covered by two copies of C, with transition map being given by
z ↦ 1/z.
Therefore, the form ω = dz on one copy of C extends to a meromorphic form on the Riemann sphere: it has a double pole at infinity, since
d(1/z) = −(1/z²) dz.
Thus, its canonical divisor is K := div(ω) = −2P (where P is the point at infinity).
Therefore, the theorem says that the sequence ℓ(n · P) reads
1, 2, 3, ... .
This sequence can also be read off from the theory of partial fractions. Conversely, if this sequence starts this way, then g must be zero.
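As a worked check of the theorem in this case (a sketch using the notation above), one can verify the sequence directly:

\[
  \ell(n \cdot P) - \ell(K - n \cdot P) = \deg(n \cdot P) - g + 1 = n + 1,
\]
% and since \deg(K - n \cdot P) = -2 - n < 0 for n >= 0, a divisor of negative
% degree admits no nonzero sections, so \ell(K - n \cdot P) = 0 and
% \ell(n \cdot P) = n + 1, reproducing the sequence 1, 2, 3, ... above.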
Genus one
The next case is a Riemann surface of genus g = 1, such as a torus C/Λ, where Λ is a two-dimensional lattice (a group isomorphic to Z²). Its genus is one: its first singular homology group is freely generated by two loops, as shown in the illustration at the right. The standard complex coordinate z on C yields a one-form ω = dz on X that is everywhere holomorphic, i.e., has no poles at all. Therefore, K, the divisor of ω, is zero.
On this surface, this sequence is
1, 1, 2, 3, 4, 5 ... ;
and this characterises the case g = 1. Indeed, for D = 0, ℓ(K − D) = ℓ(0) = 1, as was mentioned above. For D = n · P with n > 0, the degree of K − D is strictly negative, so that the correction term is 0. The sequence of dimensions can also be derived from the theory of elliptic functions.
Genus two and beyond
For g = 2, the sequence mentioned above is
1, 1, ?, 2, 3, ... .
It is shown from this that the ? term of degree 2 is either 1 or 2, depending on the point. It can be proven that in any genus 2 curve there are exactly six points whose sequences are 1, 1, 2, 2, ... and the rest of the points have the generic sequence 1, 1, 1, 2, ... In particular, a genus 2 curve is a hyperelliptic curve. For g > 2 it is always true that at most points the sequence starts with g + 1 ones and there are finitely many points with other sequences (see Weierstrass points).
Riemann–Roch for line bundles
Using the close correspondence between divisors and holomorphic line bundles on a Riemann surface, the theorem can also be stated in a different, yet equivalent way: let L be a holomorphic line bundle on X. Let H⁰(X, L) denote the space of holomorphic sections of L. This space will be finite-dimensional; its dimension is denoted h⁰(X, L). Let K denote the canonical bundle on X. Then, the Riemann–Roch theorem states that
h⁰(X, L) − h⁰(X, L⁻¹ ⊗ K) = deg(L) + 1 − g.
The theorem of the previous section is the special case of when L is a point bundle.
The theorem can be applied to show that there are g linearly independent holomorphic sections of K, or one-forms on X, as follows. Taking L to be the trivial bundle, h⁰(X, L) = 1, since the only holomorphic functions on X are constants. The degree of L is zero, and L⁻¹ is the trivial bundle. Thus,
1 − h⁰(X, K) = 1 − g.
Therefore, h⁰(X, K) = g, proving that there are g holomorphic one-forms.
Degree of canonical bundle
Since the canonical bundle K has h⁰(X, K) = g, applying Riemann–Roch to L = K gives
h⁰(X, K) − h⁰(X, K⁻¹ ⊗ K) = deg(K) + 1 − g,
which can be rewritten as
g − 1 = deg(K) + 1 − g,
hence the degree of the canonical bundle is deg(K) = 2g − 2.
Riemann–Roch theorem for algebraic curves
Every item in the above formulation of the Riemann–Roch theorem for divisors on Riemann surfaces has an analogue in algebraic geometry. The analogue of a Riemann surface is a non-singular algebraic curve C over a field k. The difference in terminology (curve vs. surface) is because the dimension of a Riemann surface as a real manifold is two, but one as a complex manifold. The compactness of a Riemann surface is paralleled by the condition that the algebraic curve be complete, which is equivalent to being projective. Over a general field k, there is no good notion of singular (co)homology. The so-called geometric genus is defined as
g := dim_k Γ(C, Ω¹_C),
i.e., as the dimension of the space of globally defined (algebraic) one-forms (see Kähler differential). Finally, meromorphic functions on a Riemann surface are locally represented as fractions of holomorphic functions. Hence they are replaced by rational functions which are locally fractions of regular functions. Thus, writing ℓ(D) for the dimension (over k) of the space of rational functions on the curve whose poles at every point are not worse than the corresponding coefficient in D, the very same formula as above holds:
ℓ(D) − ℓ(K − D) = deg(D) − g + 1,
where C is a projective non-singular algebraic curve over an algebraically closed field k. In fact, the same formula holds for projective curves over any field, except that the degree of a divisor needs to take into account multiplicities coming from the possible extensions of the base field and the residue fields of the points supporting the divisor. Finally, for a proper curve over an Artinian ring, the Euler characteristic of the line bundle associated to a divisor is given by the degree of the divisor (appropriately defined) plus the Euler characteristic of the structural sheaf .
The smoothness assumption in the theorem can be relaxed, as well: for a (projective) curve over an algebraically closed field, all of whose local rings are Gorenstein rings, the same statement as above holds, provided that the geometric genus as defined above is replaced by the arithmetic genus ga, defined as
g_a := dim_k H¹(C, O_C).
(For smooth curves, the geometric genus agrees with the arithmetic one.) The theorem has also been extended to general singular curves (and higher-dimensional varieties).
Applications
Hilbert polynomial
One of the important consequences of Riemann–Roch is that it gives a formula for computing the Hilbert polynomial of line bundles on a curve. If a line bundle L is ample, then the Hilbert polynomial will give the first degree L^{⊗n} giving an embedding into projective space. For example, the canonical sheaf ω_C has degree 2g − 2, which gives an ample line bundle for genus g ≥ 2. If we set L = ω_C, then the Riemann–Roch formula reads
χ(ω_C^{⊗n}) = deg(ω_C^{⊗n}) − g + 1 = n(2g − 2) − g + 1 = (2n − 1)(g − 1),
giving the degree 1 Hilbert polynomial of ω_C:
H_{ω_C}(t) = (2t − 1)(g − 1).
Because the tri-canonical sheaf ω_C^{⊗3} is used to embed the curve, the Hilbert polynomial
H_{ω_C^{⊗3}}(t)
is generally considered while constructing the Hilbert scheme of curves (and the moduli space of algebraic curves). This polynomial is
H_{ω_C^{⊗3}}(t) = (6t − 1)(g − 1) = 6(g − 1)t + (1 − g)
and is called the Hilbert polynomial of a genus g curve.
Pluricanonical embedding
Analyzing this equation further, the Euler characteristic reads as
χ(ω_C^{⊗n}) = h⁰(C, ω_C^{⊗n}) − h⁰(C, ω_C ⊗ (ω_C^{⊗n})^∨).
Since
deg(ω_C ⊗ (ω_C^{⊗n})^∨) = (2g − 2)(1 − n),
for n ≥ 2 its degree is negative for all g ≥ 2, implying it has no global sections, so there is an embedding into some projective space from the global sections of ω_C^{⊗n}. In particular, ω_C^{⊗3} gives an embedding into P^N ≅ P(H⁰(C, ω_C^{⊗3})), where N = 5g − 6, since h⁰(ω_C^{⊗3}) = 6g − 6 − g + 1 = 5g − 5. This is useful in the construction of the moduli space of algebraic curves because it can be used as the projective space to construct the Hilbert scheme with Hilbert polynomial H_{ω_C^{⊗3}}(t).
Genus of plane curves with singularities
An irreducible plane algebraic curve of degree d has (d − 1)(d − 2)/2 − g singularities, when properly counted. It follows that, if a curve has (d − 1)(d − 2)/2 different singularities, it is a rational curve and, thus, admits a rational parameterization.
Riemann–Hurwitz formula
The Riemann–Hurwitz formula concerning (ramified) maps between Riemann surfaces or algebraic curves is a consequence of the Riemann–Roch theorem.
Clifford's theorem on special divisors
Clifford's theorem on special divisors is also a consequence of the Riemann–Roch theorem. It states that for a special divisor D (i.e., such that ℓ(K − D) > 0) satisfying ℓ(D) > 0, the following inequality holds:
ℓ(D) − 1 ≤ deg(D)/2.
Proof
Proof for algebraic curves
The statement for algebraic curves can be proved using Serre duality. The integer ℓ(D) is the dimension of the space of global sections of the line bundle L(D) associated to D (cf. Cartier divisor). In terms of sheaf cohomology, we therefore have ℓ(D) = dim H⁰(X, L(D)), and likewise ℓ(K − D) = dim H⁰(X, L(K − D)). But Serre duality for non-singular projective varieties in the particular case of a curve states that H⁰(X, L(K − D)) is isomorphic to the dual H¹(X, L(D))^∨. The left hand side thus equals the Euler characteristic of the divisor D. When D = 0, we find the Euler characteristic for the structure sheaf is 1 − g by definition. To prove the theorem for general divisor, one can then proceed by adding points one by one to the divisor and ensure that the Euler characteristic transforms accordingly to the right hand side.
Proof for compact Riemann surfaces
The theorem for compact Riemann surfaces can be deduced from the algebraic version using Chow's Theorem and the GAGA principle: in fact, every compact Riemann surface is defined by algebraic equations in some complex projective space. (Chow's Theorem says that any closed analytic subvariety of projective space is defined by algebraic equations, and the GAGA principle says that sheaf cohomology of an algebraic variety is the same as the sheaf cohomology of the analytic variety defined by the same equations).
One may avoid the use of Chow's theorem by arguing identically to the proof in the case of algebraic curves, but replacing L(D) with the sheaf O_D of meromorphic functions h such that all coefficients of the divisor (h) + D are nonnegative. Here the fact that the Euler characteristic transforms as desired when one adds a point to the divisor can be read off from the long exact sequence induced by the short exact sequence
0 → O_D → O_{D+P} → C_P → 0,
where C_P is the skyscraper sheaf at P, and the map O_{D+P} → C_P returns the (−k − 1)-th Laurent coefficient, where k = D(P).
Arithmetic Riemann–Roch theorem
A version of the arithmetic Riemann–Roch theorem states that if k is a global field, and f is a suitably admissible function of the adeles of k, then for every idele a, one has a Poisson summation formula:
(1/|a|) Σ_{x ∈ k} f̂(x/a) = Σ_{x ∈ k} f(ax).
In the special case when k is the function field of an algebraic curve over a finite field and f is any character that is trivial on k, this recovers the geometric Riemann–Roch theorem.
Other versions of the arithmetic Riemann–Roch theorem make use of Arakelov theory to resemble the traditional Riemann–Roch theorem more exactly.
Generalizations of the Riemann–Roch theorem
The Riemann–Roch theorem for curves was proved for Riemann surfaces by Riemann and Roch in the 1850s and for algebraic curves by Friedrich Karl Schmidt in 1931 as he was working on perfect fields of finite characteristic. As stated by Peter Roquette,
The first main achievement of F. K. Schmidt is the discovery that the classical theorem of Riemann–Roch on compact Riemann surfaces can be transferred to function fields with finite base field. Actually, his proof of the Riemann–Roch theorem works for arbitrary perfect base fields, not necessarily finite.
It is foundational in the sense that the subsequent theory for curves tries to refine the information it yields (for example in the Brill–Noether theory).
There are versions in higher dimensions (for the appropriate notion of divisor, or line bundle). Their general formulation depends on splitting the theorem into two parts. One, which would now be called Serre duality, interprets the term as a dimension of a first sheaf cohomology group; with the dimension of a zeroth cohomology group, or space of sections, the left-hand side of the theorem becomes an Euler characteristic, and the right-hand side a computation of it as a degree corrected according to the topology of the Riemann surface.
In algebraic geometry of dimension two such a formula was found by the geometers of the Italian school; a Riemann–Roch theorem for surfaces was proved (there are several versions, with the first possibly being due to Max Noether).
An n-dimensional generalisation, the Hirzebruch–Riemann–Roch theorem, was found and proved by Friedrich Hirzebruch, as an application of characteristic classes in algebraic topology; he was much influenced by the work of Kunihiko Kodaira. At about the same time Jean-Pierre Serre was giving the general form of Serre duality, as we now know it.
Alexander Grothendieck proved a far-reaching generalization in 1957, now known as the Grothendieck–Riemann–Roch theorem. His work reinterprets Riemann–Roch not as a theorem about a variety, but about a morphism between two varieties. The details of the proofs were published by Armand Borel and Jean-Pierre Serre in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof.
Finally a general version was found in algebraic topology, too. These developments were essentially all carried out between 1950 and 1960. After that the Atiyah–Singer index theorem opened another route to generalization. Consequently, the Euler characteristic of a coherent sheaf is reasonably computable. For just one summand within the alternating sum, further arguments such as vanishing theorems must be used.
See also
Arakelov theory
Grothendieck–Riemann–Roch theorem
Hirzebruch–Riemann–Roch theorem
Kawasaki's Riemann–Roch formula
Hilbert polynomial
Moduli of algebraic curves
Notes
References
Grothendieck, Alexander, et al. (1966/67), Théorie des Intersections et Théorème de Riemann–Roch (SGA 6), LNM 225, Springer-Verlag, 1971.
See pages 208–219 for the proof in the complex situation. Note that Jost uses slightly different notation.
, contains the statement for curves over an algebraically closed field. See section IV.1.
.
Vector bundles on Compact Riemann Surfaces, M. S. Narasimhan, pp. 5–6.
Misha Kapovich, The Riemann–Roch Theorem (lecture note) an elementary introduction
J. Gray, The Riemann–Roch theorem and Geometry, 1854–1914.
Is there a Riemann–Roch for smooth projective curves over an arbitrary field? on MathOverflow
Theorems in algebraic geometry
Geometry of divisors
Topological methods of algebraic geometry
Theorems in complex analysis
Bernhard Riemann | Riemann–Roch theorem | [
"Mathematics"
] | 4,106 | [
"Theorems in algebraic geometry",
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"Theorems in geometry"
] |
247,370 | https://en.wikipedia.org/wiki/Communicating%20sequential%20processes | In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems. It is a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels. CSP was highly influential in the design of the occam programming language and also influenced the design of programming languages such as Limbo, RaftLib, Erlang, Go, Crystal, and Clojure's core.async.
CSP was first described by Tony Hoare in a 1978 article, and has since evolved substantially. CSP has been practically applied in industry as a tool for specifying and verifying the concurrent aspects of a variety of different systems, such as the T9000 Transputer, as well as a secure e-commerce system. The theory of CSP itself is also still the subject of active research, including work to increase its range of practical applicability (e.g., increasing the scale of the systems that can be tractably analyzed).
History
Original version
The version of CSP presented in Hoare's original 1978 article was essentially a concurrent programming language rather than a process calculus. It had a substantially different syntax than later versions of CSP, did not possess mathematically defined semantics, and was unable to represent unbounded nondeterminism. Programs in the original CSP were written as a parallel composition of a fixed number of sequential processes communicating with each other strictly through synchronous message-passing. In contrast to later versions of CSP, each process was assigned an explicit name, and the source or destination of a message was defined by specifying the name of the intended sending or receiving process. For example, the process
COPY = *[c:character; west?c → east!c]
repeatedly receives a character from the process named west and sends that character to process named east. The parallel composition
[west::DISASSEMBLE || X::COPY || east::ASSEMBLE]
assigns the names west to the DISASSEMBLE process, X to the COPY process, and east to the ASSEMBLE process, and executes these three processes concurrently.
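For comparison with the modern languages that CSP influenced, the pipeline above can be loosely emulated in Python with threads and queues. This is only an approximation of the original semantics: queue.Queue is buffered, whereas CSP communication is a synchronous rendezvous, and the None end-of-stream marker is our own convention, not part of CSP.

import queue
import threading

west_to_copy = queue.Queue(maxsize=1)
copy_to_east = queue.Queue(maxsize=1)

def disassemble():            # stands in for the process named west
    for c in "abc":
        west_to_copy.put(c)
    west_to_copy.put(None)    # end-of-stream marker (our convention)

def copy_process():           # COPY = *[c:character; west?c -> east!c]
    while (c := west_to_copy.get()) is not None:
        copy_to_east.put(c)
    copy_to_east.put(None)

def assemble():               # stands in for the process named east
    while (c := copy_to_east.get()) is not None:
        print(c, end="")

threads = [threading.Thread(target=f) for f in (disassemble, copy_process, assemble)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # prints "abc"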
Development into process algebra
Following the publication of the original version of CSP, Hoare, Stephen Brookes, and A. W. Roscoe developed and refined the theory of CSP into its modern, process algebraic form. The approach taken in developing CSP into a process algebra was influenced by Robin Milner's work on the Calculus of Communicating Systems (CCS) and conversely. The theoretical version of CSP was initially presented in a 1984 article by Brookes, Hoare, and Roscoe, and later in Hoare's book Communicating Sequential Processes, which was published in 1985. In September 2006, that book was still the third-most cited computer science reference of all time according to Citeseer (albeit an unreliable source due to the nature of its sampling). The theory of CSP has undergone a few minor changes since the publication of Hoare's book. Most of these changes were motivated by the advent of automated tools for CSP process analysis and verification. Roscoe's The Theory and Practice of Concurrency describes this newer version of CSP.
Applications
An early and important application of CSP was its use for specification and verification of elements of the INMOS T9000 Transputer, a complex superscalar pipelined processor designed to support large-scale multiprocessing. CSP was employed in verifying the correctness of both the processor pipeline and the Virtual Channel Processor, which managed off-chip communications for the processor.
Industrial application of CSP to software design has usually focused on dependable and safety-critical systems. For example, the Bremen Institute for Safe Systems and Daimler-Benz Aerospace modeled a fault-management system and avionics interface (consisting of about 23,000 lines of code) intended for use on the International Space Station in CSP, and analyzed the model to confirm that their design was free of deadlock and livelock. The modeling and analysis process was able to uncover a number of errors that would have been difficult to detect using testing alone. Similarly, Praxis High Integrity Systems applied CSP modeling and analysis during the development of software (approximately 100,000 lines of code) for a secure smart-card certification authority to verify that their design was secure and free of deadlock. Praxis claims that the system has a much lower defect rate than comparable systems.
Since CSP is well-suited to modeling and analyzing systems that incorporate complex message exchanges, it has also been applied to the verification of communications and security protocols. A prominent example of this sort of application is Lowe's use of CSP and the FDR refinement-checker to discover a previously unknown attack on the Needham–Schroeder public-key authentication protocol, and then to develop a corrected protocol able to defeat the attack.
Informal description
As its name suggests, CSP allows the description of systems in terms of component processes that operate independently, and interact with each other solely through message-passing communication. However, the "Sequential" part of the CSP name is now something of a misnomer, since modern CSP allows component processes to be defined both as sequential processes, and as the parallel composition of more primitive processes. The relationships between different processes, and the way each process communicates with its environment, are described using various process algebraic operators. Using this algebraic approach, quite complex process descriptions can be easily constructed from a few primitive elements.
Primitives
CSP provides two classes of primitives in its process algebra: events and primitive processes.
Events
Events represent communications or interactions. They are assumed to be instantaneous, and their communication is all that an external ‘environment’ can know about processes. An event is communicated only if the environment allows it. If a process does offer an event and the environment allows it, then that event must be communicated. Events may be atomic names (e.g. on, off), compound names (e.g. valve.open, valve.close), or input/output events (e.g. mouse?xy, screen!bitmap). The set of all events is denoted Σ.
Primitive processes
Primitive processes represent fundamental behaviors: examples include STOP (the process that immediately deadlocks) and SKIP (the process that immediately terminates successfully).
Algebraic operators
CSP has a wide range of algebraic operators. The principal ones are informally given as follows.
Prefix
The prefix operator combines an event and a process to produce a new process. For example, a → P is the process that is willing to communicate event a with its environment and, after a, behaves like the process P.
Recursion
Processes can be defined using recursion. Where F(X) is any CSP term involving the variable X, the process μ X • F(X) defines a recursive process given by the equation
μ X • F(X) = F(μ X • F(X)).
Recursions can also be defined mutually, such as
P = a → Q
Q = b → P
which defines a pair of mutually recursive processes that alternate between communicating a and b.
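In channel-based languages inspired by CSP, such recursion maps naturally onto a looping goroutine. A minimal Go sketch (the events tick and tock, and their rendering as channel messages, are illustrative assumptions rather than part of the source):

package main

import "fmt"

// P = tick -> Q and Q = tock -> P, rendered as one looping
// goroutine that alternates the two events on a channel.
func pq(events chan<- string) {
	for {
		events <- "tick" // P communicates tick, then behaves as Q
		events <- "tock" // Q communicates tock, then behaves as P
	}
}

func main() {
	events := make(chan string)
	go pq(events)
	for i := 0; i < 4; i++ {
		fmt.Println(<-events) // prints tick, tock, tick, tock
	}
}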
Deterministic choice
The deterministic (or external) choice operator allows the future evolution of a process to be defined as a choice between two component processes and allows the environment to resolve the choice by communicating an initial event for one of the processes. For example, (a → P) □ (b → Q) is the process that is willing to communicate the initial events a and b and subsequently behaves as either P or Q, depending on which initial event the environment chooses to communicate.
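Go's select statement gives a rough operational reading of external choice: the process waits on several communications at once, and whichever event the environment supplies resolves the choice. A hedged sketch (the channel names a and b are invented for illustration):

package main

import "fmt"

func main() {
	a := make(chan struct{})
	b := make(chan struct{})

	// The environment offers the event b.
	go func() { b <- struct{}{} }()

	// (a -> P) [] (b -> Q): block until the environment communicates
	// one of the initial events, then commit to that branch.
	select {
	case <-a:
		fmt.Println("a chosen; now behaving as P")
	case <-b:
		fmt.Println("b chosen; now behaving as Q")
	}
}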
Nondeterministic choice
The nondeterministic (or internal) choice operator allows the future evolution of a process to be defined as a choice between two component processes, but does not allow the environment any control over which one of the component processes will be selected. For example, (a → P) ⊓ (b → Q) can behave like either (a → P) or (b → Q). It can refuse to accept a or b, and is only obliged to communicate if the environment offers both a and b.
Nondeterminism can be inadvertently introduced into a nominally deterministic choice if the initial events of both sides of the choice are identical. So, for example,
(a → a → STOP) □ (a → b → STOP)
and
(a → a → STOP) ⊓ (a → b → STOP)
are equivalent.
Interleaving
The interleaving operator represents completely independent concurrent activity. The process P ||| Q behaves as both P and Q simultaneously. The events from both processes are arbitrarily interleaved in time. Interleaving can introduce nondeterminism even if P and Q are both deterministic: if P and Q can both communicate the same event, then P ||| Q nondeterministically chooses which of the two processes communicated that event.
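In Go, interleaving needs no synchronization machinery at all: two goroutines with no shared channels have their events merged in whatever order the scheduler produces. A small illustrative sketch (event names invented):

package main

import (
	"fmt"
	"sync"
)

// P ||| Q: run P and Q concurrently with no shared events; their
// outputs interleave in an arbitrary, scheduler-chosen order.
func main() {
	var wg sync.WaitGroup
	run := func(name string, events []string) {
		defer wg.Done()
		for _, e := range events {
			fmt.Println(name, "performs", e)
		}
	}
	wg.Add(2)
	go run("P", []string{"a", "b"})
	go run("Q", []string{"c", "d"})
	wg.Wait()
}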
Interface parallel
The interface parallel (or generalized parallel) operator represents concurrent activity that requires synchronization between the component processes: for P [|X|] Q, any event in the interface set X can only occur when both P and Q are able to engage in that event.
For example, the process (a → P) [|{a}|] (a → Q) requires that P and Q must both be able to perform event a before that event can occur. So, this process is equivalent to a → (P [|{a}|] Q), while (a → P) [|{a, b}|] (b → Q) is equivalent to STOP (i.e. the process deadlocks).
Hiding
The hiding operator provides a way to abstract processes by making some events unobservable by the environment. P \ X is the process P with the events in the set X hidden.
A trivial example of hiding is (a → P) \ {a} which, assuming that the event a doesn't appear in the process P, simply reduces to P. Hidden events are internalized as τ-actions, which are invisible to and uncontrollable by the environment. The existence of hiding introduces an additional behaviour called divergence, where an infinite sequence of τ-actions is performed. This is captured by the process div, whose behaviour is solely to perform τ-actions forever. For example, (μ X • a → X) \ {a} is equivalent to div.
Examples
One of the archetypal CSP examples is an abstract representation of a chocolate vending machine and its interactions with a person wishing to buy some chocolate. This vending machine might be able to carry out two different events, “coin” and “choc”, which represent the insertion of payment and the delivery of a chocolate respectively. A machine which demands payment (only in cash) before offering a chocolate can be written as:
VMC = coin → choc → STOP
A person who might choose to use a coin or card to make payments could be modelled as:
P = (coin → STOP) □ (card → STOP)
These two processes can be put in parallel, so that they can interact with each other. The behaviour of the composite process depends on the events that the two component processes must synchronise on. Thus,
P [|{coin, card}|] VMC = coin → choc → STOP
whereas if synchronization were only required on “coin”, we would obtain
P [|{coin}|] VMC = (coin → choc → STOP) □ (card → STOP)
If we abstract this latter composite process by hiding the “coin” and “card” events, i.e.
((coin → choc → STOP) □ (card → STOP)) \ {coin, card}
we get the nondeterministic process
(choc → STOP) ⊓ STOP
This is a process which either offers a “choc” event and then stops, or just stops. In other words, if we treat the abstraction as an external view of the system (e.g., someone who does not see the decision reached by the person), nondeterminism has been introduced.
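The same scenario can be sketched loosely in Go, with one unbuffered channel per event so that each event is a handshake between machine and customer; this is an informal translation for intuition, not the formal CSP model:

package main

import "fmt"

func main() {
	coin := make(chan struct{})
	choc := make(chan struct{})

	// VMC = coin -> choc -> STOP: accept a coin, deliver a chocolate, stop.
	go func() {
		<-coin
		choc <- struct{}{}
	}()

	// The customer inserts a coin, then collects the chocolate.
	coin <- struct{}{}
	<-choc
	fmt.Println("chocolate delivered")
}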
Formal definition
Syntax
The syntax of CSP defines the “legal” ways in which processes and events may be combined. Let e be an event and X be a set of events. Then the basic syntax of CSP can be defined as:

Proc ::= STOP
      |  SKIP
      |  e → Proc                   (prefixing)
      |  Proc □ Proc                (external choice)
      |  Proc ⊓ Proc                (nondeterministic choice)
      |  Proc ||| Proc              (interleaving)
      |  Proc [|X|] Proc            (interface parallel)
      |  Proc \ X                   (hiding)
      |  Proc ; Proc                (sequential composition)
      |  if b then Proc else Proc   (boolean conditional)
      |  Proc ▷ Proc                (timeout)
      |  Proc △ Proc                (interrupt)
Note that, in the interests of brevity, the syntax presented above omits the div process, which represents divergence, as well as various operators such as alphabetized parallel, piping, and indexed choices.
Formal semantics
CSP has been imbued with several different formal semantics, which define the meaning of syntactically correct CSP expressions. The theory of CSP includes mutually consistent denotational semantics, algebraic semantics, and operational semantics.
Denotational semantics
The three major denotational models of CSP are the traces model, the stable failures model, and the failures/divergences model. Semantic mappings from process expressions to each of these three models provide the denotational semantics for CSP.
The traces model defines the meaning of a process expression as the set of sequences of events (traces) that the process can be observed to perform. For example:
traces(STOP) = {⟨⟩}, since STOP performs no events
traces(a → b → STOP) = {⟨⟩, ⟨a⟩, ⟨a, b⟩}, since the process can be observed to have performed no events, the event a, or the sequence of events a followed by b
More formally, the meaning of a process P in the traces model is defined as traces(P) ⊆ Σ* such that:
⟨⟩ ∈ traces(P) (i.e. traces(P) contains the empty sequence)
s₁ ⌢ s₂ ∈ traces(P) ⟹ s₁ ∈ traces(P) (i.e. traces(P) is prefix-closed)
where Σ* is the set of all possible finite sequences of events.
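For finite processes these definitions can be computed directly. A toy Go sketch that denotes a process by its (prefix-closed) set of traces — a pedagogical model with invented helper names, not a real tool:

package main

import "fmt"

// A trace is a finite sequence of event names; a process is denoted
// by the set of traces it can be observed to perform.
type Trace []string
type Process []Trace

// STOP performs no events: traces(STOP) = {<>}.
var STOP = Process{{}}

// Prefix builds a -> P: the empty trace, plus <a> followed by every
// trace of P, which keeps the resulting set prefix-closed.
func Prefix(a string, p Process) Process {
	out := Process{{}}
	for _, s := range p {
		out = append(out, append(Trace{a}, s...))
	}
	return out
}

func main() {
	// traces(a -> b -> STOP) = {<>, <a>, <a, b>}
	fmt.Println(Prefix("a", Prefix("b", STOP))) // [[] [a] [a b]]
}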
The stable failures model extends the traces model with refusal sets, which are sets of events that a process can refuse to perform. A failure is a pair (s, X), consisting of a trace s and a refusal set X which identifies the events that a process may refuse once it has executed the trace s. The observed behavior of a process in the stable failures model is described by the pair (traces(P), failures(P)). For example,
failures((a → STOP) □ (b → STOP)) = { (⟨⟩, ∅), (⟨a⟩, {a, b}), (⟨b⟩, {a, b}) }
failures((a → STOP) ⊓ (b → STOP)) = { (⟨⟩, {a}), (⟨⟩, {b}), (⟨a⟩, {a, b}), (⟨b⟩, {a, b}) }
The external choice cannot initially refuse either a or b, whereas the internal choice may refuse either one of them (though not both).
The failures/divergences model further extends the failures model to handle divergence. The semantics of a process in the failures/divergences model is a pair (failures⊥(P), divergences(P)), where divergences(P) is defined as the set of all traces that can lead to divergent behavior, and failures⊥(P) = failures(P) ∪ {(s, X) : s ∈ divergences(P)}.
Tools
Over the years, a number of tools for analyzing and understanding systems described using CSP have been produced. Early tool implementations used a variety of machine-readable syntaxes for CSP, making input files written for different tools incompatible. However, most CSP tools have now standardized on the machine-readable dialect of CSP devised by Bryan Scattergood, sometimes referred to as CSPM. The CSPM dialect of CSP possesses a formally defined operational semantics, which includes an embedded functional programming language.
FDR
The most well-known CSP tool is probably Failures-Divergences Refinement, which is a commercial product originally developed by Formal Systems (Europe) Ltd. FDR is often described as a model checker, but is technically a refinement checker, in that it converts two CSP process expressions into Labelled Transition Systems (LTSs), and then determines whether one of the processes is a refinement of the other within some specified semantic model (traces, failures, or failures/divergence). FDR applies various state-space compression algorithms to the process LTSs in order to reduce the size of the state-space that must be explored during a refinement check. FDR was succeeded by FDR2, FDR3 and FDR4.
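Trace refinement, the simplest check such tools perform, just asks whether traces(Impl) ⊆ traces(Spec). A toy Go check in the traces-as-sets style of the earlier sketch — nothing like FDR's actual compressed-LTS algorithms:

package main

import "fmt"

type Trace []string
type Process []Trace

var STOP = Process{{}}

// Prefix builds a -> P as a prefix-closed set of traces.
func Prefix(a string, p Process) Process {
	out := Process{{}}
	for _, s := range p {
		out = append(out, append(Trace{a}, s...))
	}
	return out
}

// Refines reports whether spec [T= impl holds in the traces model,
// i.e. whether every trace of impl is also a trace of spec.
func Refines(spec, impl Process) bool {
	seen := map[string]bool{}
	for _, s := range spec {
		seen[fmt.Sprint(s)] = true
	}
	for _, s := range impl {
		if !seen[fmt.Sprint(s)] {
			return false
		}
	}
	return true
}

func main() {
	spec := Prefix("a", Prefix("b", STOP)) // a -> b -> STOP
	impl := Prefix("a", STOP)              // a -> STOP
	fmt.Println(Refines(spec, impl))       // true: traces(impl) ⊆ traces(spec)
}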
ARC
The Adelaide Refinement Checker (ARC) is a CSP refinement checker developed by the Formal Modelling and Verification Group at The University of Adelaide. ARC differs from FDR2 in that it internally represents CSP processes as Ordered Binary Decision Diagrams (OBDDs), which alleviates the state explosion problem of explicit LTS representations without requiring the use of state-space compression algorithms such as those used in FDR2.
ProB
The ProB project, which is hosted by the Institut für Informatik, Heinrich-Heine-Universität Düsseldorf, was originally created to support analysis of specifications constructed in the B method. However, it also includes support for analysis of CSP processes both through refinement checking, and LTL model-checking. ProB can also be used to verify properties of combined CSP and B specifications. A ProBE CSP Animator is integrated in FDR3.
PAT
The Process Analysis Toolkit (PAT) is a CSP analysis tool developed in the School of Computing at the National University of Singapore. PAT is able to perform refinement checking, LTL model-checking, and simulation of CSP and Timed CSP processes. The PAT process language extends CSP with support for mutable shared variables, asynchronous message passing, and a variety of fairness and quantitative time-related process constructs such as deadline and waituntil. The underlying design principle of the PAT process language is to combine a high-level specification language with procedural programs (e.g. an event in PAT may be a sequential program or even an external C# library call) for greater expressiveness. Mutable shared variables and asynchronous channels provide convenient syntactic sugar for well-known process modelling patterns used in standard CSP. The PAT syntax is similar, but not identical, to CSPM. The principal differences between the PAT syntax and standard CSPM are the use of semicolons to terminate process expressions, the inclusion of syntactic sugar for variables and assignments, and the use of slightly different syntax for internal choice and parallel composition.
Others
VisualNets produces animated visualisations of CSP systems from specifications, and supports timed CSP.
CSPsim is a lazy simulator. It does not model check CSP, but is useful for exploring very large (potentially infinite) systems.
SyncStitch is a CSP refinement checker with an interactive modeling and analysis environment. It has a graphical state-transition diagram editor, so the user can model the behavior of processes not only as CSP expressions but also as state-transition diagrams. The results of checks are also reported graphically, as computation trees, and can be analyzed interactively with peripheral inspection tools. In addition to refinement checks, it can perform deadlock checks and livelock checks.
Related formalisms
Several other specification languages and formalisms have been derived from, or inspired by, the classic untimed CSP, including:
Timed CSP, which incorporates timing information for reasoning about real-time systems
Receptive Process Theory, a specialization of CSP that assumes an asynchronous (i.e. nonblocking) send operation
CSPP
HCSP
TCOZ, an integration of Timed CSP and Object Z
Circus, an integration of CSP and Z based on the Unifying Theories of Programming
CML (COMPASS Modelling Language), a combination of Circus and VDM developed for the modelling of Systems of Systems (SoS)
CspCASL, an extension of CASL that integrates CSP
LOTOS, an international standard that incorporates features of CSP and CCS.
PALPS, a probabilistic extension with locations for ecological models developed by Anna Philippou and Mauricio Toro Bermúdez
Comparison with the actor model
Insofar as it is concerned with concurrent processes that exchange messages, the actor model is broadly similar to CSP. However, the two models make some fundamentally different choices with regard to the primitives they provide:
CSP processes are anonymous, while actors have identities.
CSP uses explicit channels for message passing, whereas actor systems transmit messages to named destination actors. These approaches may be considered duals of each other, in the sense that processes receiving through a single channel effectively have an identity corresponding to that channel, while the name-based coupling between actors may be broken by constructing actors that behave as channels.
CSP message-passing fundamentally involves a rendezvous between the processes involved in sending and receiving the message, i.e. the sender cannot transmit a message until the receiver is ready to accept it. In contrast, message-passing in actor systems is fundamentally asynchronous, i.e. message transmission and reception do not have to happen at the same time, and senders may transmit messages before receivers are ready to accept them. These approaches may also be considered duals of each other, in the sense that rendezvous-based systems can be used to construct buffered communications that behave as asynchronous messaging systems, while asynchronous systems can be used to construct rendezvous-style communications by using a message/acknowledgement protocol to synchronize senders and receivers.
Note that the aforementioned properties do not necessarily refer to the original CSP paper by Hoare, but rather the modern incarnation of the idea as seen in implementations such as Go and Clojure's core.async. In the original paper, channels were not a central part of the specification, and the sender and receiver processes actually identify each other by name.
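The rendezvous/asynchrony duality is easy to observe in Go, where an unbuffered channel forces sender and receiver to meet while a buffered channel lets the sender run ahead. A small sketch:

package main

import "fmt"

func main() {
	// CSP-style rendezvous: the send blocks until a receiver is ready.
	meet := make(chan string)
	go func() { meet <- "hello" }() // blocks until main receives
	fmt.Println(<-meet)

	// Actor-style asynchrony, approximated with a buffered channel:
	// the sender deposits the message and continues immediately.
	mailbox := make(chan string, 1)
	mailbox <- "hello again" // does not block
	fmt.Println(<-mailbox)
}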
Award
In 1990, “A Queen’s Award for Technological Achievement has been conferred ... on [Oxford University] Computing Laboratory. The award recognises a successful collaboration between the laboratory and Inmos Ltd. … Inmos’ flagship product is the ‘transputer’, a microprocessor with many of the parts that would normally be needed in addition built into the same single component.”
According to Tony Hoare,
“The INMOS Transputer was an embodiment of the ideas … of building microprocessors that could communicate with each other along wires that would stretch between their terminals. The founder had the vision that the CSP ideas were ripe for industrial exploitation, and he made that the basis of the language for programming Transputers, which was called Occam. … The company estimated it enabled them to deliver the hardware one year earlier than would otherwise have happened. They applied for and won a Queen’s award for technological achievement, in conjunction with Oxford University Computing Laboratory.”
See also
Trace theory, the general theory of traces.
Trace monoid and history monoid
Ease programming language
XC programming language
VerilogCSP is a set of macros added to Verilog HDL to support CSP-style channel communications.
Joyce is a programming language based on the principles of CSP, developed by Brinch Hansen around 1989.
SuperPascal is a programming language also developed by Brinch Hansen, influenced by CSP and his earlier work with Joyce.
Ada implements features of CSP such as the rendezvous.
DirectShow is the video framework inside DirectX; it uses CSP concepts to implement its audio and video filters.
OpenComRTOS is a formally developed network-centric distributed RTOS based on a pragmatic superset of CSP.
Input/output automaton
Parallel programming model
TLA+ is another formal language for modelling and verifying concurrent systems.
References
Further reading
Hoare's Communicating Sequential Processes has been updated by Jim Davies at the Oxford University Computing Laboratory, and the new edition is available for download as a PDF file at the Using CSP website.
Some links relating to Roscoe's The Theory and Practice of Concurrency are available online; the full text is available for download as a PS or PDF file from Bill Roscoe's list of academic publications.
External links
A PDF version of Hoare's CSP book – copyright restrictions apply; see the page text before downloading.
The Annotation of CSP (Chinese version), a non-profit translation and annotation work based on the Prentice-Hall book (1985), Chaochen Zhou's Chinese version (1988), and Jim Davies's online version (2015).
WoTUG, a user group for CSP and occam style systems, contains some information about CSP and useful links.
"CSP Citations" from CiteSeer
Computer-related introductions in 1978
1978 in computing
Process calculi
Concurrent computing | Communicating sequential processes | [
"Technology"
] | 4,604 | [
"Computing platforms",
"Concurrent computing",
"IT infrastructure"
] |
248,042 | https://en.wikipedia.org/wiki/Weissenberg%20number | The Weissenberg number (Wi) is a dimensionless number used in the study of viscoelastic flows. It is named after Karl Weissenberg. The dimensionless number compares the elastic forces to the viscous forces. It can be variously defined, but it is usually given as the ratio of the stress relaxation time of the fluid to a specific process time. For instance, in simple steady shear, the Weissenberg number, often abbreviated as Wi or We, is defined as the shear rate times the relaxation time: Wi = γ̇λ. Using the Maxwell model and the Oldroyd-B model, the elastic forces can be written as the first normal stress difference (N1).
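For instance, if the shear rate is estimated from a characteristic velocity u and length scale L (an illustrative scaling, one of the choices the next sentence cautions about), the simple-shear definition expands to Wi = λγ̇ ≈ λu/L.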
Since this number is obtained from scaling the evolution of the stress, it contains choices for the shear or elongation rate and the length scale. Therefore, the exact definition of all dimensionless numbers should be given, as well as the number itself.
While Wi is similar to the Deborah number and is often confused with it in technical literature, they have different physical interpretations. The Weissenberg number indicates the degree of anisotropy or orientation generated by the deformation, and is appropriate to describe flows with a constant stretch history, such as simple shear. In contrast, the Deborah number should be used to describe flows with a non-constant stretch history, and physically represents the rate at which elastic energy is stored or released.
References
Dimensionless numbers of fluid mechanics
Fluid dynamics
Non-Newtonian fluids
Rheology | Weissenberg number | [
"Chemistry",
"Engineering"
] | 294 | [
"Piping",
"Chemical engineering",
"Rheology",
"Fluid dynamics"
] |