Dataset schema (value ranges observed across the dataset):
id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
494,669
https://en.wikipedia.org/wiki/Free%20algebra
In mathematics, especially in the area of abstract algebra known as ring theory, a free algebra is the noncommutative analogue of a polynomial ring, since its elements may be described as "polynomials" with non-commuting variables. Likewise, the polynomial ring may be regarded as a free commutative algebra.

Definition

For R a commutative ring, the free (associative, unital) algebra on n indeterminates {X1,...,Xn} is the free R-module with a basis consisting of all words over the alphabet {X1,...,Xn} (including the empty word, which is the unit of the free algebra). This R-module becomes an R-algebra by defining a multiplication as follows: the product of two basis elements is the concatenation of the corresponding words,

(Xi1 ⋯ Xil) · (Xj1 ⋯ Xjm) = Xi1 ⋯ Xil Xj1 ⋯ Xjm,

and the product of two arbitrary R-module elements is thus uniquely determined (because the multiplication in an R-algebra must be R-bilinear). This R-algebra is denoted R⟨X1,...,Xn⟩. This construction can easily be generalized to an arbitrary set X of indeterminates. In short, for an arbitrary set X, the free (associative, unital) R-algebra on X is

R⟨X⟩ := ⊕w∈X* Rw

with the R-bilinear multiplication that is concatenation on words, where X* denotes the free monoid on X (i.e. words on the letters Xi), ⊕ denotes the external direct sum, and Rw denotes the free R-module on 1 element, the word w. For example, in R⟨X1,X2,X3,X4⟩, for scalars α, β, γ, δ ∈ R, a concrete example of a product of two elements is

(αX1X2 + βX3) · (γX4 + δX1) = αγ X1X2X4 + αδ X1X2X1 + βγ X3X4 + βδ X3X1.

The non-commutative polynomial ring may be identified with the monoid ring over R of the free monoid of all finite words in the Xi.

Contrast with polynomials

Since the words over the alphabet {X1,...,Xn} form a basis of R⟨X1,...,Xn⟩, it is clear that any element of R⟨X1,...,Xn⟩ can be written uniquely in the form

∑w∈X* aw w,

where the coefficients aw are elements of R and all but finitely many of them are zero. This explains why the elements of R⟨X1,...,Xn⟩ are often denoted as "non-commutative polynomials" in the "variables" (or "indeterminates") X1,...,Xn; the elements aw are said to be "coefficients" of these polynomials, and the R-algebra R⟨X1,...,Xn⟩ is called the "non-commutative polynomial algebra over R in n indeterminates". Note that unlike in an actual polynomial ring, the variables do not commute: for example, X1X2 does not equal X2X1.

More generally, one can construct the free algebra R⟨E⟩ on any set E of generators. Since rings may be regarded as Z-algebras, a free ring on E can be defined as the free algebra Z⟨E⟩.

Over a field, the free algebra on n indeterminates can be constructed as the tensor algebra on an n-dimensional vector space. For a more general coefficient ring, the same construction works if we take the free module on n generators.

The construction of the free algebra on E is functorial in nature and satisfies an appropriate universal property. The free algebra functor is left adjoint to the forgetful functor from the category of R-algebras to the category of sets.

Free algebras over division rings are free ideal rings.

See also: Cofree coalgebra, Tensor algebra, Free object, Noncommutative ring, Rational series, Term algebra
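The non-commuting multiplication can be checked directly in a computer algebra system. The following is a minimal sketch using SymPy's noncommutative symbols (the variable names are illustrative); note that (X1 + X2)(X1 − X2) does not simplify to X1² − X2², because the cross terms do not cancel.

# A minimal sketch of non-commutative polynomial arithmetic in the free
# algebra R<X1, X2>, using SymPy's noncommutative symbols.
from sympy import symbols, expand

X1, X2 = symbols("X1 X2", commutative=False)

# Multiplication is concatenation of words, extended bilinearly, so the
# order of the factors in each word is preserved:
p = expand((X1 + X2) * (X1 - X2))
print(p)  # X1**2 - X1*X2 + X2*X1 - X2**2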
Free algebra
[ "Mathematics" ]
812
[ "Mathematical structures", "Algebras", "Ring theory", "Fields of abstract algebra", "Category theory", "Algebraic structures", "Free algebraic structures" ]
495,236
https://en.wikipedia.org/wiki/Rho%20factor
A ρ factor (Rho factor) is a bacterial protein involved in the termination of transcription. Rho factor binds to the transcription terminator pause site, an exposed region of single-stranded RNA (a stretch of 72 nucleotides) after the open reading frame, at C-rich/G-poor sequences that lack obvious secondary structure.

Rho factor is an essential transcription protein in bacteria. In Escherichia coli, it is a ~274.6 kDa hexamer of identical subunits. Each subunit has an RNA-binding domain and an ATP-hydrolysis domain. Rho is a member of the RecA/SF5 family of ATP-dependent hexameric helicases that function by wrapping nucleic acids around a single cleft extending around the entire hexamer. Rho functions as an ancillary factor for RNA polymerase. There are two types of transcriptional termination in bacteria: rho-dependent termination and intrinsic termination (also called Rho-independent termination). Rho-dependent terminators account for about half of the E. coli factor-dependent terminators. Other termination factors discovered in E. coli include Tau and NusA. Rho-dependent terminators were first discovered in bacteriophage genomes.

Function

A Rho factor acts on an RNA substrate. Rho's key function is its helicase activity, for which energy is provided by RNA-dependent ATP hydrolysis. The initial binding site for Rho is an extended (~70 nucleotides, sometimes 80–100 nucleotides) single-stranded region, rich in cytosine and poor in guanine, called the rho utilisation site (rut), in the RNA being synthesised, upstream of the actual terminator sequence. Several rho binding sequences have been discovered. No consensus is found among these, but the different sequences each seem specific, as small mutations in the sequence disrupt its function. Rho binds to RNA and then uses its ATPase activity to provide the energy to translocate along the RNA until it reaches the RNA–DNA helical region, where it unwinds the hybrid duplex structure. RNA polymerase pauses at the termination sequence because a specific Rho-sensitive pause site lies around 100 nt away from the Rho binding site. So, even though the RNA polymerase transcribes about 40 nt per second faster than Rho translocates, the pause allows Rho factor to catch up, and the speed difference poses no problem for the Rho termination mechanism. In short, Rho factor acts as an ATP-dependent unwinding enzyme, moving along the newly forming RNA molecule towards its 3′ end and unwinding it from the DNA template as it proceeds.

Mutations

A nonsense mutation in one gene of an operon prevents the translation of subsequent genes in the unit. This effect is called mutational polarity. A common cause is the absence of the mRNA corresponding to the subsequent (distal) parts of the unit. Suppose that there are Rho-dependent terminators within the transcription unit, that is, upstream of the terminator that is normally used. Normally these earlier terminators are not used, because the ribosome prevents Rho from reaching RNA polymerase. But a nonsense mutation releases the ribosome, so that Rho is free to attach to and/or move along the RNA, enabling it to act on RNA polymerase at the terminator. As a result, the enzyme is released, and the distal regions of the transcription unit are never transcribed.

Evolution

Rho factor has not been found in Archaea.
See also: Termination factor; Mutation Frequency Decline (Mfd) protein, which is also capable of dissociating RNA polymerase from the DNA template
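The C-rich/G-poor character of rut sites described above lends itself to a simple computational illustration. The following is a toy heuristic, not a validated bioinformatics method; the window size and composition thresholds are invented for illustration.

# Toy scan for candidate rho utilisation (rut) sites: look for ~70-nt
# windows of an RNA sequence that are C-rich and G-poor. Assumed
# thresholds, chosen only to illustrate the idea.
def candidate_rut_sites(rna, window=70, min_c=0.30, max_g=0.15):
    hits = []
    for i in range(len(rna) - window + 1):
        w = rna[i:i + window]
        c_frac = w.count("C") / window
        g_frac = w.count("G") / window
        if c_frac >= min_c and g_frac <= max_g:
            hits.append((i, round(c_frac, 2), round(g_frac, 2)))
    return hits

# Example: a C-rich, G-free stretch embedded in mixed sequence.
seq = "AUGC" * 10 + "CCUACCAUCCUACCAUCC" * 4 + "AUGC" * 10
print(candidate_rut_sites(seq)[:3])  # first few (position, C%, G%) hits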
Rho factor
[ "Chemistry", "Biology" ]
795
[ "Gene expression", "Model organisms", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Escherichia coli" ]
495,471
https://en.wikipedia.org/wiki/Uridine%20triphosphate
Uridine-5′-triphosphate (UTP) is a pyrimidine nucleoside triphosphate, consisting of the organic base uracil linked to the 1′ carbon of the ribose sugar, and esterified with triphosphoric acid at the 5′ position. Its main role is as a substrate for the synthesis of RNA during transcription. UTP is the precursor for the production of CTP via CTP synthetase. UTP can be biosynthesized from UDP by nucleoside diphosphate kinase, using a phosphate group from ATP: UDP + ATP ⇌ UTP + ADP. UTP and ATP are energetically equal, so the equilibrium constant of this reaction is close to one. The homologue in DNA is thymidine triphosphate (TTP or dTTP). UTP also has a deoxyribose form (dUTP).

Role in metabolism

UTP also serves as a source of energy or an activator of substrates in metabolic reactions, like ATP, but more substrate-specific. When UTP activates a substrate (such as glucose-1-phosphate), UDP-glucose is formed and inorganic phosphate is released. UDP-glucose enters the synthesis of glycogen. UTP is used in the metabolism of galactose, where the activated form UDP-galactose is converted to UDP-glucose. UDP-glucuronate is used to conjugate bilirubin to a more water-soluble bilirubin diglucuronide. UTP is also used to activate amino sugars such as glucosamine-1-phosphate to UDP-glucosamine, and N-acetyl-glucosamine-1-phosphate to UDP-N-acetylglucosamine.

Role in receptor mediation

UTP also has roles in mediating responses by extracellular binding to the P2Y receptors of cells. UTP and its derivatives are still being investigated for their applications in human medicine. However, there is evidence from various model systems to suggest it has applications in pathogen defense and injury repair. In mice, UTP has been found to interact with P2Y4 receptors to mediate an enhancement in antibody production. In Schwannoma cells, UTP binds to P2Y receptors in the event of damage. This triggers a downstream signaling cascade that leads to eventual injury repair.

See also: CTP synthase
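Because the phosphoryl-transfer potentials of UTP and ATP are nearly equal, the equilibrium composition of the NDK reaction follows from simple mass action. A minimal sketch, assuming K = 1 exactly and unit activity coefficients; the starting concentrations are illustrative, not physiological.

# Equilibrium of UDP + ATP <=> UTP + ADP with an assumed K = 1.
# With K = 1 the quadratic terms cancel and the extent x has a closed form:
#   (UDP0 - x)(ATP0 - x) = (UTP0 + x)(ADP0 + x)
#   => x = (UDP0*ATP0 - UTP0*ADP0) / (UDP0 + ATP0 + UTP0 + ADP0)
def ndk_equilibrium(udp0, atp0, utp0=0.0, adp0=0.0):
    x = (udp0 * atp0 - utp0 * adp0) / (udp0 + atp0 + utp0 + adp0)
    return {"UDP": udp0 - x, "ATP": atp0 - x, "UTP": utp0 + x, "ADP": adp0 + x}

# Illustrative starting concentrations in mM:
print(ndk_equilibrium(udp0=1.0, atp0=5.0))
# x = 5/6, so UDP ~0.17, ATP ~4.17, UTP ~0.83, ADP ~0.83 mM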
Uridine triphosphate
[ "Chemistry", "Biology" ]
529
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Metabolism" ]
495,598
https://en.wikipedia.org/wiki/Blade%20element%20theory
Blade element theory (BET) is a mathematical process originally designed by William Froude (1878), David W. Taylor (1893) and Stefan Drzewiecki (1885) to determine the behavior of propellers. It involves breaking a blade down into several small parts, then determining the forces on each of these small blade elements. These forces are then integrated along the entire blade and over one rotor revolution in order to obtain the forces and moments produced by the entire propeller or rotor.

One of the key difficulties lies in modelling the induced velocity on the rotor disk. Because of this, blade element theory is often combined with momentum theory to provide the additional relationships necessary to describe the induced velocity on the rotor disk, producing blade element momentum theory. At the most basic level of approximation, a uniform induced velocity on the disk is assumed:

vi = √(T / (2ρA)),

where T is the thrust, ρ the air density, and A the disk area. Alternatively, the variation of the induced velocity along the radius can be modeled by breaking the blade down into small annuli and applying the conservation of mass, momentum and energy to every annulus. This approach is sometimes called the Froude–Finsterwalder equation.

If the blade element method is applied to helicopter rotors in forward flight it is necessary to consider the flapping motion of the blades as well as the longitudinal and lateral distribution of the induced velocity on the rotor disk. The simplest forward-flight inflow models are first harmonic models.

Simple blade element theory

While the momentum theory is useful for determining ideal efficiency, it gives a very incomplete account of the action of screw propellers, neglecting among other things the torque. In order to investigate propeller action in greater detail, the blades are considered as made up of a number of small elements, and the air forces on each element are calculated. Thus, while the momentum theory deals with the flow of the air, the blade-element theory deals primarily with the forces on the propeller blades.

The idea of analyzing the forces on elementary strips of propeller blades was first published by William Froude in 1878. It was also worked out independently by Drzewiecki and given in a book on mechanical flight published in Russia seven years later, in 1885. Again, in 1907, Lanchester published a somewhat more advanced form of the blade-element theory without knowledge of previous work on the subject. The simple blade-element theory is usually referred to, however, as the Drzewiecki theory, for it was Drzewiecki who put it into practical form and brought it into general use. Also, he was the first to sum up the forces on the blade elements to obtain the thrust and torque for a whole propeller and the first to introduce the idea of using airfoil data to find the forces on the blade elements.

In the Drzewiecki blade-element theory the propeller is considered a warped or twisted airfoil, each segment of which follows a helical path and is treated as a segment of an ordinary wing. It is usually assumed in the simple theory that airfoil coefficients obtained from wind tunnel tests of model wings (ordinarily tested with an aspect ratio of 6) apply directly to propeller blade elements of the same cross-sectional shape. The air flow around each element is considered two-dimensional and therefore unaffected by the adjacent parts of the blade.
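Before continuing with the simple theory, here is a quick numerical illustration of the uniform-inflow relation quoted above; the thrust and rotor radius are invented values, not data from the article.

# Momentum-theory uniform inflow for a hovering rotor: v_i = sqrt(T / (2*rho*A)).
import math

def induced_velocity(thrust_n, rho=1.225, radius_m=1.0):
    area = math.pi * radius_m ** 2          # rotor disk area, m^2
    return math.sqrt(thrust_n / (2.0 * rho * area))

print(induced_velocity(thrust_n=2000.0, radius_m=4.0))  # ~4.03 m/s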
The independence of the blade elements at any given radius with respect to the neighbouring elements has been established theoretically and has also been shown to be substantially true for the working sections of the blade by special experiments made for the purpose. It is also assumed that the air passes through the propeller with no radial flow (i.e., there is no contraction of the slipstream in passing through the propeller disc) and that there is no blade interference.

Aerodynamic forces on a blade element

Consider the element at radius r, shown in Fig. 1, which has the infinitesimal length dr and the width b. The motion of the element in an aircraft propeller in flight is along a helical path determined by the forward velocity V of the aircraft and the tangential velocity 2πrn of the element in the plane of the propeller disc, where n represents the revolutions per unit time. The velocity of the element with respect to the air, Vr, is then the resultant of the forward and tangential velocities, as shown in Fig. 2. Call the angle between the direction of motion of the element and the plane of rotation Φ, and the blade angle β. The angle of attack α of the element relative to the air is then

α = β − Φ.

Applying ordinary airfoil coefficients, the lift force on the element is

dL = CL (ρ/2) Vr² b dr.

Let γ be the angle between the lift component and the resultant force, or tan γ = dD/dL. Then the total resultant air force on the element is

dR = dL / cos γ.

The thrust of the element is the component of the resultant force in the direction of the propeller axis (Fig. 2), or

dT = dR cos(Φ + γ),

and since Vr = V / sin Φ,

dT = (ρ/2) V² CL b cos(Φ + γ) / (sin²Φ cos γ) dr.

For convenience let

Tc = CL b cos(Φ + γ) / (sin²Φ cos γ) and Qc = CL b r sin(Φ + γ) / (sin²Φ cos γ).

Then dT = (ρ/2) V² Tc dr, and the total thrust for the propeller (of B blades) is

T = B (ρ/2) V² ∫ Tc dr,

the integral being taken along the blade. Referring again to Fig. 2, the tangential or torque force is

dF = dR sin(Φ + γ),

and the torque on the element is dQ = r dF, which, with Vr = V / sin Φ, can be written

dQ = (ρ/2) V² Qc dr.

The expression for the torque of the whole propeller is therefore

Q = B (ρ/2) V² ∫ Qc dr.

The power absorbed by the propeller, or the torque power, is 2πnQ (or 2πnQ/550 in horsepower, with foot-pound-second units), and the efficiency is η = TV / (2πnQ).

Efficiency

Because of the variation of the blade width, angle, and airfoil section along the blade, it is not possible to obtain a simple expression for the thrust, torque, and efficiency of propellers in general. A single element at about two-thirds or three-fourths of the tip radius is, however, fairly representative of the whole propeller, and it is therefore interesting to examine the expression for the efficiency of a single element. The efficiency of an element is the ratio of the useful power to the power absorbed, or

η = V dT / (2πrn dF) = tan Φ / tan(Φ + γ).

Now tan Φ is the ratio of the forward to the tangential velocity, and tan γ is the ratio of drag to lift, the reciprocal of the lift–drag ratio (L/D) of the element. According to the simple blade-element theory, therefore, the efficiency of an element of a propeller depends only on the ratio of the forward to the tangential velocity and on the L/D of the airfoil section. The value of Φ which gives the maximum efficiency for an element, as found by differentiating the efficiency with respect to Φ and equating the result to zero, is

Φ = 45° − γ/2.

The variation of efficiency with Φ is shown in Fig. 3 for two extreme values of γ. The efficiency rises to a maximum at Φ = 45° − γ/2 and then falls to zero again at Φ = 90° − γ. With an L/D of 28.6 the maximum possible efficiency of an element according to the simple theory is 0.932, while with an L/D of 9.5 it is only 0.812. At the values of Φ at which the most important elements of the majority of propellers work (10° to 15°) the effect of L/D on efficiency is still greater. Within the range of 10° to 15°, the curves in Fig. 3 indicate that it is advantageous to have both the L/D of the airfoil sections and the angle Φ (or the advance per revolution, and consequently the pitch) as high as possible.
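As a quick check of the quoted maximum efficiencies (a verification added here, not part of the original text), the element-efficiency formula reproduces both figures:

\[
  \tan\gamma = \tfrac{1}{28.6} \;\Rightarrow\; \gamma \approx 2.0^\circ,
  \qquad
  \Phi_{\mathrm{opt}} = 45^\circ - \tfrac{\gamma}{2} = 44^\circ,
  \qquad
  \eta_{\max} = \frac{\tan 44^\circ}{\tan 46^\circ} \approx 0.932 .
\]

For L/D = 9.5 the same computation gives γ ≈ 6.0° and η_max = tan 42°/tan 48° ≈ 0.81, in agreement with the value quoted above.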
Limitations

According to momentum theory, a velocity is imparted to the air passing through the propeller, and half of this velocity is given the air by the time it reaches the propeller plane. This increase of velocity of the air as it passes into the propeller disc is called the inflow velocity. It is always found where there is pressure discontinuity in a fluid. In the case of a wing moving horizontally, the air is given a downward velocity, as shown in Fig. 4, and theoretically half of this velocity is imparted in front of and above the wing, and the other half below and behind. This induced downflow is present in the model wing tests from which the airfoil coefficients used in the blade-element theory are obtained; the inflow indicated by the momentum theory is therefore automatically taken into account in the simple blade-element theory. However, the induced downflow is widely different for different aspect ratios, being zero for infinite aspect ratio. Most model airfoil tests are made with rectangular wings having an arbitrarily chosen aspect ratio of 6, and there is no reason to suppose that the downflow in such a test corresponds to the inflow for each element of a propeller blade. In fact, the general conclusion drawn from an exhaustive series of tests, in which the pressure distribution was measured over 12 sections of a model propeller running in a wind tunnel, is that the lift coefficient of the propeller blade element differs considerably from that measured at the same angle of attack on an airfoil of aspect ratio 6. This is one of the greatest weaknesses of the simple blade-element theory.

Another weakness is that the interference between the propeller blades is not considered. The elements of the blades at any particular radius form a cascade similar to a multiplane with negative stagger, as shown in Fig. 5. Near the tips where the gap is large the interference is very small, but in toward the blade roots it is quite large. In actual propellers there is a tip loss which the blade-element theory does not take into consideration. The thrust and torque forces as computed by means of the theory are therefore greater for the elements near the tip than those found by experiment.

In order to eliminate scale effect, the wind tunnel tests on model wings should be run at the same value of Reynolds number (scale) as the corresponding elements in the propeller blades. Airfoil characteristics measured at such a low scale as, for example, an air velocity of 30 m.p.h. with a 3-in. chord airfoil, show peculiarities not found when the tests are run at a scale comparable with that of propeller elements. The standard propeller section characteristics given in Figs. 11, 12, 13, and 14 were obtained from high Reynolds-number tests in the Variable Density Tunnel of the NACA, and, fortunately, for all excepting the thickest of these sections there is very little difference in characteristics at high and low Reynolds numbers. These values may be used with reasonable accuracy as to scale for propellers operating at tip speeds well below the speed of sound in air, and therefore relatively free from any effects of compressibility.

The poor accuracy of the simple blade-element theory is very well shown in a report by Durand and Lesley, in which they have computed the performance of a large number of model propellers (80) and compared the computed values with the actual performances obtained from tests on the model propellers themselves.
In the words of the authors: "The divergencies between the two sets of results, while showing certain elements of consistency, are on the whole too large and too capriciously distributed to justify the use of the theory in this simplest form for other than approximate estimates or for comparative purposes." The airfoils were tested in two different wind tunnels, and in one of the tunnels at two different air velocities, and the propeller characteristics computed from the three sets of airfoil data differ by as much as 28%, illustrating quite forcibly the necessity for having the airfoil tests made at the correct scale.

In spite of all its inaccuracies the simple blade-element theory has been a useful tool in the hands of experienced propeller designers. With it a skilful designer having a knowledge of suitable empirical factors can design propellers which usually fit the main conditions imposed upon them fairly well, in that they absorb the engine power at very nearly the proper revolution speed. They are not, however, necessarily the most efficient propellers for their purpose, for the simple theory is not sufficiently accurate to show slight differences in efficiency due to changes in pitch distribution, plan forms, etc.

Example

In choosing a propeller to analyze, it is desirable that its aerodynamic characteristics be known so that the accuracy of the calculated results can be checked. It is also desirable that the analysis be made of a propeller operating at a relatively low tip speed, in order to be free from any effects of compressibility, and running free from body interference. The only propeller tests which satisfy all of these conditions are tests of model propellers in a wind tunnel. We shall therefore take for our example the central or master propeller of a series of model wood propellers of standard Navy form, tested by Dr. W. F. Durand at Stanford University. This is a two-bladed propeller 3 ft. in diameter, with a uniform geometrical pitch of 2.1 ft. (or a pitch-diameter ratio of 0.7). The blades have standard propeller sections based on the R.A.F-6 airfoil (Fig. 6), and the blade widths, thicknesses, and angles are as given in the first part of Table I. In our analysis we shall consider the propeller as advancing with a velocity of 40 m.p.h. and turning at the rate of 1,800 r.p.m.

For the section at 75% of the tip radius, the radius is 1.125 ft., the blade width is 0.198 ft., the thickness ratio is 0.107, the lower camber is zero, and the blade angle β is 16.6°. The forward velocity is V = 40 m.p.h. = 58.7 ft./sec., and the tangential velocity is 2πrn = 2π × 1.125 × 30 = 212.1 ft./sec. The path angle is therefore Φ = tan⁻¹(58.7/212.1) = 15.5°. The angle of attack is therefore α = β − Φ = 16.6° − 15.5° = 1.1°. From Fig. 7, for a flat-faced section of thickness ratio 0.107 at an angle of attack of 1.1°, γ = 3.0°, and, from Fig. 9, CL = 0.425. (For sections having lower camber, CL should be corrected in accordance with the relation given in Fig. 8, and γ is given the same value as that for a flat-faced section having the upper camber only.) Then

Tc = CL b cos(Φ + γ) / (sin²Φ cos γ) = 0.425 × 0.198 × cos 18.5° / (sin²15.5° × cos 3.0°) = 1.12,

and

Qc = Tc r tan(Φ + γ) = 1.12 × 1.125 × tan 18.5° = 0.42.

The computations of Tc and Qc for six representative elements of the propeller are given in convenient tabular form in Table I, and the values of Tc and Qc are plotted against radius in Fig. 9. The curves drawn through these points are sometimes referred to as the thrust and torque grading curves. The areas under the curves represent ∫ Tc dr and ∫ Qc dr, taken from hub to tip, these being the expressions for the total thrust and torque per blade per unit of dynamic pressure due to the velocity of advance.
The areas may be found by means of a planimeter, proper consideration, of course, being given to the scales of values, or the integration may be performed approximately (but with satisfactory accuracy) by means of Simpson's rule. In using Simpson's rule the radius is divided into an even number of equal parts, such as ten. The ordinate at each division can then be found from the grading curve. If the original blade elements divide the blade into an even number of equal parts it is not necessary to plot the grading curves, but the curves are advantageous in that they show graphically the distribution of thrust and torque along the blade. They also provide a check upon the computations, for incorrect points will not usually form a fair curve. If the common spacing of the divisions along the radius is denoted by Δr and the ordinates at the various divisions by y1, y2, ..., y11, according to Simpson's rule the area with ten equal divisions will be

A = (Δr/3) [y1 + 4(y2 + y4 + y6 + y8 + y10) + 2(y3 + y5 + y7 + y9) + y11].

The area under the thrust-grading curve of our example is found in this way, and the area under the torque-grading curve in like manner. The above integrations have also been made by means of a planimeter, and the average results from five trials agree with those obtained by means of Simpson's rule within one-fourth of one per cent. The thrust of the propeller in standard air then follows from T = B (ρ/2) V² ∫ Tc dr, and the torque from Q = B (ρ/2) V² ∫ Qc dr. The power absorbed by the propeller is 2πnQ (or 2πnQ/550 in horsepower), and the efficiency is η = TV / (2πnQ).

The above-calculated performance compares with that measured in the wind tunnel as follows: the power as calculated by the simple blade-element theory is in this case over 11% too low, the thrust is about 5% low, and the efficiency is about 8% high. Of course, a differently calculated performance would have been obtained if propeller-section characteristics from tests on the same series of airfoils in a different wind tunnel had been used, but the variable-density tunnel tests are probably the most reliable of all.

Some light may be thrown upon the discrepancy between the calculated and observed performance by referring again to the pressure distribution tests on a model propeller. In these tests the pressure distribution over several sections of a propeller blade was measured while the propeller was running in a wind tunnel, and three corresponding sets of airfoil tests were made. The results of these three sets of airfoil tests are shown for the section at three-fourths of the tip radius in Fig. 10, which has been taken from the report. It will be noticed that the coefficients of resultant force CR agree quite well for the median section of the airfoil of aspect ratio 6 and the corresponding section of the special propeller-blade airfoil, but that the resultant force coefficient for the entire airfoil of aspect ratio 6 is considerably lower. It is natural, then, that the calculated thrust and power of a propeller should be too low when based on airfoil characteristics for aspect ratio 6.

Modifications

Many modifications to the simple blade-element theory have been suggested in order to make it more complete and to improve its accuracy. Most of these modified theories attempt to take into account the blade interference, and, in some of them, attempts are also made to eliminate the inaccuracy due to the use of airfoil data from tests on wings having a finite aspect ratio, such as 6. The first modification to be made was in the nature of a combination of the simple Drzewiecki theory with the Froude momentum theory.
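The worked example above can be reproduced in a few lines of code. This is a sketch of the simple blade-element computation, with CL and γ passed in as assumed values (they would normally be read from the section charts of Figs. 7-9); Simpson's rule is included for the grading-curve integration, but the station values themselves are in Table I and are not reproduced here.

# Simple blade-element computation for one element, plus Simpson's rule
# for integrating the grading curves. Units: ft, ft/sec, rev/sec, degrees.
import math

def element_coeffs(V, n, r, b, beta_deg, CL, gamma_deg):
    phi = math.atan2(V, 2.0 * math.pi * r * n)   # path (helix) angle, rad
    alpha_deg = beta_deg - math.degrees(phi)     # angle of attack, for chart lookup
    gamma = math.radians(gamma_deg)
    Tc = CL * b * math.cos(phi + gamma) / (math.sin(phi) ** 2 * math.cos(gamma))
    Qc = Tc * r * math.tan(phi + gamma)
    return alpha_deg, Tc, Qc

def simpson(ys, dr):
    # Composite Simpson's rule; len(ys) must be odd (an even number of strips).
    return dr / 3.0 * (ys[0] + ys[-1] + 4.0 * sum(ys[1:-1:2]) + 2.0 * sum(ys[2:-1:2]))

# The 75%-radius element of the example propeller (V = 40 mph, 1,800 rpm):
alpha, Tc, Qc = element_coeffs(V=58.7, n=30.0, r=1.125, b=0.198,
                               beta_deg=16.6, CL=0.425, gamma_deg=3.0)
print(round(alpha, 1), round(Tc, 2), round(Qc, 2))  # ~1.1 deg, ~1.12, ~0.42

# With Tc tabulated at eleven stations spaced dr apart, the thrust per blade
# per unit dynamic pressure would be simpson(Tc_list, dr), as in the text.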
See also: Circulation (fluid dynamics), Computational fluid dynamics

External links: Blade Element Analysis for Propellers; Helicopter Theory - Blade Element Theory in Forward Flight, from Aerospaceweb.org; Blade element theory, Stefan Drzewiecki, 1903; QBlade: Open Source Blade Element Method Software, from H.F.I. TU Berlin; NASA-TM-102219: A survey of nonuniform inflow models for rotorcraft flight dynamics and control applications, by Robert Chen, NASA
Blade element theory
[ "Chemistry", "Engineering" ]
3,630
[ "Chemical engineering", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics" ]
496,264
https://en.wikipedia.org/wiki/Dialysis%20%28chemistry%29
In chemistry, dialysis is the process of separating molecules in solution by the difference in their rates of diffusion through a semipermeable membrane, such as dialysis tubing. Dialysis is a common laboratory technique that operates on the same principle as medical dialysis. In the context of life science research, the most common application of dialysis is the removal of unwanted small molecules such as salts, reducing agents, or dyes from larger macromolecules such as proteins, DNA, or polysaccharides. Dialysis is also commonly used for buffer exchange and drug binding studies.

The concept of dialysis was introduced in 1861 by the Scottish chemist Thomas Graham. He used this technique to separate sucrose (a small molecule) and gum arabic (a large molecule) in aqueous solution. He called the diffusible solutes crystalloids and those that would not pass the membrane colloids. From this concept, dialysis can be defined as a spontaneous separation process of suspended colloidal particles from dissolved ions or molecules of small dimensions through a semipermeable membrane. The most common dialysis membranes are made of cellulose, modified cellulose or synthetic polymers (cellulose acetate or nitrocellulose).

Etymology

Dialysis derives from the Greek διά (dia, "through") and λύσις (lysis, "to loosen").

Principles

Dialysis is the process used to change the matrix of molecules in a sample by differentiating molecules by size. It relies on diffusion, the random thermal movement of molecules in solution (Brownian motion) that leads to the net movement of molecules from an area of higher concentration to one of lower concentration until equilibrium is reached. Due to the pore size of the membrane, large molecules in the sample cannot pass through the membrane, thereby restricting their diffusion from the sample chamber. By contrast, small molecules will freely diffuse across the membrane and reach equilibrium across the entire solution volume, thereby changing the overall concentration of these molecules in the sample and dialysate.

Osmosis is another principle that makes dialysis work. During osmosis, fluid moves from areas of high water concentration to areas of lower water concentration across a semipermeable membrane until equilibrium. In dialysis, excess fluid moves from the sample to the dialysate through the membrane until the fluid level is the same between sample and dialysate. Finally, ultrafiltration is the convective flow of water and dissolved solutes down a pressure gradient caused by hydrostatic or osmotic forces. In dialysis, ultrafiltration removes waste molecules and excess fluid from the sample.

For example, dialysis occurs when a sample contained in a cellulose bag is immersed in a dialysate solution. During dialysis, equilibrium is achieved between the sample and dialysate, since only small molecules can pass the cellulose membrane, leaving the larger particles behind. Once equilibrium is reached, the final concentration of molecules depends on the volumes of the solutions involved, and if the equilibrated dialysate is replaced (or exchanged) with fresh dialysate (see the procedure below), diffusion will further reduce the concentration of the small molecules in the sample. Dialysis can be used to either introduce or remove small molecules from a sample, because small molecules move freely across the membrane in both directions. Dialysis can also be used to remove salts.
This makes dialysis a useful technique for a variety of applications. See dialysis tubing for additional information on the history, properties, and manufacturing of semipermeable membranes used for dialysis.

Types

Diffusion dialysis

Diffusion dialysis is a spontaneous separation process in which the driving force producing the separation is the concentration gradient. It involves an increase in entropy and a decrease in Gibbs free energy, which means that it is thermodynamically favorable. Diffusion dialysis uses anion exchange membranes (AEM) or cation exchange membranes (CEM), depending on the compounds to be separated. An AEM allows the passage of anions while obstructing the passage of cations, owing to co-ion rejection and the preservation of electrical neutrality. The opposite happens with cation exchange membranes.

Electrodialysis

Electrodialysis is a separation process which uses ion-exchange membranes and an electrical potential as the driving force. It is mainly used to remove ions from aqueous solutions. Three electrodialysis processes are commonly used: Donnan dialysis, reverse electrodialysis, and electro-electrodialysis. These processes are explained below.

Donnan dialysis

Donnan dialysis is a separation process used to exchange ions between two aqueous solutions separated by a CEM or an AEM. In the case of a cation exchange membrane separating two solutions of different acidity, protons (H+) go through the membrane to the less acidic side. This induces an electrical potential that instigates a flux of the cations present in the less acidic side toward the more acidic side. The process finishes when the variation in H+ concentration is of the same order of magnitude as the difference in concentration of the separated cation.

Reverse electrodialysis

Reverse electrodialysis is a membrane-based technology which generates electricity from the mixing of two water streams of different salinities. It commonly uses anion exchange membranes (AEM) and cation exchange membranes (CEM). AEMs allow the passage of anions and obstruct the passage of cations, and CEMs do the opposite. The cations and anions in the high-salinity water move to the low-salinity water, cations passing through the CEMs and anions through the AEMs. This ion flux can be converted to electricity.

Electro-electrodialysis

Electro-electrodialysis is an electromembrane process utilizing three compartments, which combines electrodialysis and electrolysis. It is commonly used to recover acid from a solution using an AEM, a CEM and electrolysis. The three compartments are separated by two barriers, which are the ion exchange membranes. The compartment in the middle holds the water to be treated; the compartments located on the sides contain clean water. The anions pass through the AEM, while the cations pass through the CEM. The electrolysis creates H+ on the anion side and OH− on the cation side, which react with the respective ions.

Procedure

Equipment

Separating molecules in a solution by dialysis is a relatively straightforward process. Other than the sample and dialysate buffer, all that is typically needed is: a dialysis membrane in an appropriate format (e.g., tubing, cassette, etc.)
and molecular weight cut-off (MWCO); a container to hold the dialysate buffer; and the ability to stir the solutions and control the temperature.

General protocol

A typical dialysis procedure for protein samples is as follows:
1. Prepare the membrane according to instructions.
2. Load the sample into the dialysis tubing, cassette or device.
3. Place the sample into an external chamber of dialysis buffer (with gentle stirring of the buffer).
4. Dialyze for 2 hours (at room temperature or 4 °C).
5. Change the dialysis buffer and dialyze for another 2 hours.
6. Change the dialysis buffer and dialyze for 2 hours or overnight.

The total volume of sample and dialysate determines the final equilibrium concentration of the small molecules on both sides of the membrane. By using the appropriate volume of dialysate and multiple exchanges of the buffer, the concentration of small contaminants within the sample can be decreased to acceptable or negligible levels. For example, when dialyzing 1 mL of sample against 200 mL of dialysate, the concentration of unwanted dialyzable substances will be decreased 200-fold when equilibrium is attained. Following two additional buffer changes of 200 mL each, the contaminant level in the sample will be reduced by a factor of 8 × 10⁶ (200 × 200 × 200); a short calculation of this dilution factor is sketched at the end of this entry.

Variables and protocol optimization

Although dialyzing a sample is relatively simple, a universal dialysis procedure for all applications cannot be given, because of the following variables: the sample volume; the size of the molecules being separated; the membrane used; and the geometry of the membrane, which affects the diffusion distance. Additionally, the dialysis endpoint is somewhat subjective and application specific. Therefore, the general procedure might require optimization.

Dialysis membranes and MWCO

Dialysis membranes are produced and characterized according to molecular-weight cutoff (MWCO) limits. While membranes with MWCOs ranging from about 1 kDa to 1,000 kDa are commercially available, membranes with MWCOs near 10 kDa are most commonly used. The MWCO of a membrane is the result of the number and average size of the pores created during production of the dialysis membrane. The MWCO typically refers to the smallest average molecular mass of a standard molecule that will not effectively diffuse across the membrane during extended dialysis. Thus, a dialysis membrane with a 10 kDa MWCO will generally retain greater than 90% of a protein having a molecular mass of at least 10 kDa.

It is important to note that the MWCO of a membrane is not a sharply defined value. Molecules with mass near the MWCO limit of the membrane will diffuse across the membrane more slowly than molecules significantly smaller than the MWCO. In order for a molecule to rapidly diffuse across a membrane, it typically needs to be at least 20 to 50 times smaller than the MWCO rating of the membrane. Therefore, it is not practical to separate a 30 kDa protein from a 10 kDa protein by dialysis across a 20 kDa rated membrane. Dialysis membranes for laboratory use are typically made of a film of regenerated cellulose or cellulose esters.

Laboratory dialysis formats

Dialysis is generally performed in clipped bags of dialysis tubing or in a variety of formatted dialyzers. The choice of the dialysis setup used is largely dependent on the size of the sample and the preference of the user. Dialysis tubing is the oldest and generally the least expensive format used for dialysis in the lab.
Tubing is cut and sealed with a clip at one end, then filled and sealed with a clip at the other end. Tubing provides flexibility but raises concerns regarding handling, sealing and sample recovery. Dialysis tubing is typically supplied either wet or dry, in rolls or pleated telescoped tubes. A wide variety of dialysis devices (or dialyzers) are available from several vendors. Dialyzers are designed for specific sample volume ranges and provide greater sample security and improved ease of use and performance over tubing. The most common preformatted dialyzers are the Slide-A-Lyzer, Float-A-Lyzer, and Pur-A-lyzer/D-Tube/GeBAflex product lines.

Applications

Dialysis has a wide range of applications. These can be divided into two categories depending on the type of dialysis used.

Diffusion dialysis

Some applications of diffusion dialysis are explained below.

Strong aqueous caustic soda solutions can be purified of hemicellulose by diffusion dialysis. This is specific to the largely obsolete viscose process. The first step in that process is to treat almost-pure cellulose (cotton linters or dissolving pulp) with strong (17–20% w/w) solutions of sodium hydroxide (caustic soda) in water. One effect of that step is to dissolve the hemicelluloses (low-MW polymers). In some circumstances it is desirable to remove as much hemicellulose as possible from the process, and that can be done using dialysis.

Acids can be recovered from aqueous solutions using anion-exchange membranes. This process is an alternative treatment for industrial wastewater. It is used for the recovery of mixed acid (HF + HNO3); the recovery and concentration of Zn2+ and Cu2+ from H2SO4 + ZnSO4 and H2SO4 + CuSO4 solutions; and the recovery of H2SO4 from waste sulphuric acid solutions containing Fe and Ni ions, which are produced in the diamond manufacturing process.

Alkali waste can be recovered using diffusion dialysis because of its low energy cost. The NaOH base can be recovered from aluminium etching solution by a technique developed by Astom Corporation of Japan.

De-alcoholisation of beer is another application of diffusion dialysis. Because a concentration gradient drives this technique, alcohol and other small-molecule compounds transfer across the membrane from the side of higher concentration to the side of lower concentration, which is water. Diffusion dialysis suits this application because of its mild operating conditions and its ability to reduce the alcohol content to 0.5%.

Electrodialysis

Some applications of electrodialysis are explained below.

The desalination of whey is the largest area of use for this type of dialysis in the food industry. Crude cheese whey contains calcium, phosphorus and other inorganic salts that must be removed before the whey can be used to produce foods such as cake, bread, ice cream and baby foods. The limit of whey demineralisation is almost 90%.

De-acidification of fruit juices such as grape, orange, apple and lemon is another process in which electrodialysis is applied. An anion-exchange membrane is employed in this technique, meaning that citrate ions from the juice are extracted and replaced by hydroxide ions.

Desalting of soy sauce can be done by electrodialysis. The conventional salt content of brewed soy sauce is about 16–18%, which is quite high. Electrodialysis is used to reduce the amount of salt present in the soy sauce, as low-salt diets are now very common.
Electrodialysis allows the separation of amino acids into acidic, basic and neutral groups. For example, cytoplasmic leaf proteins can be extracted from alfalfa leaves by applying electrodialysis; when the proteins are denatured, the solutions can be desalted (of K+ ions) and acidified with H+ ions.

Advantages and disadvantages

Dialysis has both advantages and disadvantages. Following the structure of the previous section, the pros and cons of both diffusion dialysis and electrodialysis are outlined below.

Diffusion dialysis

The main advantage of diffusion dialysis is the low energy consumption of the unit. This membrane technique operates at normal pressure and involves no change of state, so the energy required is significantly reduced, which lowers the operating cost. Installation costs are low, operation is easy, and the process is stable and reliable. Another advantage is that diffusion dialysis does not pollute the environment. A disadvantage is that a diffusion dialyser has low processing capacity and low processing efficiency; other methods, such as electrodialysis and reverse osmosis, can achieve better efficiencies.

Electrodialysis

The main benefit of electrodialysis is high recovery, especially of water. Another advantage is that no high pressure is applied, which means fouling is not significant and consequently no chemicals are required to fight it. Moreover, any fouling layer that does form is not compact, which leads to higher recovery and a long membrane life. It is also significant that feeds with concentrations higher than 70,000 ppm can be treated, eliminating the concentration limit. Finally, because there is no phase change, the energy required to operate is low; it is lower than that needed in the multi-effect distillation (MED) and mechanical vapour compression (MVC) processes.

The main drawback of electrodialysis is the limiting current density: the process must be operated at a lower current density than the maximum allowed. Above a certain applied voltage, the diffusion of ions through the membrane is no longer linear, leading to water dissociation, which reduces the efficiency of the operation. Another aspect to take into account is that although little energy is required to operate, the higher the salt concentration of the feed, the higher the energy needed. Finally, in the case of some products, it must be considered that electrodialysis does not remove microorganisms or organic contaminants, so a post-treatment is necessary.

See also: Electrodialysis, Haemodialysis, Microdialysis, Osmosis, Peritoneal dialysis, AutoAnalyzer
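The buffer-exchange arithmetic from the protocol section above can be captured in a few lines. This sketch assumes the contaminant passes the membrane freely and that full equilibrium is reached at every exchange.

# Dilution of a freely diffusing small-molecule contaminant by equilibrium
# dialysis with repeated buffer exchanges.
def residual_fraction(sample_ml, dialysate_ml, exchanges):
    # At each equilibrium the sample retains sample/(sample + dialysate)
    # of the contaminant present at the start of that exchange.
    per_exchange = sample_ml / (sample_ml + dialysate_ml)
    return per_exchange ** exchanges

# 1 mL of sample against 200 mL of buffer, exchanged three times:
f = residual_fraction(1.0, 200.0, 3)
print(f, 1.0 / f)  # ~1.2e-7, i.e. a reduction factor of ~8.1e6 (~200^3)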
Dialysis (chemistry)
[ "Chemistry", "Biology" ]
3,450
[ "Biochemistry methods", "Membrane technology", "Biochemistry", "Separation processes" ]
496,334
https://en.wikipedia.org/wiki/Sigma%20factor
A sigma factor (σ factor or specificity factor) is a protein needed for initiation of transcription in bacteria. It is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB. The specific sigma factor used to initiate transcription of a given gene varies, depending on the gene and on the environmental signals needed to initiate transcription of that gene. Selection of promoters by RNA polymerase is dependent on the sigma factor that associates with it. Sigma factors are also found in plant chloroplasts as part of the bacteria-like plastid-encoded polymerase (PEP).

The sigma factor, together with RNA polymerase, is known as the RNA polymerase holoenzyme. Every molecule of RNA polymerase holoenzyme contains exactly one sigma factor subunit, which in the model bacterium Escherichia coli is one of those listed below. The number of sigma factors varies between bacterial species; E. coli has seven. Sigma factors are distinguished by their characteristic molecular weights. For example, σ70 is the sigma factor with a molecular weight of 70 kDa. The sigma factor in the RNA polymerase holoenzyme complex is required for the initiation of transcription; once that stage is finished, it dissociates from the complex and the RNAP continues elongation on its own.

Specialized sigma factors

Different sigma factors are utilized under different environmental conditions. These specialized sigma factors bind the promoters of genes appropriate to the environmental conditions, increasing the transcription of those genes. The sigma factors in E. coli are:

σ70 (RpoD), also called σA – the "housekeeping" or primary sigma factor (Group 1), which transcribes most genes in growing cells. Every cell has a "housekeeping" sigma factor that keeps essential genes and pathways operating; in E. coli and other gram-negative rod-shaped bacteria, it is σ70. Genes recognized by σ70 all contain similar promoter consensus sequences consisting of two parts. Relative to the DNA base corresponding to the start of the RNA transcript, the consensus promoter sequences are characteristically centered at 10 and 35 nucleotides before the start of transcription (−10 and −35).

σ19 (FecI) – the ferric citrate sigma factor, which regulates the fec genes for iron transport and metabolism.

σ24 (RpoE) – the extreme heat stress response and extracellular proteins sigma factor.

σ28 (RpoF/FliA) – the flagellar synthesis and chemotaxis sigma factor.

σ32 (RpoH) – the heat shock sigma factor, turned on when the bacteria are exposed to heat. Due to its higher expression, the factor binds with high probability to the polymerase core enzyme; as a result, heat-shock proteins are expressed, which enable the cell to survive higher temperatures. Some of the enzymes expressed upon activation of σ32 are chaperones, proteases and DNA-repair enzymes.

σ38 (RpoS) – the starvation/stationary phase sigma factor.

σ54 (RpoN) – the nitrogen-limitation sigma factor.

There are also anti-sigma factors that inhibit the function of sigma factors, and anti-anti-sigma factors that restore sigma factor function.

Structure

By sequence similarity, most sigma factors are σ70-like. They have four main regions (domains) that are generally conserved, arranged from N-terminus to C-terminus as 1.1, 2, 3, 4. The regions are further subdivided.
For example, region 2 includes 1.2 and 2.1 through 2.4. Domain 1.1 is found only in "primary sigma factors" (RpoD, RpoS in E. coli; "Group 1"). It is involved in ensuring the sigma factor will only bind the promoter when it is complexed with the RNA polymerase. Domains 2–4 each interact with specific promoter elements and with RNAP. Region 2.4 recognizes and binds to the promoter −10 element (called the "Pribnow box"). Region 4.2 recognizes and binds to the promoter −35 element. Not every sigma factor of the σ70 family contains all the domains. Group 2, which includes RpoS, is very similar to Group 1 but lacks domain 1. Group 3 also lacks domain 1 and includes σ28. Group 4, also known as the extracytoplasmic function (ECF) group, lacks both σ1.1 and σ3; RpoE is a member. The other known sigma factors are of the σ54/RpoN type. They are functional sigma factors, but they have significantly different primary amino acid sequences.

Retention during transcription elongation

The core RNA polymerase (consisting of 2 alpha (α), 1 beta (β), 1 beta-prime (β′), and 1 omega (ω) subunits) binds a sigma factor to form a complex called the RNA polymerase holoenzyme. It was previously believed that the RNA polymerase holoenzyme initiates transcription, while the core RNA polymerase alone synthesizes RNA. Thus, the accepted view was that sigma factor must dissociate upon transition from transcription initiation to transcription elongation (this transition is called "promoter escape"). This view was based on analysis of purified complexes of RNA polymerase stalled at initiation and at elongation. Finally, structural models of RNA polymerase complexes predicted that, as the growing RNA product becomes longer than ~15 nucleotides, sigma must be "pushed out" of the holoenzyme, since there is a steric clash between RNA and a sigma domain. However, σ70 can remain attached in complex with the core RNA polymerase in early elongation, and sometimes throughout elongation. Indeed, the phenomenon of promoter-proximal pausing indicates that sigma plays roles during early elongation. All studies are consistent with the assumption that promoter escape reduces the lifetime of the sigma–core interaction from very long at initiation (too long to be measured in a typical biochemical experiment) to a shorter, measurable lifetime upon transition to elongation.

Sigma cycle

It had long been thought that the sigma factor obligatorily leaves the core enzyme once it has initiated transcription, allowing it to link to another core enzyme and initiate transcription at another site. Thus, the sigma factor would cycle from one core to another. However, fluorescence resonance energy transfer was used to show that the sigma factor does not obligatorily leave the core. Instead, it changes its binding with the core during initiation and elongation. Therefore, the sigma factor cycles between a strongly bound state during initiation and a weakly bound state during elongation.

Sigma factor competition

The number of RNAPs in bacterial cells (e.g., E. coli) has been shown to be smaller than the number of sigma factors. Consequently, if a certain sigma factor is overexpressed, it will not only increase the expression levels of genes whose promoters have preference for that sigma factor, but will also reduce the probability that genes with promoters preferring other sigma factors will be expressed. Meanwhile, transcription initiation has two major rate-limiting steps: closed complex formation and open complex formation.
However, only the dynamics of the first step depends on the concentration of sigma factors. Interestingly, the faster the closed complex formation is relative to the open complex formation, the less responsive a promoter is to changes in sigma factor concentration; models and empirical data of this phenomenon have been reported.

Genes with dual sigma factor preference

While most genes of E. coli can be recognized by an RNAP carrying one and only one type of sigma factor (e.g. sigma 70), a few genes (~5%) have what is called a "dual sigma factor preference"; that is, they can respond to two different sigma factors, as reported in RegulonDB. The most common are promoters that can respond to both sigma 70 and sigma 38. Studies of the dynamics of these genes showed that when the cells enter stationary growth they are almost as induced as those genes that have preference for σ38 alone, and this induction level was shown to be predictable from their promoter sequence. In the future, these promoters may become useful tools in synthetic genetic constructs in E. coli.

See also: Ekkehard Bautz
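The competition effect described above can be illustrated with a toy mass-action model: if core RNAP is limiting, the holoenzyme pool splits among the sigma factors roughly in proportion to concentration weighted by binding affinity. This is a simplified sketch, not a published model; all concentrations and affinities are invented for illustration.

# Toy model of sigma factor competition for a limiting pool of core RNAP.
def holoenzyme_shares(core_rnap, sigmas):
    # sigmas maps name -> (concentration, relative core-binding affinity).
    weights = {name: conc * aff for name, (conc, aff) in sigmas.items()}
    total = sum(weights.values())
    return {name: core_rnap * w / total for name, w in weights.items()}

sigmas = {"sigma70": (1.0, 1.0), "sigma38": (0.3, 0.6), "sigma32": (0.1, 0.8)}
print(holoenzyme_shares(core_rnap=100.0, sigmas=sigmas))

# Overexpressing sigma38 increases its share and necessarily decreases the
# holoenzyme available to sigma70- and sigma32-dependent promoters:
sigmas["sigma38"] = (3.0, 0.6)
print(holoenzyme_shares(core_rnap=100.0, sigmas=sigmas))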
Sigma factor
[ "Chemistry", "Biology" ]
1,792
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
497,407
https://en.wikipedia.org/wiki/Cohen%E2%80%93Macaulay%20ring
In mathematics, a Cohen–Macaulay ring is a commutative ring with some of the algebro-geometric properties of a smooth variety, such as local equidimensionality. Under mild assumptions, a local ring is Cohen–Macaulay exactly when it is a finitely generated free module over a regular local subring. Cohen–Macaulay rings play a central role in commutative algebra: they form a very broad class, and yet they are well understood in many ways. They are named for Francis Sowerby Macaulay, who proved the unmixedness theorem for polynomial rings, and for Irvin Cohen, who proved the unmixedness theorem for formal power series rings. All Cohen–Macaulay rings have the unmixedness property.

For Noetherian local rings, there is the following chain of inclusions: universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings.

Definition

For a commutative Noetherian local ring R, a finite (i.e. finitely generated) R-module M is a Cohen–Macaulay module if depth(M) = dim(M) (in general we have depth(M) ≤ dim(M); see the Auslander–Buchsbaum formula for the relation between the depth and the projective dimension of certain modules). On the other hand, R is a module over itself, so we call R a Cohen–Macaulay ring if it is a Cohen–Macaulay module as an R-module. A maximal Cohen–Macaulay module is a Cohen–Macaulay module M such that dim(M) = dim(R).

The above definition was for Noetherian local rings, but we can extend the definition to a more general Noetherian ring: if R is a commutative Noetherian ring, then an R-module M is called a Cohen–Macaulay module if the localization Mm is a Cohen–Macaulay module for all maximal ideals m in the support of M. (This is a kind of circular definition unless we define zero modules as Cohen–Macaulay, so we define zero modules to be Cohen–Macaulay in this definition.) Now, to define maximal Cohen–Macaulay modules for these rings, we require Mm to be a maximal Cohen–Macaulay Rm-module for each maximal ideal m of R. As in the local case, R is a Cohen–Macaulay ring if it is a Cohen–Macaulay module (as an R-module over itself).

Examples

Noetherian rings of the following types are Cohen–Macaulay.

Any regular local ring. This leads to various examples of Cohen–Macaulay rings, such as the integers Z, a polynomial ring over a field K, or a power series ring over K. In geometric terms, every regular scheme, for example a smooth variety over a field, is Cohen–Macaulay.

Any 0-dimensional ring (or equivalently, any Artinian ring).

Any 1-dimensional reduced ring, for example any 1-dimensional domain.

Any 2-dimensional normal ring.

Any Gorenstein ring. In particular, any complete intersection ring.

The ring of invariants R^G when R is a Cohen–Macaulay algebra over a field of characteristic zero and G is a finite group (or more generally, a linear algebraic group whose identity component is reductive). This is the Hochster–Roberts theorem.

Any determinantal ring. That is, let R be the quotient of a regular local ring S by the ideal I generated by the r × r minors of some p × q matrix of elements of S. If the codimension (or height) of I is equal to the "expected" codimension (p−r+1)(q−r+1), R is called a determinantal ring. In that case, R is Cohen–Macaulay. Similarly, coordinate rings of determinantal varieties are Cohen–Macaulay.

Some more examples:

The ring K[x]/(x²) has dimension 0 and hence is Cohen–Macaulay, but it is not reduced and therefore not regular.

The subring K[t², t³] of the polynomial ring K[t], or its localization or completion at t = 0, is a 1-dimensional domain which is Gorenstein, and hence Cohen–Macaulay, but not regular. This ring can also be described as the coordinate ring of the cuspidal cubic curve y² = x³ over K.
The subring K[t³, t⁴, t⁵] of the polynomial ring K[t], or its localization or completion at t = 0, is a 1-dimensional domain which is Cohen–Macaulay but not Gorenstein.

Rational singularities over a field of characteristic zero are Cohen–Macaulay. Toric varieties over any field are Cohen–Macaulay. The minimal model program makes prominent use of varieties with klt (Kawamata log terminal) singularities; in characteristic zero, these are rational singularities and hence are Cohen–Macaulay. One successful analog of rational singularities in positive characteristic is the notion of F-rational singularities; again, such singularities are Cohen–Macaulay.

Let X be a projective variety of dimension n ≥ 1 over a field, and let L be an ample line bundle on X. Then the section ring of L is Cohen–Macaulay if and only if the cohomology group Hi(X, Lj) is zero for all 1 ≤ i ≤ n−1 and all integers j. It follows, for example, that the affine cone Spec R over an abelian variety X is Cohen–Macaulay when X has dimension 1, but not when X has dimension at least 2 (because H1(X, O) is not zero). See also Generalized Cohen–Macaulay ring.

Cohen–Macaulay schemes

We say that a locally Noetherian scheme X is Cohen–Macaulay if at each point x ∈ X the local ring O_{X,x} is Cohen–Macaulay.

Cohen–Macaulay curves

Cohen–Macaulay curves are a special case of Cohen–Macaulay schemes, but are useful for compactifying moduli spaces of curves, where the boundary of the smooth locus consists of Cohen–Macaulay curves. There is a useful criterion for deciding whether or not curves are Cohen–Macaulay: schemes of dimension 1 are Cohen–Macaulay if and only if they have no embedded primes. The singularities present in Cohen–Macaulay curves can be classified completely by looking at the plane curve case.

Non-examples

Using the criterion, there are easy examples of non-Cohen–Macaulay curves, constructed as curves with embedded points. For example, the scheme Spec(K[x,y]/(x², xy)) has the ideal decomposition (x², xy) = (x) ∩ (x², y), with associated primes (x) and (x, y). Geometrically it is the y-axis with an embedded point at the origin, which can be thought of as a fat point. Given a smooth projective plane curve C with ideal I_C, a curve with an embedded point can be constructed using the same technique: find the ideal I_p of a point p in C and multiply it with the ideal I_C. Then V(I_C · I_p) is a curve with an embedded point at p.

Intersection theory

Cohen–Macaulay schemes have a special relation with intersection theory. Precisely, let X be a smooth variety and V, W closed subschemes of pure dimension. Let Z be a proper component of the scheme-theoretic intersection V ∩ W, that is, an irreducible component of expected dimension. If the local ring A of V ∩ W at the generic point of Z is Cohen–Macaulay, then the intersection multiplicity of V and W along Z is given as the length of A:

i(Z; V, W; X) = length(A).

In general, the fact that the multiplicity is given as a length essentially characterizes Cohen–Macaulay rings; see the Properties section below. The multiplicity one criterion, on the other hand, roughly characterizes a regular local ring as a local ring of multiplicity one.

Example

For a simple example, if we take the intersection of a parabola with a line tangent to it, the local ring at the intersection point is isomorphic to K[x]/(x²), which is Cohen–Macaulay of length two; hence the intersection multiplicity is two, as expected.

Miracle flatness or Hironaka's criterion

There is a remarkable characterization of Cohen–Macaulay rings, sometimes called miracle flatness or Hironaka's criterion. Let R be a local ring which is finitely generated as a module over some regular local ring A contained in R.
Miracle flatness or Hironaka's criterion There is a remarkable characterization of Cohen–Macaulay rings, sometimes called miracle flatness or Hironaka's criterion. Let R be a local ring which is finitely generated as a module over some regular local ring A contained in R. Such a subring exists for any localization R at a prime ideal of a finitely generated algebra over a field, by the Noether normalization lemma; it also exists when R is complete and contains a field, or when R is a complete domain. Then R is Cohen–Macaulay if and only if it is flat as an A-module; it is also equivalent to say that R is free as an A-module. A geometric reformulation is as follows. Let X be a connected affine scheme of finite type over a field K (for example, an affine variety). Let n be the dimension of X. By Noether normalization, there is a finite morphism f from X to affine space An over K. Then X is Cohen–Macaulay if and only if all fibers of f have the same degree. It is striking that this property is independent of the choice of f. Finally, there is a version of Miracle Flatness for graded rings. Let R be a finitely generated commutative graded algebra over a field K. There is always a graded polynomial subring A ⊂ R (with generators in various degrees) such that R is finitely generated as an A-module. Then R is Cohen–Macaulay if and only if R is free as a graded A-module. Again, it follows that this freeness is independent of the choice of the polynomial subring A. Properties A Noetherian local ring is Cohen–Macaulay if and only if its completion is Cohen–Macaulay. If R is a Cohen–Macaulay ring, then the polynomial ring R[x] and the power series ring R[[x]] are Cohen–Macaulay. For a non-zero-divisor u in the maximal ideal of a Noetherian local ring R, R is Cohen–Macaulay if and only if R/(u) is Cohen–Macaulay. The quotient of a Cohen–Macaulay ring by any ideal is universally catenary. If R is a quotient of a Cohen–Macaulay ring, then the locus { p ∈ Spec R | Rp is Cohen–Macaulay } is an open subset of Spec R. Let (R, m, k) be a Noetherian local ring of embedding codimension c, meaning that c = dimk(m/m²) − dim(R). In geometric terms, this holds for a local ring of a subscheme of codimension c in a regular scheme. For c=1, R is Cohen–Macaulay if and only if it is a hypersurface ring. There is also a structure theorem for Cohen–Macaulay rings of codimension 2, the Hilbert–Burch theorem: they are all determinantal rings, defined by the r × r minors of an (r+1) × r matrix for some r. For a Noetherian local ring (R, m), the following are equivalent: R is Cohen–Macaulay. For every parameter ideal Q (an ideal generated by a system of parameters), the length of R/Q equals e(Q), the Hilbert–Samuel multiplicity of Q. For some parameter ideal Q, the length of R/Q equals e(Q). (See Generalized Cohen–Macaulay ring as well as Buchsbaum ring for rings that generalize this characterization.) The unmixedness theorem An ideal I of a Noetherian ring A is called unmixed in height if the height of I is equal to the height of every associated prime P of A/I. (This is stronger than saying that A/I is equidimensional; see below.) The unmixedness theorem is said to hold for the ring A if every ideal I generated by a number of elements equal to its height is unmixed. A Noetherian ring is Cohen–Macaulay if and only if the unmixedness theorem holds for it. The unmixedness theorem applies in particular to the zero ideal (an ideal generated by zero elements) and thus it says a Cohen–Macaulay ring is an equidimensional ring; in fact, in the strong sense: there is no embedded component and each component has the same codimension. See also: quasi-unmixed ring (a ring in which the unmixedness theorem holds for the integral closure of an ideal). 
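Returning to the Miracle Flatness criterion above, here is a worked illustration using the cuspidal cubic from the examples (a standard computation, sketched from scratch rather than quoted):

```latex
% R = K[t^2, t^3] is finite over the regular subring A = K[t^2],
% and in fact free of rank 2:
\[ R \;=\; A \cdot 1 \;\oplus\; A \cdot t^{3}, \]
% since every monomial t^n with n even lies in A, and every t^n with
% n odd, n >= 3, equals t^{n-3} \cdot t^3 with t^{n-3} in A.
% Freeness over A confirms, via Miracle Flatness, that R is Cohen-Macaulay.
```

By contrast, the first ring in the counterexamples below is finite over K[y] but has fibers of different degrees, so it cannot be free, in line with the criterion.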
Counterexamples If K is a field, then the ring R = K[x,y]/(x², xy) (the coordinate ring of a line with an embedded point) is not Cohen–Macaulay. This follows, for example, by Miracle Flatness: R is finite over the polynomial ring A = K[y], with degree 1 over points of the affine line Spec A with y ≠ 0, but with degree 2 over the point y = 0 (because the K-vector space K[x]/(x²) has dimension 2). If K is a field, then the ring K[x,y,z]/(xy, xz) (the coordinate ring of the union of a line and a plane) is reduced, but not equidimensional, and hence not Cohen–Macaulay. Taking the quotient by the non-zero-divisor x−z gives the previous example. If K is a field, then the ring R = K[w,x,y,z]/(wy, wz, xy, xz) (the coordinate ring of the union of two planes meeting in a point) is reduced and equidimensional, but not Cohen–Macaulay. To prove that, one can use Hartshorne's connectedness theorem: if R is a Cohen–Macaulay local ring of dimension at least 2, then Spec R minus its closed point is connected. The Segre product of two Cohen–Macaulay rings need not be Cohen–Macaulay. Grothendieck duality One meaning of the Cohen–Macaulay condition can be seen in coherent duality theory. A variety or scheme X is Cohen–Macaulay if the "dualizing complex", which a priori lies in the derived category of sheaves on X, is represented by a single sheaf. The stronger property of being Gorenstein means that this sheaf is a line bundle. In particular, every regular scheme is Gorenstein. Thus the statements of duality theorems such as Serre duality or Grothendieck local duality for Gorenstein or Cohen–Macaulay schemes retain some of the simplicity of what happens for regular schemes or smooth varieties. Notes References Cohen's paper was written when "local ring" meant what is now called a "Noetherian local ring". External links Examples of Cohen-Macaulay integral domains Examples of Cohen-Macaulay rings See also Ring theory Local rings Gorenstein local rings Wiles's proof of Fermat's Last Theorem Algebraic geometry Commutative algebra
Cohen–Macaulay ring
[ "Mathematics" ]
3,082
[ "Fields of abstract algebra", "Commutative algebra", "Algebraic geometry" ]
497,413
https://en.wikipedia.org/wiki/Riemann%E2%80%93Hurwitz%20formula
In mathematics, the Riemann–Hurwitz formula, named after Bernhard Riemann and Adolf Hurwitz, describes the relationship of the Euler characteristics of two surfaces when one is a ramified covering of the other. It therefore connects ramification with algebraic topology, in this case. It is a prototype result for many others, and is often applied in the theory of Riemann surfaces (which is its origin) and algebraic curves. Statement For a compact, connected, orientable surface S, the Euler characteristic is χ(S) = 2 − 2g, where g is the genus (the number of handles). This follows, as the Betti numbers are 1, 2g, 1. For the case of an (unramified) covering map of surfaces π: S′ → S that is surjective and of degree N, we have the formula χ(S′) = N·χ(S). That is because each simplex of S should be covered by exactly N simplices in S′, at least if we use a fine enough triangulation of S, as we are entitled to do since the Euler characteristic is a topological invariant. What the Riemann–Hurwitz formula does is to add in a correction to allow for ramification (sheets coming together). Now assume that S and S′ are Riemann surfaces, and that the map π is complex analytic. The map π is said to be ramified at a point P in S′ if there exist analytic coordinates near P and π(P) such that π takes the form π(z) = zⁿ, and n > 1. An equivalent way of thinking about this is that there exists a small neighborhood U of P such that π(P) has exactly one preimage in U, but the image of any other point in U has exactly n preimages in U. The number n is called the ramification index at P and is denoted by eP. In calculating the Euler characteristic of S′ we notice the loss of eP − 1 copies of P above π(P) (that is, in the inverse image of π(P)). Now let us choose triangulations of S and S′ with vertices at the branch and ramification points, respectively, and use these to compute the Euler characteristics. Then S′ will have the same number of d-dimensional faces for d different from zero, but fewer than expected vertices. Therefore, we find a "corrected" formula χ(S′) = N·χ(S) − Σ (eP − 1), the sum running over all points P of S′, or as it is also commonly written, using that χ = 2 − 2g and multiplying through by −1: 2g(S′) − 2 = N·(2g(S) − 2) + Σ (eP − 1) (all but finitely many P have eP = 1, so this is quite safe). This formula is known as the Riemann–Hurwitz formula and also as Hurwitz's theorem. Another useful form of the formula is: χ(S′) = N·χ(S) − (N·b − b′), where b is the number of branch points in S (images of ramification points) and b′ is the size of the union of the fibers of branch points (this contains all ramification points and perhaps some non-ramified points). Indeed, to obtain this formula, remove disjoint disc neighborhoods of the branch points from S and their preimages in S′, so that the restriction of π is a covering. Removing a disc from a surface lowers its Euler characteristic by 1 by the formula for connected sum, so we finish by the formula for a non-ramified covering. We can also see that this formula is equivalent to the usual form, as we have Σ (eP − 1) = N·b − b′, since for any branch point y in S we have Σ eP = N, the sum running over the fiber above y. Examples The Weierstrass ℘-function, considered as a meromorphic function with values in the Riemann sphere, yields a map from an elliptic curve (genus 1) to the projective line (genus 0). It is a double cover (N = 2), with ramification at four points only, at which e = 2. The Riemann–Hurwitz formula then reads 0 = 2·2 − Σ (eP − 1) = 4 − 4, with the summation taken over the four ramification points. The formula may also be used to calculate the genus of hyperelliptic curves. 
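For instance (a standard computation, sketched here): a hyperelliptic curve C is a double cover of the projective line, N = 2, branched at some number b of points, each with ramification index 2, so the formula determines the genus from the branch count:

```latex
% Riemann-Hurwitz for a hyperelliptic double cover C -> P^1 (N = 2),
% branched at b points, each with ramification index e_P = 2:
\[ 2 - 2g \;=\; 2\,(2 - 2\cdot 0) \;-\; b\,(2 - 1), \]
% so the genus is determined by the branch count:
\[ g \;=\; \frac{b}{2} - 1, \]
% e.g. b = 6 branch points give a genus-2 curve; b is necessarily even.
```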
As another example, the Riemann sphere maps to itself by the function zⁿ, which has ramification index n at 0, for any integer n > 1. There can only be other ramification at the point at infinity. In order to balance the equation 2 = n·2 − (n − 1) − (e∞ − 1), we must have ramification index n at infinity, also. Consequences Several results in algebraic topology and complex analysis follow. Firstly, there are no ramified covering maps from a curve of lower genus to a curve of higher genus – and thus, since non-constant meromorphic maps of curves are ramified covering spaces, there are no non-constant meromorphic maps from a curve of lower genus to a curve of higher genus. As another example, it shows immediately that a curve of genus 0 has no cover with N > 1 that is unramified everywhere, because that would give rise to an Euler characteristic greater than 2. Generalizations For a correspondence of curves, there is a more general formula, Zeuthen's theorem, which gives the ramification correction to the first approximation that the Euler characteristics are in the inverse ratio to the degrees of the correspondence. An orbifold covering of degree N between orbifold surfaces S′ and S is a branched covering, so the Riemann–Hurwitz formula implies the usual formula for coverings, χ(S′) = N·χ(S), denoting with χ the orbifold Euler characteristic. References , section IV.2. Algebraic topology Algebraic curves Riemann surfaces
Riemann–Hurwitz formula
[ "Mathematics" ]
1,070
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
497,481
https://en.wikipedia.org/wiki/Hormesis
Hormesis is a two-phased dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. Within the hormetic zone, the biological response to low-dose amounts of some stressors is generally favorable. An example is the breathing of oxygen, which is required in low amounts (in air) via respiration in living animals, but can be toxic in high amounts, even in a managed clinical setting. In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors. In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. In pharmacology, the hormetic zone is similar to the therapeutic window. In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood. Etymology The term "hormesis" derives from Greek hórmēsis, "rapid motion, eagerness", itself from an ancient Greek verb meaning "to excite". The same Greek root provides the word hormone. The term "hormetics" is used for the study of hormesis. The word hormesis was first reported in English in 1943. History A form of hormesis famous in antiquity was Mithridatism, the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac, polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance, the Swiss doctor Paracelsus said, "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison." German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt, who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule. Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology, volume 33, pp. 517–541. In 2004, Edward Calabrese evaluated the concept of hormesis. Over 600 substances show a U-shaped dose–response relationship; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]" 
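The biphasic dose-response shape discussed above can be illustrated with a toy model. The functional form and every parameter value below are illustrative assumptions only, not fitted to any study:

```python
import numpy as np

def hormetic_response(dose, benefit=1.0, k_b=0.5, toxicity=2.0, k_t=5.0):
    """Toy biphasic model: a saturating low-dose benefit minus a toxic
    term that dominates at high dose, giving an inverted-U curve."""
    stimulation = benefit * dose / (k_b + dose)
    inhibition = toxicity * dose**2 / (k_t**2 + dose**2)
    return stimulation - inhibition

dose = np.logspace(-2, 2, 200)          # doses spanning four decades
response = hormetic_response(dose)
# The net response is positive in a low-dose "hormetic zone" and turns
# negative (net harm) as the dose grows, mimicking the U/J-shaped data.
```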
Examples Carbon monoxide Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter). The majority of endogenous carbon monoxide is produced by heme oxygenase; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism. In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent. Regarding the hormetic curve graph: Deficiency zone: an absence of carbon monoxide signaling has toxic implications Hormetic zone / region of homeostasis: small amount of carbon monoxide has a positive effect: essential as a neurotransmitter beneficial as a pharmaceutical Toxicity zone: excessive exposure results in carbon monoxide poisoning Oxygen Many organisms maintain a hormesis relationship with oxygen, which follows a hormetic curve similar to carbon monoxide: Deficiency zone: hypoxia / asphyxia Hormetic zone / region of homeostasis Toxicity zone: oxidative stress Physical exercise Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk. Mitohormesis The possible effect of small amounts of oxidative stress is under laboratory research. Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been regarded as unwanted byproducts of oxidative phosphorylation in mitochondria by the proponents of the free-radical theory of aging promoted by Denham Harman. The free-radical theory states that compounds inactivating ROS would lead to a reduction of oxidative stress and thereby produce an increase in lifespan, although support for this theory comes only from basic research. However, in over 19 clinical trials, "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span." Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene, vitamin A or vitamin E may increase disease prevalence in humans. More recent studies have reported that rapamycin exhibits hormesis, where low doses can enhance cellular longevity by partially inhibiting mTOR, unlike higher doses that are toxic due to complete inhibition. This partial inhibition of mTOR (by the hormetic effect of low-dose rapamycin) modulates mTOR–mitochondria cross-talk, thereby demonstrating mitohormesis, and consequently reducing oxidative damage, metabolic dysregulation, and mitochondrial dysfunction, thus slowing cellular aging. Alcohol Alcohol is believed to be hormetic in preventing heart disease and stroke, although the benefits of light drinking may have been exaggerated. The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome; therefore, whether the benefits of alcohol derive from the behavior of consuming alcoholic drinks or arise as a homeostasis factor in normal physiology via metabolites from commensal microbiota remains unclear. In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans, a roundworm frequently used in biological studies, that were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. However, worms exposed to 0.005% did not develop normally (their development was arrested). 
The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet. Methylmercury In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury, a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville, stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection, and mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds. Radiation Ionizing radiation Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average. In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year, and a subset of the population (1,000 people) received a total dose over 4,000 mSv over ten years. In the widely used linear no-threshold model used by regulatory bodies, the expected cancer deaths in this population would have been 302, with 70 caused by the extra ionizing radiation and the remainder caused by natural background radiation. The observed cancer rate, though, was quite low at 7 cancer deaths when 232 would be predicted by the LNT model had they not been exposed to the radiation from the building materials. Ionizing radiation hormesis appears to be at work. Chemical and ionizing radiation combined No experiment can be performed in perfect isolation. Thick lead shielding around a chemical dose experiment to rule out the effects of ionizing radiation can be built and rigorously controlled for in the laboratory, but certainly not in the field. Likewise, the same applies for ionizing radiation studies. Ionizing radiation is released when an unstable nucleus decays, creating two new substances and energy in the form of an electromagnetic wave. The resulting materials are then free to interact with any environmental elements, and the energy released can also act as a catalyst in further ionizing radiation interactions. The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour. Nucleotide excision repair Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. The DNA damaging (genotoxic) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure. 
Applications Effects in aging One of the areas where the concept of hormesis has been explored extensively with respect to its applicability is aging. Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists proposed that exposing cells and organisms to mild stress should result in an adaptive or hormetic response with various biological benefits. This idea has preliminary evidence showing that repetitive mild stress exposure may have anti-aging effects in laboratory models. Some mild stresses used for such studies on the application of hormesis in aging research and interventions are heat shock, irradiation, prooxidants, hypergravity, and food restriction. Compounds that may modulate stress responses in cells have been termed "hormetins". Controversy Hormesis suggests that dangerous substances may have benefits at low doses. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulations of some well-known toxic substances in the US. Radiation controversy The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation. This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans. Nonetheless, many countries including the Czech Republic, Germany, Austria, Poland, and the United States have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, or a beneficial impact of small doses of radiation on human health. Countries such as Germany and Austria have at the same time imposed very strict antinuclear regulations, which has been described as a radiophobic inconsistency. The United States National Research Council (part of the National Academy of Sciences), the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses. The United States–based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and radiation protection authorities should continue to apply the LNT model for purposes of risk estimation. A 2005 report commissioned by the French National Academy concluded that evidence for hormesis occurring at low doses is sufficient and that LNT should be reconsidered as the methodology used to estimate risks from low-level sources of radiation, such as deep geological repositories for nuclear waste. Policy consequences Hormesis remains largely unknown to the public; taking it into account would require a policy change in how the exposure risk of small doses of a possible toxin is assessed. See also Calorie restriction Michael Ristow Petkau effect Radiation hormesis Stochastic resonance Mithridatism Antifragility Xenohormesis References External links International Dose-Response Society Clinical pharmacology Radiobiology Toxicology Health paradoxes
Hormesis
[ "Chemistry", "Biology", "Environmental_science" ]
2,773
[ "Pharmacology", "Toxicology", "Radiobiology", "Clinical pharmacology", "Radioactivity" ]
497,535
https://en.wikipedia.org/wiki/Turbo%20code
In information theory, turbo codes are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely approach the maximum channel capacity or Shannon limit, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications (e.g., in UMTS and LTE) and in (deep space) satellite communications as well as other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes compete with low-density parity-check (LDPC) codes, which provide similar performance. Until the patent for turbo codes expired, the patent-free status of LDPC codes was an important factor in LDPC's continued relevance. The name "turbo code" arose from the feedback loop used during normal turbo code decoding, which was analogized to the exhaust feedback used for engine turbocharging. Hagenauer has argued the term turbo code is a misnomer since there is no feedback involved in the encoding process. History The fundamental patent application for turbo codes was filed on 23 April 1991. The patent application lists Claude Berrou as the sole inventor of turbo codes. The patent filing resulted in several patents including US Patent 5,446,747, which expired 29 August 2013. The first public paper on turbo codes was "Near Shannon Limit Error-correcting Coding and Decoding: Turbo-codes". This paper was published in 1993 in the Proceedings of IEEE International Communications Conference. The 1993 paper was formed from three separate submissions that were combined due to space constraints. The merger caused the paper to list three authors: Berrou, Glavieux, and Thitimajshima (from Télécom Bretagne, former ENST Bretagne, France). However, it is clear from the original patent filing that Berrou is the sole inventor of turbo codes and that the other authors of the paper contributed material other than the core concepts. Turbo codes were so revolutionary at the time of their introduction that many experts in the field of coding did not believe the reported results. When the performance was confirmed, a small revolution in the world of coding took place that led to the investigation of many other types of iterative signal processing. The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since the introduction of the original parallel turbo codes in 1993, many other classes of turbo code have been discovered, including serial concatenated convolutional codes and repeat-accumulate codes. Iterative turbo decoding methods have also been applied to more conventional FEC systems, including Reed–Solomon corrected convolutional codes, although these systems are too complex for practical implementations of iterative decoders. Turbo equalization also flowed from the concept of turbo coding. In addition to turbo codes, Berrou also invented recursive systematic convolutional (RSC) codes, which are used in the example implementation of turbo codes described in the patent. Turbo codes that use RSC codes seem to perform better than turbo codes that do not use RSC codes. 
Prior to turbo codes, the best constructions were serial concatenated codes based on an outer Reed–Solomon error correction code combined with an inner Viterbi-decoded short constraint length convolutional code, also known as RSV codes. In a later paper, Berrou gave credit to the intuition of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the interest of probabilistic processing." He adds "R. Gallager and M. Tanner had already imagined coding and decoding techniques whose general principles are closely related," although the necessary calculations were impractical at that time. An example encoder There are many different instances of turbo codes, using different component encoders, input/output ratios, interleavers, and puncturing patterns. This example encoder implementation describes a classic turbo encoder, and demonstrates the general design of parallel turbo codes. This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block is n/2 parity bits for the payload data, computed using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity bits for a known permutation of the payload data, again computed using an RSC code. Thus, two redundant but different sub-blocks of parity bits are sent with the payload. The complete block has m + n bits of data with a code rate of m/(m + n). The permutation of the payload data is carried out by a device called an interleaver. Hardware-wise, this turbo code encoder consists of two identical RSC coders, C1 and C2, as depicted in the figure, which are connected to each other using a concatenation scheme, called parallel concatenation: In the figure, M is a memory register. The delay line and interleaver force input bits dk to appear in different sequences. At first iteration, the input sequence dk appears at both outputs of the encoder, xk and y1k or y2k, due to the encoder's systematic nature. If the encoders C1 and C2 are used in n1 and n2 iterations, their rates are respectively equal to R1 = (n1 + n2)/(2n1 + n2) and R2 = (n1 + n2)/(n1 + 2n2). 
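A minimal software sketch of the parallel concatenation just described. The memory-2 RSC component code (feedback 1 + D + D², feedforward 1 + D²) and the random interleaver are illustrative assumptions, not the specific hardware of the patent or of any deployed standard:

```python
import random

def rsc_parity(bits):
    """Parity stream of a recursive systematic convolutional (RSC) code
    with feedback polynomial 1 + D + D^2 and feedforward 1 + D^2."""
    s1 = s2 = 0                       # two-bit shift-register state
    out = []
    for d in bits:
        a = d ^ s1 ^ s2               # recursive (feedback) bit
        out.append(a ^ s2)            # feedforward combination
        s1, s2 = a, s1                # shift the registers
    return out

def turbo_encode(payload, perm):
    """Parallel concatenation: systematic bits x, parity y1 of the
    payload, and parity y2 of the interleaved payload."""
    y1 = rsc_parity(payload)
    y2 = rsc_parity([payload[i] for i in perm])
    return payload, y1, y2            # rate 1/3 before any puncturing

payload = [random.randint(0, 1) for _ in range(8)]
perm = random.sample(range(len(payload)), len(payload))  # toy interleaver
x, y1, y2 = turbo_encode(payload, perm)
```

Without puncturing, each component contributes one parity bit per payload bit, so n = 2m and the rate is 1/3; practical systems puncture the parity streams to reach higher rates.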
The decoder The decoder is built in a similar way to the above encoder. Two elementary decoders are interconnected to each other, but in series, not in parallel. The first decoder, DEC1, operates at a lower speed (i.e., R1); thus, it is intended for the C1 encoder, and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes an L1 delay. The same delay is caused by the delay line in the encoder. DEC2's operation causes an L2 delay. An interleaver installed between the two decoders is used here to scatter error bursts coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both decoder inputs with padding bits (zeros). Consider a memoryless AWGN channel, and assume that at the k-th iteration, the decoder receives a pair of random variables xk = (2dk − 1) + ak and yk = (2Yk − 1) + bk, where ak and bk are independent noise components having the same variance σ², and Yk is the k-th bit from the encoder output. Redundant information is demultiplexed and sent through DI to DEC1 (when yk = y1k) and to DEC2 (when yk = y2k). DEC1 yields a soft decision, i.e. Λ(dk) = log(p(dk = 1)/p(dk = 0)), and delivers it to DEC2. Λ(dk) is called the logarithm of the likelihood ratio (LLR). p(dk = i), i ∈ {0, 1}, is the a posteriori probability (APP) of the data bit dk, which shows the probability of interpreting a received bit as i. Taking the LLR into account, DEC2 yields a hard decision, i.e., a decoded bit. It is known that the Viterbi algorithm is unable to calculate the APP, thus it cannot be used in DEC1. Instead of that, a modified BCJR algorithm is used. For DEC2, the Viterbi algorithm is an appropriate one. However, the depicted structure is not an optimal one, because DEC1 uses only a proper fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line on the figure). Soft decision approach The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1 and is also called a soft bit. The integer could be drawn from the range [−127, 127], where: −127 means "certainly 0" −100 means "very likely 0" 0 means "it could be either 0 or 1" 100 means "very likely 1" 127 means "certainly 1" This introduces a probabilistic aspect to the data-stream from the front end, but it conveys more information about each bit than just 0 or 1. For example, for each bit, the front end of a traditional wireless-receiver has to decide if an internal analog voltage is above or below a given threshold voltage level. For a turbo code decoder, the front end would provide an integer measure of how far the internal voltage is from the given threshold. To decode the (m + n)-bit block of data, the decoder front-end creates a block of m + n likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the two n/2-bit parity sub-blocks. Both decoders use the sub-block of m likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block. Solving hypotheses to find bits The key innovation of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of m bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload. Then they compare these new hypotheses. This iterative process continues until the two decoders come up with the same hypothesis for the m-bit pattern of the payload, typically in 15 to 18 cycles. An analogy can be drawn between this process and that of solving cross-reference puzzles like crossword or sudoku. Consider a partially completed, possibly garbled crossword puzzle. Two puzzle solvers (decoders) are trying to solve it: one possessing only the "down" clues (parity bits), and the other possessing only the "across" clues. To start, both solvers guess the answers (hypotheses) to their own clues, noting down how confident they are in each letter (payload bit). Then, they compare notes, by exchanging answers and confidence ratings with each other, noticing where and how they differ. Based on this new knowledge, they both come up with updated answers and confidence ratings, repeating the whole process until they converge to the same solution. 
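To make the soft-decision front end described above concrete, here is a hedged sketch for BPSK over an AWGN channel; the scaling factor used to map LLRs into the integer soft-bit range is an arbitrary choice for illustration:

```python
import numpy as np

def llr_bpsk_awgn(y, sigma2):
    """Log-likelihood ratio log p(bit=1|y)/p(bit=0|y) for BPSK
    (bit 1 -> +1, bit 0 -> -1) over AWGN with noise variance sigma2.
    The two Gaussian likelihoods reduce to the closed form 2*y/sigma2."""
    return 2.0 * y / sigma2

def to_soft_bits(llr, scale=8.0):
    """Quantize LLRs to integer soft bits in [-127, 127]."""
    return np.clip(np.round(scale * llr), -127, 127).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 16)
received = (2 * bits - 1) + rng.normal(0.0, 0.8, 16)  # noisy BPSK samples
soft = to_soft_bits(llr_bpsk_awgn(received, 0.8**2))
# Large positive soft values mean "very likely 1"; large negative, "very likely 0".
```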
Performance Turbo codes perform well due to the attractive combination of the code's random appearance on the channel together with the physically realisable decoding structure. Turbo codes are affected by an error floor. Practical applications using turbo codes Telecommunications: Turbo codes are used extensively in 3G and 4G mobile telephony standards; e.g., in HSPA, EV-DO and LTE. MediaFLO, terrestrial mobile television system from Qualcomm. The interaction channel of satellite communication systems, such as DVB-RCS and DVB-RCS2. Recent NASA missions such as Mars Reconnaissance Orbiter use turbo codes as an alternative to concatenated Reed–Solomon/Viterbi codes. IEEE 802.16 (WiMAX), a wireless metropolitan network standard, uses block turbo coding and convolutional turbo coding. Bayesian formulation From an artificial intelligence viewpoint, turbo codes can be considered as an instance of loopy belief propagation in Bayesian networks. See also BCJR algorithm Convolutional code Forward error correction Interleaver Low-density parity-check code Serial concatenated convolutional codes Soft-decision decoding Turbo equalizer Viterbi algorithm References Further reading Publications External links "The UMTS Turbo Code and an Efficient Decoder Implementation Suitable for Software-Defined Radios" (International Journal of Wireless Information Networks) "Pushing the Limit", a Science News feature about the development and genesis of turbo codes International Symposium On Turbo Codes Coded Modulation Library, an open source library for simulating turbo codes in matlab "Turbo Equalization: Principles and New Results", an IEEE Transactions on Communications article about using convolutional codes jointly with channel equalization. IT++ Home Page The IT++ is a powerful C++ library which in particular supports turbo codes Turbo codes publications by David MacKay AFF3CT Home Page (A Fast Forward Error Correction Toolbox) for high speed turbo codes simulations in software 3GPP LTE Turbo Reference Design. Estimate Turbo Code BER Performance in AWGN (MatLab). Parallel Concatenated Convolutional Coding: Turbo Codes (MatLab Simulink) Error detection and correction Capacity-approaching codes French inventions
Turbo code
[ "Engineering" ]
2,651
[ "Error detection and correction", "Reliability engineering" ]
26,860,351
https://en.wikipedia.org/wiki/Standard-Model%20Extension
Standard-Model Extension (SME) is an effective field theory that contains the Standard Model, general relativity, and all possible operators that break Lorentz symmetry. Violations of this fundamental symmetry can be studied within this general framework. CPT violation implies the breaking of Lorentz symmetry, and the SME includes operators that both break and preserve CPT symmetry. Development In 1989, Alan Kostelecký and Stuart Samuel proved that interactions in string theories could lead to the spontaneous breaking of Lorentz symmetry. Later studies have indicated that loop-quantum gravity, non-commutative field theories, brane-world scenarios, and random dynamics models also involve the breakdown of Lorentz invariance. Interest in Lorentz violation has grown rapidly in the last decades because it can arise in these and other candidate theories for quantum gravity. In the early 1990s, it was shown in the context of bosonic superstrings that string interactions can also spontaneously break CPT symmetry. This work suggested that experiments with kaon interferometry would be promising for seeking possible signals of CPT violation due to their high sensitivity. The SME was conceived to facilitate experimental investigations of Lorentz and CPT symmetry, given the theoretical motivation for violation of these symmetries. An initial step, in 1995, was the introduction of effective interactions. Although Lorentz-breaking interactions are motivated by constructs such as string theory, the low-energy effective action appearing in the SME is independent of the underlying theory. Each term in the effective theory involves the expectation of a tensor field in the underlying theory. These coefficients are small due to Planck-scale suppression, and in principle are measurable in experiments. The first case considered the mixing of neutral mesons, because their interferometric nature makes them highly sensitive to suppressed effects. In 1997 and 1998, two papers by Don Colladay and Alan Kostelecký gave birth to the minimal SME in flat spacetime. This provided a framework for Lorentz violation across the spectrum of standard-model particles, and provided information about types of signals for potential new experimental searches. In 2004, the leading Lorentz-breaking terms in curved spacetimes were published, thereby completing the picture for the minimal SME. In 1999, Sidney Coleman and Sheldon Glashow presented a special isotropic limit of the SME. Higher-order Lorentz violating terms have been studied in various contexts, including electrodynamics. Lorentz transformations: observer vs. particle The distinction between particle and observer transformations is essential to understanding Lorentz violation in physics because Lorentz violation implies a measurable difference between two systems differing only by a particle Lorentz transformation. In special relativity, observer Lorentz transformations relate measurements made in reference frames with differing velocities and orientations. The coordinates in the one system are related to those in the other by an observer Lorentz transformation—a rotation, a boost, or a combination of both. Each observer will agree on the laws of physics, since this transformation is simply a change of coordinates. On the other hand, identical experiments can be rotated or boosted relative to each other, while being studied by the same inertial observer. 
These transformations are called particle transformations, because the matter and fields of the experiment are physically transformed into the new configuration. In a conventional vacuum, observer and particle transformations can be related to each other in a simple way—basically one is the inverse of the other. This apparent equivalence is often expressed using the terminology of active and passive transformations. The equivalence fails in Lorentz-violating theories, however, because fixed background fields are the source of the symmetry breaking. These background fields are tensor-like quantities, creating preferred directions and boost-dependent effects. The fields extend over all space and time, and are essentially frozen. When an experiment sensitive to one of the background fields is rotated or boosted, i.e. particle transformed, the background fields remain unchanged, and measurable effects are possible. Observer Lorentz symmetry is expected for all theories, including Lorentz violating ones, since a change in the coordinates cannot affect the physics. This invariance is implemented in field theories by writing a scalar lagrangian, with properly contracted spacetime indices. Particle Lorentz breaking enters if the theory includes fixed SME background fields filling the universe. Building the SME The SME can be expressed as a Lagrangian with various terms. Each Lorentz-violating term is an observer scalar constructed by contracting standard field operators with controlling coefficients called coefficients for Lorentz violation. These are not parameters, but rather predictions of the theory, since they can in principle be measured by appropriate experiments. The coefficients are expected to be small because of the Planck-scale suppression, so perturbative methods are appropriate. In some cases, other suppression mechanisms could mask large Lorentz violations. For instance, large violations that may exist in gravity could have gone undetected so far because of couplings with weak gravitational fields. Stability and causality of the theory have been studied in detail. Spontaneous Lorentz symmetry breaking In field theory, there are two possible ways to implement the breaking of a symmetry: explicit and spontaneous. A key result in the formal theory of Lorentz violation, published by Kostelecký in 2004, is that explicit Lorentz violation leads to incompatibility of the Bianchi identities with the covariant conservation laws for the energy–momentum and spin-density tensors, whereas spontaneous Lorentz breaking evades this difficulty. This theorem requires that any breaking of Lorentz symmetry must be dynamical. Formal studies of the possible causes of the breakdown of Lorentz symmetry include investigations of the fate of the expected Nambu–Goldstone modes. Goldstone's theorem implies that the spontaneous breaking must be accompanied by massless bosons. These modes might be identified with the photon, the graviton, spin-dependent interactions, and spin-independent interactions. Experimental searches The possible signals of Lorentz violation in any experiment can be calculated from the SME. It has therefore proven to be a remarkable tool in the search for Lorentz violation across the landscape of experimental physics. Up until the present, experimental results have taken the form of upper bounds on the SME coefficients. Since the results will be numerically different for different inertial reference frames, the standard frame adopted for reporting results is the Sun-centered frame. This frame is a practical and appropriate choice, since it is accessible and inertial on the time scale of hundreds of years. Typical experiments seek couplings between the background fields and various particle properties such as spin, or propagation direction. One of the key signals of Lorentz violation arises because experiments on Earth are unavoidably rotating and revolving relative to the Sun-centered frame. These motions lead to both annual and sidereal variations of the measured coefficients for Lorentz violation. Since the translational motion of the Earth around the Sun is nonrelativistic, annual variations are typically suppressed by a factor of 10⁻⁴. This makes sidereal variations the leading time-dependent effect to look for in experimental data. 
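The geometry behind those sidereal variations can be sketched in a few lines. Everything below is an illustrative assumption (made-up coefficient magnitudes, a hypothetical laboratory colatitude); it shows only the generic projection of a fixed Sun-frame background vector onto a rotating laboratory axis, not any specific SME observable:

```python
import numpy as np

# Hypothetical constant background vector in the Sun-centered frame (X, Y, Z);
# the magnitudes are invented purely for illustration.
b_sun = np.array([4e-31, 1e-31, 2e-31])

chi = np.radians(48.0)               # assumed colatitude of the laboratory
omega = 2 * np.pi / 86164.1          # Earth's sidereal angular frequency (rad/s)

def b_lab_z(t):
    """Projection of b_sun onto the lab zenith axis as the Earth rotates:
    the Z component contributes a constant, while the X and Y components
    are modulated at the sidereal frequency."""
    bX, bY, bZ = b_sun
    return (np.sin(chi) * (bX * np.cos(omega * t) + bY * np.sin(omega * t))
            + np.cos(chi) * bZ)

t = np.linspace(0.0, 2 * 86164.1, 1000)   # two sidereal days
signal = b_lab_z(t)                        # oscillates once per sidereal day
```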
Measurements of SME coefficients have been done with experiments involving: birefringence and dispersion from cosmological sources clock-comparison measurements CMB polarization collider experiments electromagnetic resonant cavities equivalence principle gauge and Higgs particles high-energy astrophysical observations laboratory and gravimetric tests of gravity matter interferometry neutrino oscillations oscillations and decays of K, B, D mesons particle-antiparticle comparisons post-newtonian gravity in the solar system and beyond second- and third-generation particles space-based missions spectroscopy of hydrogen and antihydrogen spin-polarized matter. All experimental results for SME coefficients are tabulated in the Data Tables for Lorentz and CPT Violation. See also Antimatter tests of Lorentz violation Lorentz-violating electrodynamics Lorentz-violating neutrino oscillations Bumblebee Models Tests of special relativity Test theories of special relativity References External links Background information on Lorentz and CPT violation Data Tables for Lorentz and CPT Violation Physics beyond the Standard Model
Standard-Model Extension
[ "Physics" ]
1,666
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
26,860,933
https://en.wikipedia.org/wiki/Eurocode%208%3A%20Design%20of%20structures%20for%20earthquake%20resistance
In the Eurocode series of European standards (EN) related to construction, Eurocode 8: Design of structures for earthquake resistance (abbreviated EN 1998 or, informally, EC 8) describes how to design structures in seismic zones, using the limit state design philosophy. It was approved by the European Committee for Standardization (CEN) on 23 April 2004. Its purpose is to ensure that in the event of earthquakes: human lives are protected; damage is limited; structures important for civil protection remain operational. The random nature of the seismic events and the limited resources available to counter their effects are such as to make the attainment of these goals only partially possible and only measurable in probabilistic terms. The extent of the protection that can be provided to different categories of buildings, which is only measurable in probabilistic terms, is a matter of optimal allocation of resources and is therefore expected to vary from country to country, depending on the relative importance of the seismic risk with respect to risks of other origin and on the global economic resources. Special structures, such as nuclear power plants, offshore structures and large dams, are beyond the scope of EN 1998. EN 1998 contains only those provisions that, in addition to the provisions of the other relevant Eurocodes, must be observed for the design of structures in seismic regions. It complements in this respect the other EN Eurocodes. Eurocode 8 comprises several documents, grouped in six parts numbered from EN 1998-1 to EN 1998-6. Part 1: General rules, seismic actions and rules for buildings EN 1998-1 applies to the design of buildings and civil engineering works in seismic regions. It is subdivided into 10 sections, some of which are specifically devoted to the design of buildings. Section 1 of EN 1998-1 contains the scope, normative references, assumptions, principles and application rules, terms and definitions, symbols and units. Section 2 of EN 1998-1 contains the basic performance requirements and compliance criteria applicable to buildings and civil engineering works in seismic regions. Section 3 of EN 1998-1 gives the rules for the representation of seismic actions and for their combination with other actions. Certain types of structures, dealt with in EN 1998-2 to EN 1998-6, need complementing rules which are given in those Parts. Section 4 of EN 1998-1 contains general design rules relevant specifically to buildings. Sections 5 to 9 of EN 1998-1 contain specific rules for various structural materials and elements, relevant specifically to buildings as follows: Section 5: Specific rules for concrete buildings; Section 6: Specific rules for steel buildings; Section 7: Specific rules for composite steel-concrete buildings; Section 8: Specific rules for timber buildings; Section 9: Specific rules for masonry buildings. Section 10 contains the fundamental requirements and other relevant aspects of design and safety related to base isolation of structures and specifically to base isolation of buildings. Part 2: Bridges EN 1998-2 covers the seismic design of bridges in which the horizontal seismic actions are mainly resisted through bending of the piers or at the abutments; i.e. of bridges composed of vertical or nearly vertical pier systems supporting the traffic deck superstructure. It is also applicable to the seismic design of cable-stayed and arched bridges, although its provisions should not be considered as fully covering these cases. 
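A brief illustration of the seismic-action representation mentioned under Section 3 of Part 1: EN 1998-1 represents the horizontal seismic action through an elastic response spectrum with a four-branch shape. The sketch below reproduces that shape as recalled here, with indicative Type 1, ground type C parameter values; treat the formulas and all numbers as assumptions and consult the standard and the relevant National Annex for the governing definitions:

```python
import numpy as np

def ec8_elastic_spectrum(T, a_g, S, T_B, T_C, T_D, eta=1.0):
    """Horizontal elastic response spectrum Se(T) in the four-branch
    form this sketch attributes to EN 1998-1, 3.2.2.2: a_g is the design
    ground acceleration, S the soil factor, T_B/T_C/T_D the corner
    periods, and eta the damping correction factor."""
    T = np.asarray(T, dtype=float)
    Se = np.empty_like(T)
    rising = T <= T_B
    plateau = (T > T_B) & (T <= T_C)
    falling = (T > T_C) & (T <= T_D)
    tail = T > T_D
    Se[rising] = a_g * S * (1 + T[rising] / T_B * (eta * 2.5 - 1))
    Se[plateau] = a_g * S * eta * 2.5
    Se[falling] = a_g * S * eta * 2.5 * T_C / T[falling]
    Se[tail] = a_g * S * eta * 2.5 * T_C * T_D / T[tail] ** 2
    return Se

# Indicative Type 1 spectrum, ground type C parameters (assumed values):
T = np.linspace(0.01, 4.0, 400)
Se = ec8_elastic_spectrum(T, a_g=0.25 * 9.81, S=1.15, T_B=0.20, T_C=0.60, T_D=2.0)
```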
Part 3: Assessment and retrofitting of buildings The scope of EN 1998-3 is defined as follows: To provide criteria for the evaluation of the seismic performance of existing individual building structures. To describe the approach in selecting necessary corrective measures. To set forth criteria for the design of retrofitting measures (i.e. conception, structural analysis including intervention measures, final dimensioning of structural parts and their connections to existing structural elements). Part 4: Silos, tanks and pipelines EN 1998-4 addresses principles and application rules for the seismic design of the structural aspects of facilities composed of above-ground and buried pipeline systems and of storage tanks of different types and uses, as well as of independent items, such as for example single water towers serving a specific purpose or groups of silos enclosing granular materials. Part 5: Foundations, retaining structures and geotechnical aspects EN 1998-5 establishes the requirements, criteria, and rules for the siting and foundation soil of structures for earthquake resistance. It covers the design of different foundation systems, the design of earth retaining structures and soil-structure interaction under seismic actions. Part 6: Towers, masts and chimneys EN 1998-6 establishes requirements, criteria, and rules for the design of tall slender structures: towers, including bell-towers, intake towers, radio and TV-towers, masts, chimneys (including free-standing industrial chimneys) and lighthouses. External links Eurocodes: Building the Future The European Commission Website on the EN Eurocodes EN 1998: Design of structures for earthquake resistance EN 1998: Design of structures for earthquake resistance - "Eurocodes: Background and applications" workshop Bridge design 8
Eurocode 8: Design of structures for earthquake resistance
[ "Engineering" ]
999
[ "Structural engineering", "Bridge design", "Architecture" ]
26,866,107
https://en.wikipedia.org/wiki/Corona%20ring
In electrical engineering, a corona ring, more correctly referred to as an anti-corona ring, is a toroid of conductive material, usually metal, which is attached to a terminal or other irregular hardware piece of high voltage equipment. The purpose of the corona ring is to distribute the electric field gradient and lower its maximum values below the corona threshold, preventing corona discharge. Corona rings are used on very high voltage power transmission insulators and switchgear, and on scientific research apparatus that generates high voltages. A very similar related device, the grading ring, is used around insulators. Corona discharge Corona discharge is a leakage of electric current into the air adjacent to high voltage conductors. It is sometimes visible as a dim blue glow in the air next to sharp points on high voltage equipment. The high electric field ionizes the air, making it conductive, allowing current to leak from the conductor into the air in the form of ions. In very high voltage electric power transmission lines and equipment, corona results in an economically significant waste of power and may deteriorate the hardware. In devices such as electrostatic generators, Marx generators, and tube-type television sets, the current load caused by corona leakage can reduce the voltage produced by the device, causing it to malfunction. Coronas also produce noxious and corrosive ozone gas, which can cause aging and brittleness of nearby structures such as insulators. The gases create a health hazard for workers and local residents. For these reasons corona discharge is considered undesirable in most electrical equipment. How they work Corona discharges only occur when the electric field (potential gradient) at the surface of conductors exceeds a critical value, the dielectric strength or disruptive potential gradient of air. It is roughly 30 kV/cm at sea level but decreases as atmospheric pressure decreases. Therefore, corona discharge is more of a problem at high altitudes. The electric field at the surface of a conductor is greatest where the curvature is sharpest, so corona discharge occurs first at sharp points, corners and edges. The terminals on very high voltage equipment are frequently designed with large diameter rounded shapes such as balls and toruses, called corona caps, to suppress corona formation. Some parts of high voltage circuits have hardware with exposed sharp edges or corners, such as the attachment points where wires or bus bars are connected to insulators; corona caps and rings are usually installed at these points to prevent corona formation. The corona ring is electrically connected to the high voltage conductor, encircling the points where corona would form. Since the ring is at the same potential as the conductor, the presence of the ring reduces the potential gradient at the surface of the conductor below the disruptive potential gradient, preventing corona from forming on the metal points. 
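The curvature dependence can be made quantitative with the textbook idealization of an isolated conducting sphere, whose surface field is E = V/r. This is only a rough stand-in for real terminal hardware, and the 400 kV figure below is an arbitrary example:

```python
# Surface field of an isolated conducting sphere held at voltage V:
# V = q / (4*pi*eps0*r) and E = q / (4*pi*eps0*r**2), so E = V / r.
# Compare against the ~30 kV/cm breakdown strength of air at sea level.
BREAKDOWN_KV_PER_CM = 30.0

def sphere_surface_field(voltage_kv, radius_cm):
    """Peak surface field (kV/cm) of an isolated sphere of the given
    radius, a crude model of a rounded terminal or ring tube."""
    return voltage_kv / radius_cm

for radius_cm in (1.0, 5.0, 15.0):   # sharp point vs. larger rounded shapes
    field = sphere_surface_field(400.0, radius_cm)
    status = "corona likely" if field > BREAKDOWN_KV_PER_CM else "below threshold"
    print(f"r = {radius_cm:5.1f} cm: E = {field:7.1f} kV/cm ({status})")
```

Doubling the radius of curvature halves the peak field in this idealization, which is why large-diameter rings and caps suppress corona.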
Grading rings A very similar related device, called a grading ring, is also used on high-voltage equipment. Grading rings are similar to corona rings, but they encircle insulators rather than conductors. Although they may also serve to suppress corona, their main purpose is to reduce the potential gradient along the insulator, preventing premature electrical breakdown. The potential gradient (electric field) across an insulator is not uniform but is highest at the end next to the high voltage electrode. If subjected to a high enough voltage, the insulator will break down and become conductive at that end first. Once a section of the insulator at the end has electrically broken down and become conductive, the full voltage is applied across the remaining length, so the breakdown will quickly progress from the high voltage end to the other, and a flashover arc will start. Therefore, insulators can stand significantly higher voltages if the potential gradient at the high voltage end is reduced. The grading ring surrounds the end of the insulator next to the high voltage conductor. It reduces the gradient at the end, resulting in a more even voltage gradient along the insulator, allowing a shorter, cheaper insulator to be used for a given voltage. Grading rings also reduce aging and deterioration of the insulator that can occur at the high voltage end due to the high electric field there. In very high voltage apparatus like Marx generators and particle accelerator tubes, insulating columns often have many metal grading rings spaced evenly along their length. These are linked by a voltage divider chain of high-value resistors so there is an equal voltage drop from each ring to the next. This divides the potential difference evenly along the length of the column so there are no high field spots, resulting in the least stress on the insulators. Uses Corona rings are used on extremely high voltage apparatus like Van de Graaff generators, Cockcroft–Walton generators, and particle accelerators, as well as electric power transmission insulators, bushings, and switchgear. Manufacturers suggest a corona ring on the line end of the insulator for transmission lines above 230 kV and on both ends for potentials above 500 kV. Corona rings prolong the lifetime of insulator surfaces by suppressing the effects of corona discharge. Corona rings may also be installed on the insulators of antennas of high-power radio transmitters. However, they increase the capacitance of the insulators. See also Arcing horns References External links Highv Corona Ring What is Grading Ring Differences Between Corona Rings And Grading Rings Electrical breakdown Dielectrics Electric power systems components
Corona ring
[ "Physics" ]
1,094
[ "Physical phenomena", "Materials", "Electrical phenomena", "Electrical breakdown", "Dielectrics", "Matter" ]
26,867,920
https://en.wikipedia.org/wiki/Shim%20%28magnetism%29
A shim is a device used to adjust the homogeneity of a magnetic field. Shims received their name from the purely mechanical shims used to adjust the position and parallelism of the pole faces of an electromagnet. Coils used to adjust the homogeneity of a magnetic field by changing the current flowing through them were called "electrical current shims" because of their similar function. Usage in magnetic resonance spectroscopy In NMR and MRI, shimming is used prior to the operation of the magnet to eliminate inhomogeneities in its field. Initially, the magnetic field inside an NMR spectrometer or MRI scanner will be far from homogeneous compared with an "ideal" field of the device. This is a result of production tolerances and of the magnetic field of the environment. Iron constructions in the walls and floor of the examination room become magnetized and disturb the field of the scanner. The probe and the sample or the patient become slightly magnetized when brought into the strong magnetic field and create additional inhomogeneous fields. The process of correcting for these inhomogeneities is called shimming the magnet, shimming the probe or shimming the sample, depending on the assumed source of the remaining inhomogeneity. Field homogeneity of the order of 1 ppm over a volume of several liters is needed in an MRI scanner. High-resolution NMR spectroscopy demands field homogeneity better than 1 ppb within a volume of a few milliliters. There are two types of shimming: active and passive. Active shimming uses coils with adjustable current. Passive shimming involves pieces of steel with good magnetic qualities. The steel pieces are placed near the permanent or superconducting magnet. They become magnetized and produce their own magnetic field. In both cases, the additional magnetic fields (produced by coils or steel) add to the overall magnetic field of the superconducting magnet in such a way as to increase the homogeneity of the total field. There are different ways to define the inhomogeneity of a magnetic field in the center of the MR spectrometer. Currently, for medical MR scanners, the industry standard is to measure volume root mean square (VRMS) values of the field for different (mostly concentric) volumes in the middle of the scanner. References Further reading Gerald A. Pearson, Shimming an NMR Magnet http://web.mit.edu/8.13/www/pdf_files/shimming.pdf Magnetic devices Electromagnetism Nuclear magnetic resonance
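Active shimming is, at its core, a least-squares problem: given the measured field deviation at sample points and the field each shim coil produces per unit current at those points, one solves for the coil currents that best cancel the deviation. The Python sketch below illustrates this with a randomly generated sensitivity matrix; the matrix, the deviation data and the coil count are all stand-in assumptions, not properties of any real magnet.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: field deviation (T) sampled at m points, and a
    # sensitivity map S whose column j is the field produced at those
    # points by unit current in shim coil j (e.g. X, Y, Z, Z2 coils).
    m_points, n_coils = 200, 4
    S = rng.normal(size=(m_points, n_coils)) * 1e-6
    delta_b = rng.normal(size=m_points) * 5e-6

    # Least-squares shim currents: minimize || delta_b + S @ currents ||
    currents, *_ = np.linalg.lstsq(S, -delta_b, rcond=None)
    residual = delta_b + S @ currents

    print("rms inhomogeneity before:", np.sqrt(np.mean(delta_b ** 2)))
    print("rms inhomogeneity after :", np.sqrt(np.mean(residual ** 2)))

The same residual, referred to the nominal field and expressed per million, corresponds to the VRMS figure quoted for scanners.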
Shim (magnetism)
[ "Physics", "Chemistry" ]
532
[ "Electromagnetism", "Physical phenomena", "Nuclear magnetic resonance", "Fundamental interactions", "Nuclear physics" ]
20,995,293
https://en.wikipedia.org/wiki/Bicine
Bicine is an organic compound used as a buffering agent. It is one of Good's buffers and has a pKa of 8.35 at 20 °C. It is prepared by the reaction of glycine with ethylene oxide, followed by hydrolysis of the resultant lactone. Bicine is a contaminant in amine systems used for gas sweetening. It is formed by amine degradation in the presence of O2, SO2, H2S or thiosulfate. See also Tricine References Buffer solutions Hydroxy acids Zwitterions Acetic acids Diols
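Since bicine is used as a buffer, the ratio of its basic to acidic form at a given pH follows directly from the Henderson-Hasselbalch equation. A minimal Python illustration using the quoted pKa of 8.35:

    def base_acid_ratio(ph, pka=8.35):
        # Henderson-Hasselbalch: pH = pKa + log10([base]/[acid]),
        # so [base]/[acid] = 10 ** (pH - pKa).
        return 10 ** (ph - pka)

    for ph in (7.35, 8.35, 9.35):
        print(f"pH {ph}: [base]/[acid] = {base_acid_ratio(ph):.2f}")

At pH = pKa the two forms are present in equal amounts; one pH unit away, the ratio shifts tenfold.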
Bicine
[ "Physics", "Chemistry" ]
126
[ "Buffer solutions", "Ions", "Zwitterions", "Matter" ]
20,995,816
https://en.wikipedia.org/wiki/%CE%91-Methylstyrene
α-Methylstyrene (AMS) is an organic compound with the formula C6H5C(CH3)=CH2. It is a colorless oil. Synthesis and reactions AMS is formed as a by-product of the cumene process. In this procedure, cumene is converted to its radical through a reaction with oxygen. Normally these cumene radicals are converted to cumene hydroperoxide, but they can also undergo radical disproportionation to form AMS. Although this is only a minor side reaction, the cumene process is run at such a large scale that the recovery of AMS is commercially viable and satisfies much of the global demand. AMS can also be produced by dehydrogenation of cumene. The homopolymer obtained from this monomer, poly(α-methylstyrene), is unstable, being characterized by a low ceiling temperature of 65 °C. Side effects in humans The American Conference of Governmental Industrial Hygienists (2009) defined occupational exposure limits of 10 ppm for airborne concentrations of α-methylstyrene, based on allergic reactions and effects on the central nervous system. References Isopropenyl compounds Benzene derivatives Monomers IARC Group 2B carcinogens
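The quoted ceiling temperature can be rationalized with the standard expression Tc = ΔHp / (ΔSp + R ln[M]), where ΔHp and ΔSp are the enthalpy and entropy of propagation and [M] is the monomer concentration. The Python sketch below uses assumed thermodynamic values of roughly the magnitude reported for α-methylstyrene; they are illustrative, not taken from the text above.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def ceiling_temperature(dh_p, ds_p, monomer_conc=1.0):
        # Tc = dH / (dS + R ln[M]); above Tc, depropagation outpaces
        # propagation and the polymer unzips back to monomer.
        return dh_p / (ds_p + R * math.log(monomer_conc))

    dh_p = -35e3    # J/mol, assumed propagation enthalpy
    ds_p = -104.0   # J/(mol*K), assumed propagation entropy
    print(f"Tc ≈ {ceiling_temperature(dh_p, ds_p) - 273.15:.0f} °C at [M] = 1 M")

With these assumed values the estimate lands near the quoted 65 °C.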
Α-Methylstyrene
[ "Chemistry", "Materials_science" ]
266
[ "Isopropenyl compounds", "Monomers", "Functional groups", "Polymer chemistry" ]
20,998,975
https://en.wikipedia.org/wiki/Channel-stopper
In semiconductor device fabrication, a channel-stopper or channel-stop is a region in a semiconductor device, produced by implantation or diffusion of ions, by growing or patterning the silicon oxide, or by other isolation methods, whose primary function is to limit the spread of the channel area or to prevent the formation of parasitic channels (inversion layers). References Semiconductor device fabrication
Channel-stopper
[ "Materials_science" ]
75
[ "Semiconductor device fabrication", "Microtechnology" ]
3,619,345
https://en.wikipedia.org/wiki/Crystal%20polymorphism
In crystallography, polymorphism is the phenomenon where a compound or element can crystallize into more than one crystal structure. The preceding definition has evolved over many years and is still under discussion today. Discussion of the defining characteristics of polymorphism involves distinguishing among types of transitions and structural changes occurring in polymorphism versus those in other phenomena. Overview Phase transitions (phase changes) that help describe polymorphism include polymorphic transitions as well as melting and vaporization transitions. According to IUPAC, a polymorphic transition is "A reversible transition of a solid crystalline phase at a certain temperature and pressure (the inversion point) to another phase of the same chemical composition with a different crystal structure." Additionally, Walter McCrone described the phases in polymorphic matter as "different in crystal structure but identical in the liquid or vapor states." McCrone also defines a polymorph as “a crystalline phase of a given compound resulting from the possibility of at least two different arrangements of the molecules of that compound in the solid state.” These defining facts imply that polymorphism involves changes in physical properties but cannot include chemical change. Some early definitions do not make this distinction. Eliminating chemical change from those changes permissible during a polymorphic transition delineates polymorphism. For example, isomerization can often lead to polymorphic transitions. However, tautomerism (dynamic isomerization) leads to chemical change, not polymorphism. Likewise, allotropy of elements and polymorphism have been linked historically. However, allotropes of an element are not always polymorphs. A common example is the allotropes of carbon, which include graphite, diamond, and lonsdaleite. While all three forms are allotropes, graphite is not a polymorph of diamond and lonsdaleite. Isomerization and allotropy are only two of the phenomena linked to polymorphism. For additional information about identifying polymorphism and distinguishing it from other phenomena, see the review by Brog et al. It is also useful to note that materials with two polymorphic phases can be called dimorphic, those with three polymorphic phases, trimorphic, etc. Polymorphism is of practical relevance to pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives. Detection Experimental methods Early records of the discovery of polymorphism credit Eilhard Mitscherlich and Jöns Jacob Berzelius for their studies of phosphates and arsenates in the early 1800s. The studies involved measuring the interfacial angles of the crystals to show that chemically identical salts could have two different forms. Mitscherlich originally called this discovery isomorphism. The measurement of crystal density was also used by Wilhelm Ostwald and expressed in Ostwald's Ratio. The development of the microscope enhanced observations of polymorphism and aided Moritz Ludwig Frankenheim’s studies in the 1830s. He was able to demonstrate methods to induce crystal phase changes and formally summarized his findings on the nature of polymorphism. Soon after, the more sophisticated polarized light microscope came into use, and it provided better visualization of crystalline phases allowing crystallographers to distinguish between different polymorphs. The hot stage was invented and fitted to a polarized light microscope by Otto Lehmann in about 1877.
This invention helped crystallographers determine melting points and observe polymorphic transitions. While the use of hot stage microscopes continued throughout the 1900s, thermal methods also became commonly used to observe the heat flow that occurs during phase changes such as melting and polymorphic transitions. One such technique, differential scanning calorimetry (DSC), continues to be used for determining the enthalpy of polymorphic transitions. In the 20th century, X-ray crystallography became commonly used for studying the crystal structure of polymorphs. Both single crystal x-ray diffraction and powder x-ray diffraction techniques are used to obtain measurements of the crystal unit cell. Each polymorph of a compound has a unique crystal structure. As a result, different polymorphs will produce different x-ray diffraction patterns. Vibrational spectroscopic methods came into use for investigating polymorphism in the second half of the twentieth century and have become more commonly used as optical, computer, and semiconductor technologies improved. These techniques include infrared (IR) spectroscopy, terahertz spectroscopy and Raman spectroscopy. Mid-frequency IR and Raman spectroscopies are sensitive to changes in hydrogen bonding patterns. Such changes can subsequently be related to structural differences. Additionally, terahertz and low frequency Raman spectroscopies reveal vibrational modes resulting from intermolecular interactions in crystalline solids. Again, these vibrational modes are related to crystal structure and can be used to uncover differences in 3-dimensional structure among polymorphs. Computational methods Computational chemistry may be used in combination with vibrational spectroscopy techniques to understand the origins of vibrations within crystals. The combination of techniques provides detailed information about crystal structures, similar to what can be achieved with x-ray crystallography. In addition to using computational methods for enhancing the understanding of spectroscopic data, the latest development in identifying polymorphism in crystals is the field of crystal structure prediction. This technique uses computational chemistry to model the formation of crystals and predict the existence of specific polymorphs of a compound before they have been observed experimentally by scientists. Examples Many compounds exhibit polymorphism. It has been claimed that "every compound has different polymorphic forms, and that, in general, the number of forms known for a given compound is proportional to the time and money spent in research on that compound." Organic compounds Benzamide The phenomenon was discovered in 1832 by Friedrich Wöhler and Justus von Liebig. They observed that the silky needles of freshly crystallized benzamide slowly converted to rhombic crystals. Present-day analysis identifies three polymorphs for benzamide: the least stable one, formed by flash cooling, is the orthorhombic form II. This type is followed by the monoclinic form III (observed by Wöhler and Liebig). The most stable form is monoclinic form I. The hydrogen bonding mechanisms are the same for all three phases; however, they differ strongly in their pi-pi interactions. Maleic acid In 2006 a new polymorph of maleic acid was discovered, 124 years after the first crystal form was studied. Maleic acid is manufactured on an industrial scale in the chemical industry. It forms salts found in medicines.
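As a brief numerical aside on the x-ray identification method described above: the reason different polymorphs give different powder patterns is Bragg's law, since a different unit cell means different d-spacings and hence different diffraction angles. A minimal Python illustration with Cu Kα radiation and two hypothetical d-spacings for the same reflection in two forms:

    import math

    def two_theta_deg(d_angstrom, wavelength_angstrom=1.5406):
        # Bragg's law: n*lambda = 2*d*sin(theta); returns 2-theta in
        # degrees for first order (n = 1), Cu K-alpha by default.
        return 2.0 * math.degrees(
            math.asin(wavelength_angstrom / (2.0 * d_angstrom)))

    for label, d in (("form I", 5.60), ("form II", 5.25)):
        print(f"{label}: d = {d:.2f} Å -> 2θ = {two_theta_deg(d):.2f}°")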
The new crystal type is produced when a co-crystal of caffeine and maleic acid (2:1) is dissolved in chloroform and when the solvent is allowed to evaporate slowly. Whereas form I has monoclinic space group P21/c, the new form has space group Pc. Both polymorphs consist of sheets of molecules connected through hydrogen bonding of the carboxylic acid groups: in form I, the sheets alternate with respect to the net dipole moment, while in form II, the sheets are oriented in the same direction. 1,3,5-Trinitrobenzene After 125 years of study, 1,3,5-trinitrobenzene yielded a second polymorph. The usual form has the space group Pbca, but in 2004, a second polymorph was obtained in the space group Pca21 when the compound was crystallised in the presence of an additive, trisindane. This experiment shows that additives can induce the appearance of polymorphic forms. Other organic compounds Acridine has been obtained as eight polymorphs and aripiprazole has nine. The record for the largest number of well-characterised polymorphs is held by a compound known as ROY. Glycine crystallizes as both monoclinic and hexagonal crystals. Polymorphism in organic compounds is often the result of conformational polymorphism. Inorganic matter Elements Elements including metals may exhibit polymorphism. Allotropy is the term used when describing elements having different forms and is used commonly in the field of metallurgy. Some (but not all) allotropes are also polymorphs. For example, iron has three allotropes that are also polymorphs. Alpha-iron, which exists at room temperature, has a bcc form. Above 910 °C gamma-iron exists, which has an fcc form. Above 1390 °C delta-iron exists with a bcc form. Another metallic example is tin, which has two allotropes that are also polymorphs. At room temperature, beta-tin exists as a white tetragonal form. When cooled below 13.2 °C, alpha-tin forms, which is gray in color and has a cubic diamond form. A classic example of a nonmetal that exhibits polymorphism is carbon. Carbon has many allotropes, including graphite, diamond, and lonsdaleite. However, these are not all polymorphs of each other. Graphite is not a polymorph of diamond and lonsdaleite, since it is chemically distinct, having sp2 hybridized bonding. Diamond and lonsdaleite are chemically identical, both having sp3 hybridized bonding, and they differ only in their crystal structures, making them polymorphs. Additionally, graphite has two polymorphs, a hexagonal (alpha) form and a rhombohedral (beta) form. Binary metal oxides Polymorphism in binary metal oxides has attracted much attention because these materials are of significant economic value. One set of famous examples have the composition SiO2, which form many polymorphs. Important ones include: α-quartz, β-quartz, tridymite, cristobalite, moganite, coesite, and stishovite. Other inorganic compounds A classical example of polymorphism is the pair of minerals calcite, which is rhombohedral, and aragonite, which is orthorhombic. Both are forms of calcium carbonate. A third form of calcium carbonate is vaterite, which is hexagonal and relatively unstable. β-HgS precipitates as a black solid when Hg(II) salts are treated with H2S. With gentle heating of the slurry, the black polymorph converts to the red form. Factors affecting polymorphism According to Ostwald's rule, usually less stable polymorphs crystallize before the stable form.
The concept hinges on the idea that unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. The founding example of fibrous versus rhombic benzamide illustrates this. Another example is provided by two polymorphs of titanium dioxide. Nevertheless, there are known systems, such as metacetamol, where only a narrow range of cooling rates favors obtaining the metastable form II. Polymorphs have disparate stabilities. Some convert rapidly at room (or any) temperature. Most polymorphs of organic molecules only differ by a few kJ/mol in lattice energy. Approximately 50% of known polymorph pairs differ by less than 2 kJ/mol and stability differences of more than 10 kJ/mol are rare. Polymorph stability may change with temperature or pressure. Importantly, structural and thermodynamic stability are different. Thermodynamic stability may be studied using experimental or computational methods. Polymorphism is affected by the details of crystallisation. The solvent affects the nature of the polymorph in all respects, including its concentration and the other components of the solvent, i.e., species that inhibit or promote certain growth patterns. A decisive factor is often the temperature of the solvent from which crystallisation is carried out. Metastable polymorphs are not always reproducibly obtained, leading to cases of "disappearing polymorphs", usually with negative legal and business implications. In pharmaceuticals Legal aspects Drugs receive regulatory approval and are granted patents for only a single polymorph. In a classic patent dispute, GlaxoSmithKline defended its patent for the Type II polymorph of the active ingredient in Zantac against competitors while the patent for the Type I polymorph had already expired. Polymorphism in drugs can also have direct medical implications since dissolution rates depend on the polymorph. Polymorphic purity of drug samples can be checked using techniques such as powder X-ray diffraction, IR/Raman spectroscopy, and, in some cases, by utilizing the differences in their optical properties. Case studies The known cases up to 2015 are discussed in a review article by Bučar, Lancaster, and Bernstein. Dibenzoxazepines Multidisciplinary studies involving experimental and computational approaches were applied to pharmaceutical molecules to facilitate the comparison of their solid-state structures. Specifically, these studies focused on exploring how changes in molecular structure affect the molecular conformation, packing motifs, interactions in the resultant crystal lattices and the extent of solid-state diversity of these compounds. The results highlight the value of crystal structure prediction studies and PIXEL calculations in interpreting the observed solid-state behaviour, in quantifying the intermolecular interactions in the packed structures, and in identifying the key stabilising interactions. An experimental screen yielded 4 physical forms for clozapine as compared to 60 distinct physical forms for olanzapine. The experimental screening results of clozapine are consistent with its crystal energy landscape, which confirms that no alternate packing arrangement is thermodynamically competitive with the experimentally obtained structure. In the case of olanzapine, the crystal energy landscape highlights that the extensive experimental screening has probably not found all possible polymorphs of olanzapine, and further solid form diversity could be targeted with a better understanding of the role of kinetics in its crystallisation.
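As a numerical aside on the stability differences quoted under "Factors affecting polymorphism": a lattice-energy difference of a few kJ/mol translates into an equilibrium population ratio through the Boltzmann factor. A short Python illustration, treating the free-energy difference as temperature-independent (a simplifying assumption):

    import math

    R = 8.314e-3  # gas constant in kJ/(mol*K)

    def equilibrium_ratio(delta_g_kj_mol, temperature_k=298.0):
        # Boltzmann ratio of a metastable polymorph to the stable one
        # at equilibrium: exp(-dG / (R*T)).
        return math.exp(-delta_g_kj_mol / (R * temperature_k))

    for dg in (2.0, 10.0):
        print(f"ΔG = {dg:4.1f} kJ/mol -> "
              f"metastable/stable ≈ {equilibrium_ratio(dg):.3f}")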
CSP studies were able to offer an explanation for the absence of the centrosymmetric dimer in anhydrous clozapine. PIXEL calculations on all the crystal structures of clozapine revealed that, as in olanzapine, the intermolecular interaction energy in each structure is dominated by the dispersion contribution (Ed). Despite the molecular structure similarity between amoxapine and loxapine (molecules in group 2), the crystal packing observed in the polymorphs of loxapine differs significantly from that of amoxapine. A combined experimental and computational study demonstrated that the methyl group in loxapine has a significant influence in increasing the range of accessible solid forms and favouring various alternate packing arrangements. CSP studies have again helped in explaining the observed solid-state diversity of loxapine and amoxapine. PIXEL calculations showed that in the absence of strong H-bonds, weak H-bonds such as C–H...O, C–H...N and dispersion interactions play a key role in stabilising the crystal lattice of both molecules. The efficient crystal packing of amoxapine seems to contribute to its monomorphic behaviour, compared with the less efficient packing of loxapine molecules in both of its polymorphs. The combination of experimental and computational approaches has provided a deeper understanding of the factors influencing the solid-state structure and diversity in these compounds. Hirshfeld surfaces, computed using Crystal Explorer, represent another way of exploring packing modes and intermolecular interactions in molecular crystals. The influence of changes in the small substituents on shape and electron distribution can also be investigated by mapping the electrostatic potential onto the total electron density for molecules in the gas phase. This allows straightforward visualisation and comparison of overall shape and of electron-rich and electron-deficient regions within molecules. The shape of these molecules can be further investigated to study its influence on their solid-state diversity. Posaconazole The original formulations of posaconazole, marketed as Noxafil, utilised form I of posaconazole. Further polymorphs of posaconazole were then discovered in rapid succession, prompting much crystallographic research on the compound. A methanol solvate and a 1,4-dioxane co-crystal were added to the Cambridge Structural Database (CSD). Ritonavir The antiviral drug ritonavir exists as two polymorphs, which differ greatly in efficacy. Such issues were solved by reformulating the medicine into gelcaps and tablets, rather than the original capsules. Aspirin For a long time only one polymorph of aspirin, Form I, was proven, though the existence of another polymorph had been debated since the 1960s, and one report from 1981 noted that when aspirin is crystallized in the presence of aspirin anhydride, its diffractogram shows weak additional peaks. Though at the time this was dismissed as a mere impurity, it was, in retrospect, Form II aspirin. Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures.
The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes. Pure Form II aspirin could be prepared by seeding the batch with aspirin anhydride at 15% by weight. Paracetamol Paracetamol powder has poor compression properties, which poses difficulty in making tablets. A second polymorph was found with more suitable compressive properties. Cortisone acetate Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form. Carbamazepine Carbamazepine, estrogen, paroxetine, and chloramphenicol also show polymorphism. Pyrazinamide Pyrazinamide has at least 4 polymorphs. All of them transform to the stable α form at room temperature upon storage or mechanical treatment. Recent studies show that the α form is thermodynamically stable at room temperature. Polytypism Polytypes are a special case of polymorphs, where multiple close-packed crystal structures differ in one dimension only. Polytypes have identical close-packed planes, but differ in the stacking sequence in the third dimension perpendicular to these planes. Silicon carbide (SiC) has more than 170 known polytypes, although most are rare. All the polytypes of SiC have virtually the same density and Gibbs free energy. The most common SiC polytypes are shown in Table 1. Table 1: Some polytypes of SiC. A second group of materials with different polytypes are the transition metal dichalcogenides, layered materials such as molybdenum disulfide (MoS2). For these materials the polytypes have more distinct effects on material properties, e.g. for MoS2, the 1T polytype is metallic in character, while the 2H form is more semiconducting. Another example is tantalum disulfide, where the common 1T as well as 2H polytypes occur, but also more complex 'mixed coordination' types such as 4Hb and 6R, where the trigonal prismatic and the octahedral geometry layers are mixed. Here, the 1T polytype exhibits a charge density wave, with distinct influence on the conductivity as a function of temperature, while the 2H polytype exhibits superconductivity. ZnS and CdI2 are also polytypical. It has been suggested that this type of polymorphism is due to kinetics where screw dislocations rapidly reproduce partly disordered sequences in a periodic fashion. Theory In terms of thermodynamics, two types of polymorphic behaviour are recognized. For a monotropic system, plots of the free energies of the various polymorphs against temperature do not cross before all polymorphs melt. As a result, any transition from one polymorph to another below the melting point will be irreversible. For an enantiotropic system, a plot of the free energy against temperature shows a crossing point before the various melting points. It may also be possible to convert interchangeably between the two polymorphs by heating or cooling, or through physical contact with a lower energy polymorph. A simple model of polymorphism is to model the Gibbs free energy of a ball-shaped crystal of radius r as G(r) = a r^2 − b r^3. Here, the first term is the surface energy, and the second term is the volume energy. Both parameters a, b > 0. The function rises to a maximum at r* = 2a/(3b) before dropping, crossing zero at r = a/b. In order to crystallize, a ball of crystal must overcome the energetic barrier G(r*) = 4a^3/(27b^2) to reach the downhill part of the energy landscape. Now, suppose there are two kinds of crystals, with different energies G1(r) = a1 r^2 − b1 r^3 and G2(r) = a2 r^2 − b2 r^3, and if they have the same shape as in Figure 2, then the two curves intersect at some radius r_c. Then the system has three regimes, depending on the radius:
For r below the critical radius, crystals tend to dissolve; this is the amorphous phase. For r between the critical radius and r_c, crystals tend to grow as form 1. For r greater than r_c, crystals tend to grow as form 2. If the crystal is grown slowly, it could be kinetically stuck in form 1. See also Allotropy Isomorphism (crystallography) Dimorphism (Wiktionary) Polyamorphism References External links "Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website "SiC and Polytpism" Mineralogy Gemology Crystallography
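The two-form model can be made concrete with a few lines of Python. The coefficients below are arbitrary illustrative values chosen so that form 1 has the lower nucleation barrier while form 2 has the lower free energy at large radius, the situation Ostwald's rule describes:

    def free_energy(r, a, b):
        # Toy free energy of a spherical crystallite: G = a*r^2 - b*r^3.
        return a * r ** 2 - b * r ** 3

    def critical_radius(a, b):
        # Barrier position: dG/dr = 0 at r* = 2a / (3b).
        return 2.0 * a / (3.0 * b)

    def barrier_height(a, b):
        # Barrier height: G(r*) = 4a^3 / (27 b^2).
        return 4.0 * a ** 3 / (27.0 * b ** 2)

    a1, b1 = 1.0, 1.0   # form 1: lower barrier, nucleates first
    a2, b2 = 1.5, 1.2   # form 2: higher barrier, lower G at large r
    for name, a, b in (("form 1", a1, b1), ("form 2", a2, b2)):
        print(f"{name}: r* = {critical_radius(a, b):.3f}, "
              f"barrier = {barrier_height(a, b):.3f}")

In this toy landscape the kinetically favoured form 1 appears first even though form 2 is ultimately more stable, which is the essence of the rule.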
Crystal polymorphism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,570
[ "Crystallography", "Polymorphism (materials science)", "Condensed matter physics", "Materials science" ]
3,620,291
https://en.wikipedia.org/wiki/Sfermion
In supersymmetric extensions of the Standard Model (SM) of physics, a sfermion is a hypothetical spin-0 superpartner particle (sparticle) of its associated fermion. Each particle has a superpartner whose spin differs by 1/2. Fermions in the SM have spin 1/2 and, therefore, sfermions have spin 0. The name 'sfermion' was formed by the general rule of prefixing an 's' to the name of the corresponding fermion, denoting that it is a scalar particle with spin 0. For instance, the electron's superpartner is the selectron and the top quark's superpartner is the stop squark. One corollary of supersymmetry is that sparticles have the same gauge numbers as their SM partners. This means that sparticle–particle pairs have the same color charge, weak isospin charge, and hypercharge (and consequently electric charge). Unbroken supersymmetry also implies that sparticle–particle pairs have the same mass. This is evidently not the case, since these sparticles would have already been detected. Thus, sparticles must have different masses from their particle partners and supersymmetry is said to be broken. Fundamental sfermions Squarks Squarks (also quarkinos) are the superpartners of quarks. These include the sup squark, sdown squark, scharm squark, sstrange squark, stop squark, and sbottom squark. Sleptons Sleptons are the superpartners of leptons. These include the selectron, smuon, stau, and their corresponding sneutrino flavors. See also Minimal Supersymmetric Standard Model (MSSM) References Supersymmetric quantum field theory Hypothetical elementary particles Bosons Subatomic particles with spin 0
Sfermion
[ "Physics" ]
411
[ "Symmetry", "Matter", "Supersymmetric quantum field theory", "Unsolved problems in physics", "Bosons", "Subatomic particles", "Hypothetical elementary particles", "Supersymmetry", "Physics beyond the Standard Model" ]
3,624,741
https://en.wikipedia.org/wiki/Two-dimensional%20electron%20gas
A two-dimensional electron gas (2DEG) is a scientific model in solid-state physics. It is an electron gas that is free to move in two dimensions, but tightly confined in the third. This tight confinement leads to quantized energy levels for motion in the third direction, which can then be ignored for most problems. Thus the electrons appear to be a 2D sheet embedded in a 3D world. The analogous construct of holes is called a two-dimensional hole gas (2DHG), and such systems have many useful and interesting properties. Realizations Most 2DEGs are found in transistor-like structures made from semiconductors. The most commonly encountered 2DEG is the layer of electrons found in MOSFETs (metal–oxide–semiconductor field-effect transistors). When the transistor is in inversion mode, the electrons underneath the gate oxide are confined to the semiconductor-oxide interface, and thus occupy well defined energy levels. For thin-enough potential wells and temperatures not too high, only the lowest level is occupied, and so the motion of the electrons perpendicular to the interface can be ignored. However, the electron is free to move parallel to the interface, and so is quasi-two-dimensional. Other methods for engineering 2DEGs are high-electron-mobility transistors (HEMTs) and rectangular quantum wells. HEMTs are field-effect transistors that utilize the heterojunction between two semiconducting materials to confine electrons to a triangular quantum well. Electrons confined to the heterojunction of HEMTs exhibit higher mobilities than those in MOSFETs, since the former device utilizes an intentionally undoped channel, thereby mitigating the deleterious effect of ionized impurity scattering. Two closely spaced heterojunction interfaces may be used to confine electrons to a rectangular quantum well. Careful choice of the materials and alloy compositions allows control of the carrier densities within the 2DEG. Electrons may also be confined to the surface of a material. For example, free electrons will float on the surface of liquid helium, and are free to move along the surface, but stick to the helium; some of the earliest work in 2DEGs was done using this system. Besides liquid helium, there are also solid insulators (such as topological insulators) that support conductive surface electronic states. Recently, atomically thin solid materials have been developed (graphene, as well as metal dichalcogenides such as molybdenum disulfide) where the electrons are confined to an extreme degree. The two-dimensional electron system in graphene can be tuned to either a 2DEG or 2DHG (2-D hole gas) by gating or chemical doping. This has been a topic of current research due to the versatile (some existing but mostly envisaged) applications of graphene. A separate class of heterostructures that can host 2DEGs are oxides. Although both sides of the heterostructure are insulators, the 2DEG at the interface may arise even without doping (which is the usual approach in semiconductors). A typical example is a ZnO/ZnMgO heterostructure. More examples can be found in a recent review, including a notable 2004 discovery: a 2DEG at the LaAlO3/SrTiO3 interface, which becomes superconducting at low temperatures. The origin of this 2DEG is still unknown, but it may be similar to modulation doping in semiconductors, with electric-field-induced oxygen vacancies acting as the dopants. Experiments Considerable research involving 2DEGs and 2DHGs has been done, and much continues to this day.
2DEGs offer a mature system of extremely high mobility electrons, especially at low temperatures. When cooled to 4 K, 2DEGs may have mobilities of the order of 1,000,000 cm2/(V·s), and lower temperatures can lead to further increases still. Specially grown, state of the art heterostructures with mobilities around 30,000,000 cm2/(V·s) have been made. These enormous mobilities offer a test bed for exploring fundamental physics, since besides confinement and effective mass, the electrons do not interact with the semiconductor very often, sometimes traveling several micrometers before colliding; this so-called mean free path can be estimated in the parabolic band approximation as l = μħ√(2πn)/e, where μ is the mobility and n is the electron density in the 2DEG. Note that μ typically depends on n. Mobilities of 2DHG systems are smaller than those of most 2DEG systems, in part due to the larger effective masses of holes (a few 1,000 cm2/(V·s) can already be considered high mobility). Aside from being in practically every semiconductor device in use today, two dimensional systems allow access to interesting physics. The quantum Hall effect was first observed in a 2DEG, which led to two Nobel Prizes in physics, of Klaus von Klitzing in 1985, and of Robert B. Laughlin, Horst L. Störmer and Daniel C. Tsui in 1998. The spectrum of a laterally modulated 2DEG (a two-dimensional superlattice) subject to a magnetic field B can be represented as Hofstadter's butterfly, a fractal structure in the energy-versus-B plot, signatures of which were observed in transport experiments. Many more interesting phenomena pertaining to 2DEGs have been studied.[A] See also Two-dimensional gas Footnotes A. Examples of more 2DEG physics. Full control of the 2DEG spin polarization was demonstrated. Possibly, this could be relevant to quantum information technology. Wigner crystallization in a magnetic field. Microwave-induced magnetoresistance oscillations discovered by R. G. Mani et al. Possible existence of non-abelian quasiparticles in the fractional quantum Hall effect at filling factor 5/2. Further reading References Transistors Quantum electronics Mesoscopic physics MOSFETs Surfaces
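The mean-free-path estimate above is easy to evaluate numerically. A Python sketch, with mobility and sheet density chosen as illustrative values typical of high-quality GaAs/AlGaAs heterostructures:

    import math

    HBAR = 1.054571817e-34       # reduced Planck constant, J*s
    E_CHARGE = 1.602176634e-19   # elementary charge, C

    def mean_free_path_m(mobility_cm2_per_vs, density_per_cm2):
        # l = mu * hbar * sqrt(2*pi*n) / e, parabolic-band approximation.
        mu = mobility_cm2_per_vs * 1e-4   # cm^2/(V*s) -> m^2/(V*s)
        n = density_per_cm2 * 1e4         # cm^-2 -> m^-2
        return mu * HBAR * math.sqrt(2.0 * math.pi * n) / E_CHARGE

    # Assumed values: state-of-the-art mobility, typical sheet density.
    l = mean_free_path_m(3.0e7, 3.0e11)
    print(f"mean free path ≈ {l * 1e3:.2f} mm")

At a more ordinary mobility of 1,000,000 cm2/(V·s) the same formula gives several micrometers, consistent with the ballistic travel distances quoted above.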
Two-dimensional electron gas
[ "Physics", "Materials_science" ]
1,229
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Mesoscopic physics" ]
13,029,322
https://en.wikipedia.org/wiki/Vertical%20exaggeration
Vertical exaggeration (VE) is a scale that is used in raised-relief maps, plans and technical drawings (cross section perspectives), in order to emphasize vertical features, which might be too small to identify relative to the horizontal scale. Scaling Factor The vertical exaggeration is given by: VE = VS / HS, where VS is the vertical scale and HS is the horizontal scale, both given as representative fractions. For example, if 1 cm vertically represents 200 m and 1 cm horizontally represents 4,000 m, the vertical exaggeration, 20×, is given by: VE = (1 cm / 200 m) ÷ (1 cm / 4,000 m) = (1/20,000) ÷ (1/400,000) = 20. Vertical exaggeration is given as a number; for example 5× means vertical measurements appear 5 times greater than horizontal measurements. A value of 1× indicates that horizontal and vertical scales are identical, and is regarded as having "no vertical exaggeration." Vertical exaggerations less than 1 are not common, but would indicate a reduction in vertical scale (or, equivalently, a horizontal exaggeration). Criticism Some scientists object to vertical exaggeration as a tool that makes an oblique visualization dramatic at the cost of misleading the viewer about the true appearance of the landscape. In some cases, if the vertical exaggeration is too high, the map reader may get confused. References Cartography Descriptive geometry Topography techniques
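The definition reduces to a one-line computation. A Python sketch using exact fractions for the representative scales in the example above:

    from fractions import Fraction

    def vertical_exaggeration(vertical_scale, horizontal_scale):
        # VE = VS / HS, with both scales as representative fractions.
        return Fraction(*vertical_scale) / Fraction(*horizontal_scale)

    # 1 cm : 200 m vertically (1:20,000) and 1 cm : 4,000 m horizontally
    # (1:400,000), as in the example above.
    print(vertical_exaggeration((1, 20_000), (1, 400_000)))  # -> 20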
Vertical exaggeration
[ "Mathematics" ]
252
[ "Geometry", "Geometry stubs" ]
13,030,638
https://en.wikipedia.org/wiki/Cognitive%20module
A cognitive module in cognitive psychology is a specialized tool or sub-unit that can be used by other parts to resolve cognitive tasks. It is used in theories of the modularity of mind and the closely related society of mind theory, and was developed by Jerry Fodor. It became better known throughout cognitive psychology by means of his book, The Modularity of Mind (1983). The nine aspects he lists that make up a mental module are domain specificity, mandatory operation, limited central accessibility, fast processing, informational encapsulation, "shallow" outputs, fixed neural architecture, characteristic and specific breakdown patterns, and characteristic ontogenetic pace and sequencing. Not all of these are necessary for the unit to be considered a module, but they serve as general parameters. The question of their existence and nature is a major topic in cognitive science and evolutionary psychology. Some see cognitive modules as an independent part of the mind. Others also see new thought patterns achieved by experience as cognitive modules. Other theories similar to the cognitive module are cognitive description, cognitive pattern and psychological mechanism. Such a mechanism, if created by evolution, is known as an evolved psychological mechanism. Examples Some examples of cognitive modules: The modules controlling your hands when you ride a bike, to stop it from crashing, by minor left and right turns. The modules that allow a basketball player to accurately put the ball into the basket by tracking ballistic orbits. The modules that recognise hunger and tell you that you need food. This cognitive module may be dysfunctional for people with eating disorders; for them, various non-hunger distress emotions may wrongly make them feel hungry and cause them to eat. The modules that cause you to appreciate a beautiful flower, painting or person. The modules that make humans very efficient in recognising faces, already shown in Rhesus monkeys and in two-month-old babies, see Face perception. The modules that cause some humans to be jealous of their partners' friends. The modules that compute the speeds of incoming vehicles and tell you if you have time to cross without crashing into said vehicles. The modules that cause parents to love and care for their children. The libido modules. Modules that specifically discern the movements of animals. The fight or flight reflex choice modules. Psychological disorders Many common psychological and personality disorders are caused by cognitive modules running amok. Jealousy All people are born with a basic jealousy cognitive module, which developed as an evolutionary strategy to safeguard a mate. This module triggers aggression towards competitors in order to ensure paternity and prevent extramarital offspring. If this module is activated to a strong degree, it becomes a personality disorder. Stalking Stalking is an extreme psychological disorder also related to jealousy and several other cognitive modules. A stalker is a person who behaves as if he had a relationship with another person who is not interested in him. Some behaviors related to this disorder can extend to following the other person on the street, observing him or her at home, or compulsively reviewing their activity on social media, and can even result in harassment.
Paranoia Being suspicious of fellow human beings is a cognitive module linked to human survival traits. Paranoia is generally characterized by excessive suspicion of others or even of situations, perception of irrational threats from others, or disruptive distrust of others. Such behaviour, in its extreme cases, is labeled paranoid schizophrenia by subject-matter experts; in milder forms it is called paranoid personality disorder. Obsessive-compulsive disorder An example of this disorder is commonly illustrated by a person who will repeatedly check that a door is locked. One may constantly wash hands or other body parts, sometimes for hours, to ensure cleanliness. Obsessive-compulsive disorder is an extreme malfunction of a normal adaptive trait present in all humans. Transference The emotional load of a cognitive module developed to solve a particular problem can sometimes be carried over to other situations where it is not appropriate. One may be angry at one's boss, but take the anger out on one's family. Often, the transference is unconscious (see also Subconscious mind and Unconscious mind). In psychotherapy, the patient is made aware of this, which makes it easier to modify the unsuitable behaviour. Freud's theory of sublimation Sublimation presents itself when a certain impulse that is socially unacceptable is deflected into a more suitable public behavior. Freud also introduced the idea of the unconscious, which can be interpreted in terms of cognitive modules: a person is not aware of the initial cause of these modules and may use them inappropriately. Schizophrenia Schizophrenia is a psychotic disorder where cognitive modules are triggered too often, overwhelming the brain with information. The inability to repress overwhelming information is a cause of schizophrenia. Treatment of cognitive module psychological disorders Cognitive therapy is a psychotherapeutic method that helps people better understand the cognitive modules that cause them to do certain things, and teaches them alternative, more appropriate cognitive modules to use in the future. Psychoanalytic view of cognitive modules According to psychoanalytic theory, many cognitive modules are unconscious and repressed, to avoid mental conflicts. Defenses are meant to be cognitive modules used to suppress the awareness of other cognitive modules. Unconscious cognitive modules may influence our behaviour without our being aware of it. Evolutionary psychology view In the research field of evolutionary psychology it is believed that some cognitive modules are inherited and some are created by learning, but the creation of new modules by learning is often guided by inherited modules. For example, the ability to drive a car or throw a basketball is certainly learned and not inherited, but it may make use of inherited modules to rapidly compute trajectories. There is some disagreement among social scientists on the importance of inherited modules to the capabilities of the human mind. Evolutionary psychologists claim that other social scientists do not accept that some modules are partially inherited, while other social scientists claim that evolutionary psychologists exaggerate the importance of inherited cognitive modules. Memory and creative thought A very important aspect of how humans think is the ability, when encountering a situation or problem, to find more or less similar, but not identical, experiences or cognitive modules. This can be compared to what happens if you sound a tone near a piano.
The piano string corresponding to this particular tone will then vibrate, but nearby strings will also vibrate to a lesser extent. Exactly how the human mind does this is not known, but it is believed that when you encounter a situation or problem, many different cognitive modules are activated at the same time, and the mind selects those most useful for understanding the new situation or solving the new problem. Ethics and law Most law-abiding people have cognitive modules that stop them from committing crimes. Criminals have different modules, causing criminal behaviour. Thus, cognitive modules can be a cause of both ethical and unethical behaviour. See also Cognition Cognitive ethology Functionalism (philosophy of mind) Language module Visual modularity References This article is based on an article in Web4Health. Behavior Cognitive architecture Cognitive psychology Ethology Evolutionary psychology Mental content Concepts in the philosophy of mind Theory of mind
Cognitive module
[ "Engineering", "Biology" ]
1,438
[ "Behavior", "Cognitive architecture", "Behavioural sciences", "Cognitive psychology", "Artificial intelligence engineering", "Ethology" ]
13,037,086
https://en.wikipedia.org/wiki/Trichlorophenylsilane
Trichlorophenylsilane is a compound with the formula Si(C6H5)Cl3. Like other organochlorosilanes, trichlorophenylsilane is a possible precursor to silicone. It hydrolyses in water to give HCl and phenylsilanetriol (Si(C6H5)Cl3 + 3 H2O → C6H5Si(OH)3 + 3 HCl), with the latter condensing to a polymeric substance. See also Methyltrichlorosilane Organochlorosilanes Carbosilanes Phenyl compounds
Trichlorophenylsilane
[ "Chemistry" ]
105
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
13,037,119
https://en.wikipedia.org/wiki/Scattering-matrix%20method
In computational electromagnetics, the scattering-matrix method (SMM) is a numerical method used to solve Maxwell's equations, related to the transfer-matrix method. Principles SMM can, for example, use cylinders to model dielectric or metal objects in the domain. It uses the total-field/scattered-field (TF/SF) formalism, in which the total field is written as the sum of the incident and scattered fields at each point in the domain: E_total = E_incident + E_scattered. By assuming series solutions for the total field, the SMM transforms the domain into a cylindrical problem, in which the total field is written in terms of Bessel and Hankel function solutions to the cylindrical Helmholtz equation. The SMM formulation then computes the coefficients of these cylindrical harmonic functions within each cylinder and outside it, while satisfying the electromagnetic boundary conditions. The accuracy of the SMM can be increased by adding (or removing) cylindrical harmonic terms used to model the scattered fields. The SMM eventually leads to a matrix formalism, and the coefficients are calculated through matrix inversion. For N cylinders, with each scattered field modeled using 2M + 1 harmonic terms, the SMM requires solving a system of N(2M + 1) equations. Advantages The SMM is a rigorous and accurate method deriving from first principles. Hence, it is guaranteed to be accurate within the limits of the model, and does not show the spurious effects of numerical dispersion that arise in other techniques such as the finite-difference time-domain (FDTD) method. See also Eigenmode expansion Finite-difference time-domain method Finite element method Maxwell's equations Method of Lines References Scattering, absorption and radiative transfer (optics) Computational electromagnetics
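As a concrete instance of the Bessel/Hankel expansion, the Python sketch below computes the scattered-field coefficients for a plane TM-polarized wave incident on a single dielectric circular cylinder, obtained by matching the axial electric field and the tangential magnetic field at the surface. This is an illustrative single-cylinder special case under one common sign and normalization convention, not the full multi-cylinder SMM.

    import numpy as np
    from scipy.special import jv, jvp, hankel1, h1vp

    def tm_scattering_coefficients(m_index, x, n_max):
        # Coefficients b_n of the outgoing Hankel series for a dielectric
        # cylinder: x = k0*a (size parameter), m_index = n_cyl/n_background.
        # Derived from continuity of E_z and H_phi at r = a.
        n = np.arange(-n_max, n_max + 1)
        mx = m_index * x
        num = jv(n, mx) * jvp(n, x) - m_index * jvp(n, mx) * jv(n, x)
        den = m_index * jvp(n, mx) * hankel1(n, x) - jv(n, mx) * h1vp(n, x)
        return n, num / den

    orders, b = tm_scattering_coefficients(m_index=2.0, x=1.5, n_max=8)
    for n_i, b_i in zip(orders[8:12], b[8:12]):  # n = 0..3
        print(f"n = {n_i}: |b_n| = {abs(b_i):.4f}")

Truncating at n_max mirrors the 2M + 1 harmonic truncation described above; increasing n_max until the coefficients decay provides the convergence control mentioned in the article.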
Scattering-matrix method
[ "Physics", "Chemistry", "Materials_science" ]
344
[ "Materials science stubs", " absorption and radiative transfer (optics)", "Computational electromagnetics", "Scattering stubs", "Computational physics", "Scattering", "Electromagnetism stubs" ]
25,373,662
https://en.wikipedia.org/wiki/Estrada%20index
In chemical graph theory, the Estrada index is a topological index of protein folding. The index was first defined by Ernesto Estrada as a measure of the degree of folding of a protein, which is represented as a path-graph weighted by the dihedral or torsional angles of the protein backbone. This index of degree of folding has found multiple applications in the study of protein functions and protein-ligand interactions. The name "Estrada index" was introduced by de la Peña et al. in 2007. Derivation Let G be a graph of size n and let λ1 ≥ λ2 ≥ ... ≥ λn be a non-increasing ordering of the eigenvalues of its adjacency matrix A. The Estrada index is defined as EE(G) = Σ_{i=1}^{n} e^{λi}. For a general graph, the index can be obtained as the sum of the subgraph centralities of all nodes in the graph. The subgraph centrality of node i is defined as SC(i) = Σ_{k=0}^{∞} (A^k)_{ii} / k!. The subgraph centrality has the following closed form: SC(i) = Σ_{j=1}^{n} [φj(i)]^2 e^{λj}, where φj(i) is the i-th entry of the j-th eigenvector associated with the eigenvalue λj. It is straightforward to realise that EE(G) = Σ_{i=1}^{n} SC(i) = tr(e^A). References Mathematical chemistry Cheminformatics Graph invariants
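Because EE(G) is the trace of the matrix exponential of the adjacency matrix, it is a few lines with NumPy. A small sketch, checked on the 4-cycle, whose adjacency spectrum is {2, 0, 0, −2}:

    import numpy as np

    def estrada_index(adjacency):
        # EE(G) = sum_i exp(lambda_i) = tr(exp(A)) for symmetric A.
        eigenvalues = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
        return float(np.exp(eigenvalues).sum())

    c4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
    print(estrada_index(c4))  # e^2 + 2 + e^-2 ≈ 9.524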
Estrada index
[ "Chemistry", "Mathematics" ]
218
[ "Drug discovery", "Applied mathematics", "Graph theory", "Molecular modelling", "Mathematical chemistry", "Computational chemistry", "Theoretical chemistry", "Mathematical relations", "nan", "Cheminformatics", "Graph invariants" ]
25,379,184
https://en.wikipedia.org/wiki/Carbon%20nanotube%20nanomotor
A device generating linear or rotational motion using carbon nanotube(s) as the primary component is termed a nanotube nanomotor. Nature already has some of the most efficient and powerful kinds of nanomotors. Some of these natural biological nanomotors have been re-engineered to serve desired purposes. However, such biological nanomotors are designed to work in specific environmental conditions (pH, liquid medium, sources of energy, etc.). Laboratory-made nanotube nanomotors, on the other hand, are significantly more robust and can operate in diverse environments, including varied frequencies, temperatures, media and chemical environments. The vast differences in the dominant forces and criteria between the macroscale and the micro/nanoscale offer new avenues to construct tailor-made nanomotors. The various beneficial properties of carbon nanotubes make them the most attractive material on which to base such nanomotors. History Just fifteen years after the world's first micrometer-sized motor was made, Alex Zettl led his group at the University of California at Berkeley to construct the first nanotube nanomotor in 2003. A few concepts and models have been spun off since, including the nanoactuator driven by a thermal gradient as well as the conceptual electron windmill, both of which were revealed in 2008. Size effects Electrostatic forces Coulomb's law states that the electrostatic force between two objects is inversely proportional to the square of their distance. Hence, as the distance is reduced to less than a few micrometers, a large force can be generated from seemingly small charges on two bodies. However, electrostatic charge scales quadratically with size, and thereby the electrostatic force also scales quadratically, as the following scaling relations show: Q = C·V ∝ L^2, since C ∝ ε0·A/L ∝ L and V = E·L ∝ L at a fixed field E. Alternatively, F = Q·E ∝ L^2. Here A is area, C is capacitance, F is electrostatic force, E is electrostatic field, L is length, V is voltage and Q is charge. Despite the scaling nature of the electrostatic force, it is one of the major mechanisms of sensing and actuation in the field of microelectromechanical systems (MEMS) and is the backbone of the working mechanism of the first NEMS nanomotor. The quadratic scaling is alleviated by increasing the number of units generating the electrostatic force, as seen in comb drives in many MEMS devices. Friction Like the electrostatic force, the frictional force scales quadratically with size, F ∝ L^2. Friction is an ever-plaguing problem regardless of the scale of a device. It becomes all the more prominent when a device is scaled down. At the nanoscale it can wreak havoc if not accounted for, because the parts of a nanoelectromechanical systems (NEMS) device are sometimes only a few atoms thick. Furthermore, such NEMS devices typically have a very large surface area-to-volume ratio. Surfaces at the nanoscale resemble a mountain range, where each peak corresponds to an atom or a molecule. Friction at the nanoscale is proportional to the number of atoms that interact between two surfaces. Hence, friction between perfectly smooth surfaces at the macroscale is actually similar to large rough objects rubbing against each other. In the case of nanotube nanomotors, however, the intershell friction in the multi-walled nanotubes (MWNT) is remarkably small. Molecular dynamics studies show that, with the exception of small peaks, the frictional force remains almost negligible for all sliding velocities until a special sliding velocity is reached.
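As a numerical aside on the electrostatic scaling above: the parallel-plate attraction at a fixed field makes the L^2 dependence explicit. A short Python sketch, with plate size and field strength as illustrative assumptions:

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def plate_force(side_m, field_v_per_m):
        # Parallel-plate attraction at fixed field: F = 0.5*eps0*A*E^2,
        # so F scales as L^2 when E is held constant.
        return 0.5 * EPS0 * side_m ** 2 * field_v_per_m ** 2

    for side in (1e-3, 1e-6, 1e-9):  # mm-, um- and nm-scale plates
        print(f"L = {side:.0e} m -> F = {plate_force(side, 3e6):.2e} N")

Each thousand-fold reduction in L reduces the force a million-fold, which is the quadratic scaling stated above.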
Simulations relating the sliding velocity, induced rotation, and inter-shell frictional force to the applied force provide explanations for the low inter-wall friction. Contrary to macroscale expectations, the speed at which an inner tube travels within an outer tube does not follow a linear relationship with the applied force. Instead, the speed remains constant (as in a plateau) despite increasing applied force, occasionally jumping in value to the next plateau. No real rotation is noticed in nonchiral inner tubes. In the case of chiral tubes a true rotation is noticed, and the angular velocity also jumps to plateaus along with the jumps in the linear velocity. These plateaus and jumps can be explained as a natural outcome of frictional peaks for growing velocity, the stable (rising) side of the peak leading to a plateau, the dropping (unstable) side leading to a jump. These peaks occur through parametric excitation of vibrational modes in the walls of the tubes caused by the sliding of the inner tube. With the exception of small peaks that correspond to the speed plateaus, the frictional force remains almost negligible for all sliding velocities until a special sliding velocity is reached. These velocity plateaus correspond to the peaks in the frictional force. The sudden rise in sliding velocity is due to a resonance condition between a frequency that is dependent on the inter-tube corrugation period and particular phonon frequencies of the outer tube which happen to possess a group velocity approximately equal to the sliding velocity. First NEMS nanomotor The first nanomotor can be thought of as a scaled down version of a comparable microelectromechanical systems (MEMS) motor. The nanoactuator consists of a gold plate rotor, rotating about the axis of a multi-walled nanotube (MWNT). The ends of the MWNT rest on a SiO2 layer which form the two electrodes at the contact points. Three fixed stator electrodes (two visible 'in-plane' stators and one 'gate' stator buried beneath the surface) surround the rotor assembly. Four independent voltage signals (one to the rotor and one to each stator) are applied to control the position, velocity and direction of rotation. Recorded angular velocities provide a lower bound of 17 Hz during complete rotations (although the device is capable of operating at much higher frequencies). Fabrication The MWNTs are synthesized by the arc-discharge technique, suspended in 1,2-dichlorobenzene and deposited on degenerately doped silicon substrates with 1 μm of SiO2. The MWNT can be aligned according to pre-made markings on the substrate by using an atomic force microscope (AFM) or a scanning electron microscope (SEM). The rotor, electrodes and the 'in-plane' stators are patterned using electron beam lithography with an appropriately masked photoresist. Gold with a chromium adhesion layer is thermally evaporated, lifted off in acetone and then annealed at 400 °C to ensure better electrical and mechanical contact with the MWNT. The rotor measures 250–500 nm on a side. An HF etch is then used to remove sufficient thickness (500 nm of SiO2) of the substrate to make room for the rotor when it rotates. The Si substrate serves as the gate stator. The MWNT at this point displays a very high torsional spring constant (10^−15 to 10^−13 N·m, with resonant frequencies in the tens of megahertz), hence preventing large angular displacements. To overcome this, one or more outer MWNT shells are compromised or removed in the region between the anchors and the rotor plate, as described next.
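The quoted torsional spring constants and resonant frequencies are mutually consistent, as a rough Python estimate shows; the rotor dimensions and mid-range spring constant below are assumptions for illustration only:

    import math

    RHO_GOLD = 19_300.0  # density of gold, kg/m^3

    def torsional_resonance_hz(kappa_nm, side_m, thickness_m):
        # f = (1/2pi) * sqrt(kappa / I), treating the rotor as a square
        # plate rotating about an in-plane central axis: I = m*side^2/12.
        mass = RHO_GOLD * side_m ** 2 * thickness_m
        inertia = mass * side_m ** 2 / 12.0
        return math.sqrt(kappa_nm / inertia) / (2.0 * math.pi)

    # Assumed rotor: 300 nm square, 30 nm thick; kappa mid-range, 1e-14 N*m
    f = torsional_resonance_hz(1e-14, 300e-9, 30e-9)
    print(f"resonant frequency ≈ {f / 1e6:.0f} MHz")

With these assumed dimensions the estimate lands in the tens of megahertz, matching the range quoted above.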
One simple way to accomplish this is by successively applying very large stator voltages (around 80 V DC) that cause mechanical fatigue and eventually shear the outer shells of the MWNT. An alternative method involves the reduction of the outermost MWNT tubes to smaller, wider concentric nanotubes beneath the rotor plate. The smaller nanotube(s) are fabricated using electrically driven vaporization (EDV), which is a variant of the electrical-breakdown technique. Passing current between the two electrodes typically results in failure of the outermost shell only on one side of the nanotube. Current is therefore passed between one electrode and the center of the MWNT, which results in the failure of the outermost shell between this electrode and the center. The process is repeated on the opposite side to form the short concentric nanotube that behaves like a low-friction bearing along the longer tube. Arrays of nanoactuators Due to the minuscule output generated by a single nanoactuator, arrays of such actuators are needed to accomplish larger tasks. Conventional methods like chemical vapor deposition (CVD) allow the exact placement of nanotubes by growing them directly on the substrate. However, such methods are unable to produce very high quality MWNTs. Moreover, CVD is a high temperature process that would severely limit the compatibility with other materials in the system. A Si substrate is coated with electron beam resist and soaked in acetone to leave only a thin polymer layer. The substrate is selectively exposed to a low energy electron beam of an SEM that activates the adhesive properties of the polymer layer. This forms the basis for the targeting method. The alignment method exploits the surface velocity obtained by a fluid as it flows off a spinning substrate. MWNTs are suspended in ortho-dichlorobenzene (ODCB) by ultrasonication in an Aquasonic bath that separates most MWNT bundles into individual MWNTs. Drops of this suspension are then pipetted one by one onto the center of a silicon substrate mounted on a spin coater rotating at 3000 rpm. Each subsequent drop of the suspension is pipetted only after the previous drop has completely dried to ensure larger density and better alignment of the MWNTs (90% of the MWNTs over 1 μm long lie within 1°). Standard electron beam lithography is used to pattern the remaining components of the nanoactuators. Arc-discharge evaporation technique This technique is a variant of the standard arc-discharge technique used for the synthesis of fullerenes in an inert gas atmosphere. As Figure 1.3 shows, the experiment is carried out in a reaction vessel containing an inert gas such as helium, argon, etc. flowing at a constant pressure. A potential of around 18 V is applied across two graphite electrodes (diameters of the anode and cathode are 6 mm and 9 mm) separated by a short distance of usually 1–4 mm within this chamber. The amount of current (usually 50–100 A) passed through the electrodes to ensure nanotube formation depends on the dimensions of the electrodes, separation distance and the inert gas used. As a result, carbon atoms are ejected from the anode and are deposited onto the cathode, hence shrinking the mass of the anode and increasing the mass of the cathode. The black carbonaceous deposit (a mixture of nanoparticles and nanotubes in a ratio of 1:2) is seen growing on the inside of the cathode while a hard grey metallic shell forms on the outside.
The total yield of nanotubes as a proportion of the starting graphitic material peaks at a pressure of 500 torr, at which point 75% of the graphite rod consumed is converted to nanotubes. The nanotubes formed range from 2 to 20 nm in diameter and from a few to several micrometers in length. This method has several advantages over other techniques such as laser ablation and chemical vapor deposition: fewer structural defects (due to the high growth temperature), better electrical, mechanical and thermal properties, high production rates (several hundred milligrams in ten minutes), etc. Electrical-breakdown technique Large-scale synthesis of carbon nanotubes typically results in a randomly varied proportion of different types of carbon nanotubes. Some may be semiconducting while others may be metallic in their electrical properties. Most applications require the use of specific types of nanotubes. The electrical-breakdown technique provides a means for separating and selecting the desired type of nanotubes. Carbon nanotubes are known to withstand very large current densities, up to 10^9 A/cm2, partly due to the strong sigma bonds between carbon atoms. However, at sufficiently high currents the nanotubes fail, primarily due to rapid oxidation of the outermost shell. This results in a partial conductance drop that becomes apparent within a few seconds. Applying an increased bias produces multiple independent and stepwise drops in conductance (Figure 1.4) resulting from the sequential failure of carbon shells. Current in a MWNT typically travels in the outermost shell due to the direct contact between this shell and the electrodes. This controlled destruction of shells, without disturbing the inner layers of the MWNT, permits the effective separation of the nanotubes. Principle The rotor is made to rotate using electrostatic actuation. Out-of-phase, common-frequency sinusoidal voltages are applied to the two in-plane stators S1 and S2, a doubled-frequency voltage signal to the gate stator S3, and a DC offset voltage to the rotor plate R. By the sequential application of these asymmetrical stator voltages (less than 5 V), the rotor plate can be drawn to successive stators, allowing the plate to complete full rotations. The close proximity between the stators and the rotor plate is one reason why a large force is not required for electrostatic actuation. Reversing the bias causes the rotor to rotate in the opposite direction, as expected. Applications The rotating metal plate could serve as a mirror for ultra-high-density optical sweeping and switching devices, as the plate is at the limit of visible light focusing. An array of such actuators, each serving as a high-frequency mechanical filter, could be used for parallel signal processing in telecommunications. The plate could serve as a paddle for inducing or detecting fluid motion in microfluidic applications. It could serve as a bio-mechanical element in biological systems, a gated catalyst in wet chemistry reactions or as a general sensor element. A charged oscillating metal plate could be used as a transmitter of electromagnetic radiation. Thermal gradient driven nanotube actuators The nanoactuator, as shown in Figure 2.1, comprises two electrodes connected via a long MWNT. A gold plate acting as the cargo is attached to a shorter and wider concentric nanotube. The cargo moves towards the cooler electrode (Figure 2.2) due to the thermal gradient in the longer nanotube induced by the high current passed through it.
The maximum velocity was estimated at approximately 1 μm/s, which is comparable to the speeds attained by kinesin biomotors. Fabrication The MWNT are fabricated using the standard arc-discharge evaporation process and deposited on an oxidized silicon substrate. The gold plate in the center of the MWNT is patterned using electron-beam lithography and Cr/Au evaporation. During the same process, the electrodes are attached to the nanotube. Finally, the electrical-breakdown technique is used to selectively remove a few outer walls of the MWNT. As with the nanoactuator from the Zettl group, this enables low-friction rotation and translation of the shorter nanotube along the axis of the longer tube. The application of the electrical-breakdown technique does not result in the removal of the tube(s) below the cargo. This might be because the metal cargo absorbs the heat generated in the portion of the tube in its immediate vicinity, hence delaying or possibly even preventing tube oxidation in this part. Principle The interaction between the longer and shorter tubes generates an energy surface that confines the motion to specific tracks – translation and rotation. The degrees of translational and rotational motion of the shorter tube are highly dependent on the chiralities of the two tubes, as shown in Figure 2.3. Motion in the nanoactuator showed that the shorter tube tends to follow a path of minimum energy. This path could either have a roughly constant energy or have a series of barriers. In the former case, friction and vibrational motion of atoms can be neglected, whereas a stepwise motion is expected in the latter scenario. Stepwise motion The stepwise motion can be explained by the existence of periodic energy barriers for relative motion between the longer and shorter tubes. For a given pair of nanotubes, the ratio of the step in rotation to the step in translation is typically a constant, the value of which depends on the chirality of the nanotubes. The energy of such barriers can be estimated from the temperature in the nanotube, a lower bound for which is given by the melting temperature of gold (1300 K), since the gold plate is observed to melt (Figure 2.4) into a spherical structure as current is passed through the nanomotor. The motion rate $\gamma$ can be written as an Arrhenius function of the attempt frequency $\nu$, the Boltzmann constant $k_B$, and the temperature $T$ as: $\gamma = \nu \exp(-U_B / k_B T)$ where $U_B$ is the barrier height. Taking the observed rate for $\gamma$ and approximating the attempt frequency in terms of the mass $m$ of the cargo and its contact area with the tube, the barrier height is estimated as 17 μeV per atom (a numerical sketch of this kind of estimate appears after the next subsection). Mechanism for actuation Many proposals were made to explain the driving mechanism behind the nanoactuator. The high current (0.1 mA) required to drive the actuator is likely to cause sufficient dissipation to clean the surface of contaminants, ruling out the possibility of contaminants playing a major role. The possibility of electromigration, where moving electrons displace atomic impurities via momentum transfer in collisions, was also ruled out because reversing the current direction did not affect the direction of displacement. Similarly, the rotational motion could not have been caused by a magnetic field induced by the current passing through the nanotube, because the rotation could be either left- or right-handed depending on the device. A stray electric field effect could not be the driving factor, because the metal plate stayed immobile in highly resistive devices even under a large applied potential.
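To make the barrier estimate above concrete, the Arrhenius relation can be inverted for the barrier height, $U_B = k_B T \ln(\nu/\gamma)$. The minimal Python sketch below does this with assumed, illustrative values for the attempt frequency and step rate; they are not values from the original experiment, which arrived at 17 μeV per atom from the measured motion and contact geometry.

import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def barrier_height(step_rate_hz, attempt_hz, temperature_k):
    # Invert the Arrhenius relation gamma = nu * exp(-U_B / (k_B * T))
    # to recover the total barrier height U_B, in eV.
    return K_B * temperature_k * math.log(attempt_hz / step_rate_hz)

T = 1300.0    # lower-bound tube temperature, K (the gold cargo melts)
nu = 1e12     # assumed phonon-scale attempt frequency, Hz
gamma = 1e3   # assumed observed step rate, Hz

u_total = barrier_height(gamma, nu, T)
print(f"Total barrier height ~ {u_total:.2f} eV")
# Dividing by an assumed number of atoms in the inter-tube contact
# region would give a per-atom barrier, as in the published estimate.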
Of the mechanisms considered, the thermal gradient in the nanotube provides the best explanation for the driving mechanism. Thermal gradient induced motion The induced motion of the shorter nanotube is explained as the reverse of the heat dissipation that occurs in friction, wherein the sliding of two objects in contact results in the dissipation of some of the kinetic energy as phononic excitations caused by the interface corrugation. The presence of a thermal gradient in a nanotube causes a net current of phononic excitations traveling from the hotter region to the cooler region. The interaction of these phononic excitations with mobile elements (the carbon atoms in the shorter nanotube) causes the motion of the shorter nanotube. This explains why the shorter nanotube moves towards the cooler electrode. Changing the direction of the current has no effect on the shape of the thermal gradient in the longer nanotube. Hence, the direction of movement of the cargo is independent of the direction of the applied bias. The direct dependence of the velocity of the cargo on the temperature of the nanotube is inferred from the fact that the velocity of the cargo decreases exponentially as the distance from the midpoint of the long nanotube increases. Shortcomings The temperatures and the thermal gradient that the MWNT are subjected to are very high. On one hand, the high thermal gradient seems to have a highly detrimental effect on the lifetime of such nanoactuators. On the other hand, experiments show that the displacement of the shorter tube is directly proportional to the thermal gradient (see Figure 2.5). Therefore, a compromise needs to be reached to optimize the thermal gradient. The dimensions of the movable nanotube are directly related to the energy barrier height. Although the current model excites multiple phonon modes, selective phonon mode excitation would enable lowering the phonon bath temperature. Applications Pharmaceutical/Nanofluidic – the thermal gradient could be used to drive fluids within nanotubes or in nanofluidic devices, as well as for drug delivery by nanosyringes. Running bio-engineered nanopores using heat generated from adenosine triphosphate (ATP) molecules. Electron windmill Structure As Figure 3.1 shows, the nanomotor consists of a double-walled CNT (DWNT) formed from an achiral (18,0) outer tube clamped to external gold electrodes and a narrower chiral (6,4) inner tube. The central portion of the outer tube is removed using the electrical-breakdown technique to expose the free-to-rotate inner tube. The nanodrill variant also comprises an achiral outer nanotube attached to a gold electrode, but its inner tube is connected to a mercury bath. Principle Conventional nanotube nanomotors make use of static forces, including elastic, electrostatic, frictional and van der Waals forces. The electron windmill model makes use of a new "electron-turbine" drive mechanism that obviates the need for the metallic plates and gates that the above nanoactuators require. When a DC voltage is applied between the electrodes, a "wind" of electrons is produced from left to right. The incident electron flux in the outer achiral tube initially possesses zero angular momentum, but acquires a finite angular momentum after interacting with the inner chiral tube. By Newton's third law, this flux produces a tangential force (and hence a torque) on the inner nanotube, causing it to rotate and giving this model its name – the "electron windmill".
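The scale of the electron-wind torque can be illustrated with a back-of-the-envelope Python sketch. The key assumption below, that each conduction electron transfers on the order of one quantum ħ of angular momentum to the chiral lattice, is ours for illustration and is not a figure from the windmill model.

E_CHARGE = 1.602e-19   # elementary charge, C
HBAR = 1.055e-34       # reduced Planck constant, J*s

def windmill_torque(current_amps, l_per_electron_hbar=1.0):
    # Torque = (electron arrival rate I/e) * (assumed angular momentum
    # transferred per electron, in units of hbar).
    electrons_per_second = current_amps / E_CHARGE
    return electrons_per_second * l_per_electron_hbar * HBAR

print(f"{windmill_torque(1e-6):.1e} N*m")  # assumed 1 uA drive current

At an assumed current of 1 μA this gives a torque of roughly 7×10^-22 N·m, small enough that the ultralow inter-wall friction of the nanotube bearing is essential.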
For moderate voltages, the tangential force produced by the electron wind greatly exceeds the associated frictional forces. Applications Some of the main applications of the electron windmill include: A voltage pulse could cause the inner element to rotate through a calculated angle, making the device behave as a switch or a nanoscale memory element. Modification of the electron windmill to construct a nanofluidic pump by replacing the electrical contacts with reservoirs of atoms or molecules under the influence of an applied pressure difference. See also Carbon nanotube Carbon nanotube actuators Molecular motor Motor (disambiguation) Nanomotor Nanotechnology Synthetic molecular motor References External links Physicists build world's smallest motor using nanotubes and etched silicon Nanotube Nanomotor research project Carbon Nanotube Windmills Powered by 'Electron Wind' Zettl Group Research: Nanotube rotor supplementary material World's First Thermal Nanomotor Propelled By Changes In Temperature Images for the first Nanomotor Propelled by Thermal Gradient Actuators Carbon nanotubes Nanoelectronics
Carbon nanotube nanomotor
[ "Materials_science" ]
4,594
[ "Nanotechnology", "Nanoelectronics" ]
25,380,742
https://en.wikipedia.org/wiki/Misorientation
In materials science, misorientation is the difference in crystallographic orientation between two crystallites in a polycrystalline material. In crystalline materials, the orientation of a crystallite is defined by a transformation from a sample reference frame (i.e. defined by the direction of a rolling or extrusion process and two orthogonal directions) to the local reference frame of the crystalline lattice, as defined by the basis of the unit cell. In the same way, misorientation is the transformation necessary to move from one local crystal frame to some other crystal frame. That is, it is the distance in orientation space between two distinct orientations. If the orientations are specified in terms of matrices of direction cosines $g_A$ and $g_B$, then the misorientation operator $\Delta g_{AB}$ going from $g_A$ to $g_B$ can be defined as follows: $\Delta g_{AB} = g_B g_A^{-1}$ where the term $g_A^{-1}$ is the reverse operation of $g_A$, that is, the transformation from crystal frame A back to the sample frame. This provides an alternate description of misorientation as the successive operation of transforming from the first crystal frame (A) back to the sample frame and subsequently to the new crystal frame (B). Various methods can be used to represent this transformation operation, such as: Euler angles, Rodrigues vectors, axis/angle (where the axis is specified as a crystallographic direction), or unit quaternions. Symmetry and misorientation The effect of crystal symmetry on misorientations is to reduce the fraction of the full orientation space necessary to uniquely represent all possible misorientation relationships. For example, cubic crystals (i.e. FCC) have 24 symmetrically related orientations. Each of these orientations is physically indistinguishable, though mathematically distinct. Therefore, the size of orientation space is reduced by a factor of 24. This defines the fundamental zone (FZ) for cubic symmetries. For the misorientation between two cubic crystallites, each possesses its 24 inherent symmetries. In addition, there exists a switching symmetry, defined by: $\Delta g_{AB} \cong \Delta g_{BA} = \left(\Delta g_{AB}\right)^{-1}$ which recognizes the invariance of misorientation to direction; A→B or B→A. The fraction of the total orientation space in the cubic-cubic fundamental zone for misorientation is then given by: $\frac{1}{24 \times 24 \times 2} = \frac{1}{1152}$ or 1/48 the volume of the cubic fundamental zone. This also has the effect of limiting the maximum unique misorientation angle to 62.8°. Disorientation describes the misorientation with the smallest possible rotation angle out of all symmetrically equivalent misorientations that fall within the FZ (usually specified as having an axis in the standard stereographic triangle for cubics). Calculation of these variants involves application of crystal symmetry operators to each of the orientations during the calculation of misorientation: $\Delta g = \left(O^{crys}_i\, g_B\right)\left(O^{crys}_j\, g_A\right)^{-1}$ where $O^{crys}$ denotes one of the symmetry operators for the material. Misorientation distribution The misorientation distribution (MD) is analogous to the orientation distribution function (ODF) used in characterizing texture. The MD describes the probability of the misorientation between any two grains falling into a small range around a given misorientation $\Delta g$. While similar to a probability density, the MD is not mathematically the same due to the normalization. The intensity in an MD is given as "multiples of random density" (MRD) with respect to the distribution expected in a material with uniformly distributed misorientations. The MD can be calculated by either series expansion, typically using generalized spherical harmonics, or by a discrete binning scheme, where each data point is assigned to a bin and accumulated.
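As an illustration of the symmetry reduction described above, the following minimal Python sketch (NumPy assumed; the helper names are ours, not from a standard library) enumerates the 24 cubic symmetry operators and applies them, together with the switching symmetry, to find the disorientation angle for the copper and S3 texture components used in the worked example later in this article:

import numpy as np
from itertools import product

def euler_to_matrix(phi1, Phi, phi2):
    # Bunge-convention orientation matrix g = Z(phi2).X(Phi).Z(phi1),
    # with the angles given in degrees.
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1, c2, s2 = np.cos(p1), np.sin(p1), np.cos(p2), np.sin(p2)
    c, s = np.cos(P), np.sin(P)
    return np.array([[ c1*c2 - s1*s2*c,  s1*c2 + c1*s2*c, s2*s],
                     [-c1*s2 - s1*c2*c, -s1*s2 + c1*c2*c, c2*s],
                     [ s1*s,            -c1*s,            c   ]])

def cubic_symmetry_operators():
    # The 24 proper rotations of the cube, generated by closing
    # {identity, 90-degree turns about x and y} under multiplication.
    rx = np.array([[1., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
    ry = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
    ops, added = [np.eye(3)], True
    while added:
        added = False
        for o, r in product(list(ops), (rx, ry)):
            cand = o @ r
            if not any(np.allclose(cand, e) for e in ops):
                ops.append(cand)
                added = True
    return ops  # len(ops) == 24

def disorientation_angle(gA, gB):
    # Smallest rotation angle (degrees) over all 24*24*2 = 1152
    # symmetrically equivalent misorientations, switching included.
    ops = cubic_symmetry_operators()
    best = 180.0
    for Oi, Oj in product(ops, repeat=2):
        for g1, g2 in ((gA, gB), (gB, gA)):      # switching symmetry
            dg = (Oi @ g2) @ (Oj @ g1).T          # (O_i g_B)(O_j g_A)^-1
            cos_t = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
            best = min(best, np.degrees(np.arccos(cos_t)))
    return best

g_cu = euler_to_matrix(90, 35, 45)   # copper component
g_s3 = euler_to_matrix(59, 37, 63)   # S3 component
print(disorientation_angle(g_cu, g_s3))  # expected ~19.5 degrees

The exhaustive scan over the 1152 variants is exactly the symmetry reduction described above; production texture-analysis codes typically do the same with precomputed operator tables.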
Graphical representation Discrete misorientations or the misorientation distribution can be fully described as plots in the Euler angle, axis/angle, or Rodrigues vector space. Unit quaternions, while computationally convenient, do not lend themselves to graphical representation because of their four-dimensional nature. For any of the representations, plots are usually constructed as sections through the fundamental zone; along φ2 in Euler angles, at increments of rotation angle for axis/angle, and at constant ρ3 (parallel to <001>) for Rodrigues. Due to the irregular shape of the cubic-cubic FZ, the plots are typically given as sections through the cubic FZ with the more restrictive boundaries overlaid. Mackenzie plots are a one-dimensional representation of the MD plotting the relative frequency of the misorientation angle, irrespective of the axis. Mackenzie determined the misorientation distribution for a cubic sample with a random texture. Example of calculating misorientation The following is an example of the algorithm for determining the axis/angle representation of the misorientation between two texture components given as Euler angles: Copper [90,35,45] S3 [59,37,63] The first step is converting the Euler angle representation $(\varphi_1, \Phi, \varphi_2)$ to an orientation matrix $g$ by: $g = \begin{pmatrix} c_1 c_2 - s_1 s_2 c_\Phi & s_1 c_2 + c_1 s_2 c_\Phi & s_2 s_\Phi \\ -c_1 s_2 - s_1 c_2 c_\Phi & -s_1 s_2 + c_1 c_2 c_\Phi & c_2 s_\Phi \\ s_1 s_\Phi & -c_1 s_\Phi & c_\Phi \end{pmatrix}$ where $c_i$ and $s_i$ represent the cosine and sine of the respective Euler angle. This yields the orientation matrices $g_{Cu}$ and $g_{S3}$ of the copper and S3 components. The misorientation is then: $\Delta g = g_{S3}\, g_{Cu}^{-1}$ The axis/angle description (with the axis as a unit vector $r$) is related to the misorientation matrix by: $\cos\Theta = \frac{\Delta g_{11} + \Delta g_{22} + \Delta g_{33} - 1}{2}, \qquad r_1 = \frac{\Delta g_{23} - \Delta g_{32}}{2\sin\Theta}, \quad r_2 = \frac{\Delta g_{31} - \Delta g_{13}}{2\sin\Theta}, \quad r_3 = \frac{\Delta g_{12} - \Delta g_{21}}{2\sin\Theta}$ (There are errors in the similar formulae for the components of 'r' given in the book by Randle and Engler (see refs.), which will be corrected in the next edition of their book. The above are the correct versions; note that a different form of these equations has to be used if Θ = 180 degrees.) For the copper–S3 misorientation given by $\Delta g$, the axis/angle description is 19.5° about [0.689,0.623,0.369], which is only 2.3° from <221>. This result is only one of the 1152 symmetrically related possibilities but does specify the misorientation. This can be verified by considering all possible combinations of orientation symmetry (including switching symmetry). References Kocks, U.F., C.N. Tomé, and H.-R. Wenk (1998). Texture and Anisotropy: Preferred Orientations in Polycrystals and their Effect on Materials Properties, Cambridge University Press. Mackenzie, J.K. (1958). Second Paper on the Statistics Associated with the Random Disorientation of Cubes, Biometrika 45, 229. Randle, Valerie and Olaf Engler (2000). Introduction to Texture Analysis: Macrotexture, Microtexture & Orientation Mapping, CRC Press. Reed-Hill, Robert E. and Reza Abbaschian (1994). Physical Metallurgy Principles (Third Edition), PWS. Sutton, A.P. and R.W. Balluffi (1995). Interfaces in Crystalline Materials, Clarendon Press. G. Zhu, W. Mao and Y. Yu (1997). "Calculation of misorientation distribution between recrystallized grains and deformed matrix", Scripta mater. 42 (2000) 37–41. Symmetry Crystallography
Misorientation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,412
[ "Materials science", "Crystallography", "Condensed matter physics", "Geometry", "Symmetry" ]
25,383,505
https://en.wikipedia.org/wiki/T5%20retrofit%20conversion
T5 retrofit conversion is a means of converting light fittings designed to use T8 format lamps so that they can use more energy-efficient T5 lamps. This is done by electronically converting the luminaires to high-frequency operation. Differences from other fluorescent lamps T5 lamps are approximately 40% smaller than T8 lamps. T5 lamps have a G5 base while T8 lamps use a G13 base. Conversion technology Conversion kits are available which will work in existing fittings containing switch-start, mains-frequency fluorescent lamp ballasts. The kits convert the fittings to use energy-efficient, high-frequency ballasts and accommodate the smaller diameter of the T5 lamp. The magnetic ballast remains in place but is bypassed, so that it no longer plays a part in the circuit. The new high-frequency ballast draws only 2 W, rather than the 6–10 W of the old ballast, increasing the efficiency of the system. Changing to this type of lamp with the old ballast taken out of operation (rather than left in circuit) results in an increased power factor for the fitting. This increase in power factor is a result of the separate coils used in an electronic ballast, as opposed to the single coil in a magnetic ballast, which allows the electricity to flow more consistently. There are three main types of conversion kits: Lamp-end type – kits which include a replacement starter and two separate components that fit over each end of the T5 lamp. The lamp is then slotted into the existing fitting. Baton type – one-piece kits which slot into the existing fitting and into which the T5 lamp is placed. IP65 types – waterproof kits. Energy efficiency T5 retrofit conversion can maintain existing lighting levels with the higher efficiency of the T5 lamp. However, with kits that operate the lamp on the existing magnetic ballast, the efficiency drops and the lamp life is considerably shortened, as T5 lamps are not designed to be operated at mains frequency but only at high frequency. References External links Building Sustainable Design - New generation energy efficient fluorescent tubes: triphosphor Light fixtures Environmental engineering
T5 retrofit conversion
[ "Chemistry", "Engineering" ]
421
[ "Chemical engineering", "Civil engineering", "Environmental engineering" ]
25,384,704
https://en.wikipedia.org/wiki/TCP%20Cookie%20Transactions
TCP Cookie Transactions (TCPCT) is specified in RFC 6013 (historic status, formerly experimental) as an extension of the Transmission Control Protocol (TCP) intended to secure it against denial-of-service attacks, such as resource exhaustion by SYN flooding and malicious connection termination by third parties. Unlike the original SYN cookies approach, TCPCT does not conflict with other TCP extensions, but requires TCPCT support in the client (initiator) as well as the server (responder) TCP stack. The immediate reason for the TCPCT extension was the deployment of the DNSSEC protocol. Prior to DNSSEC, DNS requests primarily used short UDP packets, but due to the size of DNSSEC exchanges and the shortcomings of IP fragmentation, UDP is less practical for DNSSEC. Thus DNSSEC-enabled requests create a large number of short-lived TCP connections. TCPCT avoids resource exhaustion on the server side by not allocating any resources until the completion of the three-way handshake. Additionally, TCPCT allows the server to release memory immediately after the connection closes, even while the connection persists in the TIME-WAIT state. TCPCT support was partly merged into the Linux kernel in December 2009, but was removed in May 2013 because it was never fully implemented and had a performance cost. TCPCT was deprecated in 2016 in favor of TCP Fast Open. The status of the original RFC was changed to "historic". See also SYN cookies T/TCP (Transactional TCP) TCP Fast Open References Cookie Transactions Computer network security
TCP Cookie Transactions
[ "Technology", "Engineering" ]
337
[ "Cybersecurity engineering", "Computer network stubs", "Computer networks engineering", "Computer network security", "Computing stubs" ]
2,664,153
https://en.wikipedia.org/wiki/Hauyne
Hauyne or haüyne, also called hauynite or haüynite, old name Azure spar, is a rare tectosilicate sulfate mineral with endmember formula Na3Ca(Si3Al3)O12(SO4). As much as 5 wt % of other components may be present, as well as Cl. It is a feldspathoid and a member of the sodalite group. Hauyne was first described in 1807 from samples discovered in Vesuvian lavas in Monte Somma, Italy, and was named in 1807 by Brunn-Neergard for the French crystallographer René Just Haüy (1743–1822). It is sometimes used as a gemstone. Sodalite group Members of the group: haüyne, sodalite, nosean, lazurite, tsaregorodtsevite, tugtupite and vladimirivanovite. All these minerals are feldspathoids. Haüyne forms a solid solution with nosean and with sodalite. Complete solid solution exists between synthetic nosean and haüyne at 600 °C, but only limited solid solution occurs in the sodalite-nosean and sodalite-haüyne systems. The characteristic blue color of sodalite-group minerals arises mainly from caged sulfur radical anions. Unit cell Haüyne belongs to the hexatetrahedral class of the isometric system, 4̄3m, space group P4̄3n. It has one formula unit per unit cell (Z = 1), which is a cube with a side length of about 9 Å. More accurate measurements are as follows: a = 8.9 Å; a = 9.08 to 9.13 Å; a = 9.10 to 9.13 Å; a = 9.11(2) Å; a = 9.116 Å; a = 9.13 Å. Structure All silicates have a basic structural unit that is a tetrahedron with an oxygen ion O at each apex and a silicon ion Si in the middle, forming (SiO4)4−. In tectosilicates (framework silicates) each oxygen ion is shared between two tetrahedra, linking all the tetrahedra together to form a framework. Since each O is shared between two tetrahedra, only half of it "belongs" to the Si ion in either tetrahedron, and if no other components are present then the formula is SiO2, as in quartz. Aluminium ions Al can substitute for some of the silicon ions, forming (AlO4)5− tetrahedra. If the substitution is random the ions are said to be disordered, but in haüyne the Al and Si in the tetrahedral framework are fully ordered. Si has a charge of 4+, but the charge on Al is only 3+. If all the cations (positive ions) are Si then the positive charges on the Si's exactly balance the negative charges on the O's. When Al replaces Si there is a deficiency of positive charge, and this is made up by extra positively charged ions (cations) entering the structure somewhere in between the tetrahedra. In haüyne these extra cations are sodium Na+ and calcium Ca2+, and in addition the negatively charged sulfate group (SO4)2− is also present. In the haüyne structure the tetrahedra are linked to form six-membered rings that are stacked up in an ..ABCABC.. sequence along one direction, and rings of four tetrahedra are stacked up parallel to another direction. The resulting arrangement forms continuous channels that can accommodate a large variety of cations and anions. Appearance Haüyne crystallizes in the isometric system forming rare dodecahedral or pseudo-octahedral crystals that may reach 3 cm across; it also occurs as rounded grains. The crystals are transparent to translucent, with a vitreous to greasy luster. The color is usually bright blue, but it can also be white, grey, yellow, green and pink. In thin section the crystals are colorless or pale blue, and the streak is very pale blue to white. Optical properties Haüyne is isotropic. Truly isotropic minerals have no birefringence, but haüyne is weakly birefringent when it contains inclusions.
The refractive index is 1.50; although this is quite low, similar to that of ordinary window glass, it is the largest value for minerals of the sodalite group. It may show reddish orange to purplish pink fluorescence under longwave ultraviolet light. Physical properties Cleavage is distinct to perfect, and twinning is common, as contact, penetration and polysynthetic twins. The fracture is uneven to conchoidal, the mineral is brittle, and it has a hardness of 5.5 to 6, almost as hard as feldspar. All the members of the sodalite group have quite low densities, less than that of quartz; haüyne is the densest of them all, but still its specific gravity is only 2.44 to 2.50. If haüyne is placed on a glass slide and treated with nitric acid HNO3, and the solution is then allowed to evaporate slowly, monoclinic needles of gypsum form. This distinguishes haüyne from sodalite, which forms cubic crystals of chloride under the same conditions. The mineral is not radioactive. Geological setting and associations Haüyne occurs in phonolites and related leucite- or nepheline-rich, silica-poor, igneous rocks; less commonly in nepheline-free extrusives and metamorphic rocks (marble). Associated minerals include nepheline, leucite, titanian andradite, melilite, augite, sanidine, biotite, phlogopite and apatite. Localities The type locality is Lake Nemi, Alban Hills, Rome Province, Latium, Italy. Occurrences include: Canary Islands: A pale blue mineral intermediate between haüyne and lazurite has been found in spinel dunite xenoliths from La Palma, Canary Islands. Ecuador: Phenocrysts found in alkaline extrusive rocks (tephrite), a product of the effusive volcanism of the Sumaco volcano of northeast Ecuador. Germany: In ejecta of hornblende-haüyne-scapolite rock from the Laach lake volcanic complex, Eifel, Rhineland-Palatinate Italy: Anhedral blue to dark grey phenocrysts in leucite-melilite-bearing lava at Monte Vulture, Melfi, Basilicata, Potenza Italy: Millimetric transparent blue crystals in ejecta consisting mainly of K-feldspar and plagioclase from Albano Laziale, Roma Italy: Ejected blocks in the peperino of the Alban Hills, Rome Province, Latium, contain white octahedral haüyne associated with leucite, garnet, melilite and latiumite. US: Haüyne of metamorphic origin occurs at the Edwards Mine, St. Lawrence County, New York. US: Haüyne occurs in nepheline alnoite with melilite, phlogopite and apatite at Winnett, Petroleum County, Montana. US: Haüyne is common in small quantities as phenocrysts in phonolite and lamprophyre in the Cripple Creek Mining District, Colorado. See also References External links JMol: http://rruff.geo.arizona.edu/AMS/viewJmol.php?id=05334 Feldspathoid Sodalite group Sodium minerals Calcium minerals Aluminium minerals Cubic minerals Minerals in space group 218 Luminescent minerals Gemstones
Hauyne
[ "Physics", "Chemistry" ]
1,625
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
2,664,158
https://en.wikipedia.org/wiki/Center%20of%20percussion
The center of percussion is the point on an extended massive object attached to a pivot where a perpendicular impact will produce no reactive shock at the pivot. Translational and rotational motions cancel at the pivot when an impulsive blow is struck at the center of percussion. The center of percussion is often discussed in the context of a bat, racquet, door, sword or other extended object held at one end. The same point is called the center of oscillation for the object suspended from the pivot as a pendulum, meaning that a simple pendulum with all its mass concentrated at that point will have the same period of oscillation as the compound pendulum. In sports, the center of percussion of a bat, racquet, or club is related to the so-called "sweet spot", but the latter is also related to vibrational bending of the object. Explanation Imagine a rigid beam suspended from a wire by a fixture that can slide freely along the wire at point P, as shown in the Figure. An impulsive blow is applied from the left. If it is below the center of mass (CM) it will cause the beam to rotate counterclockwise around the CM and also cause the CM to move to the right. The center of percussion (CP) is below the CM. If the blow falls above the CP, the rightward translational motion will be bigger than the leftward rotational motion at P, causing the net initial motion of the fixture to be rightward. If the blow falls below the CP the opposite will occur, rotational motion at P will be larger than translational motion and the fixture will move initially leftward. Only if the blow falls exactly on the CP will the two components of motion cancel out to produce zero net initial movement at point P. When the sliding fixture is replaced with a pivot that cannot move left or right, an impulsive blow anywhere but at the CP results in an initial reactive force at the pivot. Calculating the center of percussion General case For a free, rigid beam, an impulsive force $F$ is applied at right angles at a point of impact, a distance $b$ from the center of mass (CM). The force results in a change in velocity $v$ of the CM, i.e.: $F = M \frac{dv}{dt}$ where $M$ is the mass of the beam. Moreover, the force produces a torque $Fb$ about the CM, which results in a change in angular velocity $\omega$ of the beam, i.e.: $Fb = I \frac{d\omega}{dt}$ where $I$ is the moment of inertia around the CM. For any point P a distance $p$ on the opposite side of the CM from the point of impact, the change in velocity of point P is: $dv_P = dv - p\, d\omega$ Hence, the acceleration at P due to the impulsive blow is: $a_P = \frac{F}{M} - \frac{p b F}{I}$ The center of percussion (CP) is the point where this acceleration is zero (i.e. $a_P = 0$), while the force is non-zero (i.e. $F \neq 0$). Thus, at the center of percussion, the condition is: $\frac{1}{M} = \frac{p b}{I}$ Therefore, the CP is at a distance from the CM given by: $b = \frac{I}{M p}$ Note that P, the rotation axis, need not be at the end of the beam, but can be chosen at any distance $p$ from the CM. The length $L_0 = p + b$ also defines the center of oscillation of a physical pendulum, that is, the position of the mass of a simple pendulum that has the same period as the physical pendulum. Center of percussion of a uniform beam For the special case of a beam of uniform density of length $L$, the moment of inertia around the CM is: $I = \frac{1}{12} M L^2$ (see moment of inertia for derivation), and for rotation about a pivot at the end, $p = \frac{L}{2}$. This leads to: $b = \frac{I}{M p} = \frac{M L^2 / 12}{M L / 2} = \frac{L}{6}$ It follows that the CP is at $p + b = \frac{L}{2} + \frac{L}{6} = \frac{2L}{3}$, i.e. 2/3 of the length of the uniform beam from the pivoted end.
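A minimal numerical check of these formulas (a Python sketch with illustrative values; the function name is ours) confirms the uniform-beam result:

def center_of_percussion_offset(moment_of_inertia, mass, pivot_to_cm):
    # Distance b of the center of percussion beyond the center of mass,
    # for a pivot a distance p on the other side of the CM: b = I/(M p).
    return moment_of_inertia / (mass * pivot_to_cm)

# Uniform beam of length L pivoted at one end (illustrative values):
L, M = 1.0, 1.0
I_cm = M * L**2 / 12          # moment of inertia about the CM
p = L / 2                     # pivot-to-CM distance
b = center_of_percussion_offset(I_cm, M, p)
print(p + b)                  # 0.666..., i.e. the CP sits at 2L/3 from the pivot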
Some applications For example, a swinging door that is stopped by a doorstop placed at 2/3 of the width of the door from the hinges will be stopped with minimal shaking of the door, because the hinged end is subjected to no net reactive force. (This point is also the node of the second vibrational harmonic, which further minimizes vibration.) The sweet spot on a baseball bat is generally defined as the point at which the impact feels best to the batter. The center of percussion defines a place where, if the bat strikes the ball and the batter's hands are at the pivot point, the batter feels no sudden reactive force. However, since a bat is not a rigid object, the vibrations produced by the impact also play a role. Also, the pivot point of the swing may not be at the place where the batter's hands are placed. Research has shown that the dominant physical mechanism in determining where the sweet spot is arises from the location of nodes in the vibrational modes of the bat, not the location of the center of percussion. The center of percussion concept can be applied to swords. Because swords are flexible objects, the "sweet spot" for such cutting weapons depends not only on the center of percussion but also on the flexing and vibrational characteristics. References Physical quantities Mechanics Percussion
Center of percussion
[ "Physics", "Mathematics", "Engineering" ]
1,020
[ "Physical phenomena", "Point (geometry)", "Physical quantities", "Quantity", "Geometric centers", "Mechanics", "Mechanical engineering", "Physical properties", "Symmetry" ]
2,668,299
https://en.wikipedia.org/wiki/Java%20Modeling%20Language
The Java Modeling Language (JML) is a specification language for Java programs, using Hoare-style pre- and postconditions and invariants, that follows the design by contract paradigm. Specifications are written as Java annotation comments to the source files, which hence can be compiled with any Java compiler. Various verification tools, such as a runtime assertion checker and the Extended Static Checker (ESC/Java), aid development. Overview JML is a behavioural interface specification language for Java modules. JML provides semantics to formally describe the behavior of a Java module, preventing ambiguity with regard to the module designers' intentions. JML inherits ideas from Eiffel, Larch and the Refinement Calculus, with the goal of providing rigorous formal semantics while still being accessible to any Java programmer. Various tools are available that make use of JML's behavioral specifications. Because specifications can be written as annotations in Java program files, or stored in separate specification files, Java modules with JML specifications can be compiled unchanged with any Java compiler. Syntax JML specifications are added to Java code in the form of annotations in comments. Java comments are interpreted as JML annotations when they begin with an @ sign. That is, comments of the form //@ <JML specification> or /*@ <JML specification> @*/ Basic JML syntax provides the following keywords:

requires – Defines a precondition on the method that follows.
ensures – Defines a postcondition on the method that follows.
signals – Defines a postcondition for when a given Exception is thrown by the method that follows.
signals_only – Defines what exceptions may be thrown when the given precondition holds.
assignable – Defines which fields are allowed to be assigned to by the method that follows.
pure – Declares a method to be side-effect free (like assignable \nothing, but can also throw exceptions). Furthermore, a pure method is supposed to always either terminate normally or throw an exception.
invariant – Defines an invariant property of the class.
loop_invariant – Defines a loop invariant for a loop.
also – Combines specification cases and can also declare that a method is inheriting specifications from its supertypes.
assert – Defines a JML assertion.
spec_public – Declares a protected or private variable public for specification purposes.

Basic JML also provides the following expressions:

\result – An identifier for the return value of the method that follows.
\old(<expression>) – A modifier to refer to the value of the <expression> at the time of entry into a method.
(\forall <decl>; <range-exp>; <body-exp>) – The universal quantifier.
(\exists <decl>; <range-exp>; <body-exp>) – The existential quantifier.
a ==> b – a implies b
a <== b – a is implied by b
a <==> b – a if and only if b

as well as standard Java syntax for logical and, or, and not. JML annotations also have access to Java objects, object methods and operators that are within the scope of the method being annotated and that have appropriate visibility. These are combined to provide formal specifications of the properties of classes, fields and methods.
For example, an annotated example of a simple banking class may look like:

public class BankingExample {

    public static final int MAX_BALANCE = 1000;
    private /*@ spec_public @*/ int balance;
    private /*@ spec_public @*/ boolean isLocked = false;

    //@ public invariant balance >= 0 && balance <= MAX_BALANCE;

    //@ assignable balance;
    //@ ensures balance == 0;
    public BankingExample() {
        this.balance = 0;
    }

    //@ requires 0 < amount && amount + balance < MAX_BALANCE;
    //@ assignable balance;
    //@ ensures balance == \old(balance) + amount;
    public void credit(final int amount) {
        this.balance += amount;
    }

    //@ requires 0 < amount && amount <= balance;
    //@ assignable balance;
    //@ ensures balance == \old(balance) - amount;
    public void debit(final int amount) {
        this.balance -= amount;
    }

    //@ ensures isLocked == true;
    public void lockAccount() {
        this.isLocked = true;
    }

    //@   requires !isLocked;
    //@   ensures \result == balance;
    //@ also
    //@   requires isLocked;
    //@   signals_only BankingException;
    public /*@ pure @*/ int getBalance() throws BankingException {
        if (!this.isLocked) {
            return this.balance;
        } else {
            throw new BankingException();
        }
    }
}

Full documentation of JML syntax is available in the JML Reference Manual. Tool support A variety of tools provide functionality based on JML annotations. The Iowa State JML tools provide an assertion-checking compiler jmlc which converts JML annotations into runtime assertions, a documentation generator jmldoc which produces Javadoc documentation augmented with extra information from JML annotations, and a unit test generator jmlunit which generates JUnit test code from JML annotations. Independent groups are working on tools that make use of JML annotations. These include: ESC/Java2, an extended static checker which uses JML annotations to perform more rigorous static checking than is otherwise possible. OpenJML declares itself the successor of ESC/Java2. Daikon, a dynamic invariant generator. KeY, which provides an open source theorem prover with a JML front-end and an Eclipse plug-in (JML Editing) with support for syntax highlighting of JML. Krakatoa, a static verification tool based on the Why verification platform and using the Coq proof assistant. JMLEclipse, a plugin for the Eclipse integrated development environment with support for JML syntax and interfaces to various tools that make use of JML annotations. Sireum/Kiasan, a symbolic execution based static analyzer which supports JML as a contract language. JMLUnit, a tool to generate files for running JUnit tests on JML annotated Java files. TACO, an open source program analysis tool that statically checks the compliance of a Java program against its Java Modeling Language specification. References Gary T. Leavens and Yoonsik Cheon. Design by Contract with JML; Draft tutorial. Gary T. Leavens, Albert L. Baker, and Clyde Ruby. JML: A Notation for Detailed Design; in Haim Kilov, Bernhard Rumpe, and Ian Simmonds (editors), Behavioral Specifications of Businesses and Systems, Kluwer, 1999, chapter 12, pages 175–188. Gary T. Leavens, Erik Poll, Curtis Clifton, Yoonsik Cheon, Clyde Ruby, David Cok, Peter Müller, Joseph Kiniry, Patrice Chalin, and Daniel M. Zimmerman. JML Reference Manual (DRAFT), September 2009. HTML Marieke Huisman, Wolfgang Ahrendt, Daniel Bruns, and Martin Hentschel. Formal specification with JML. 2014. download (CC-BY-NC-ND) External links JML website Java platform Formal specification languages Articles with example Java code
Java Modeling Language
[ "Technology" ]
1,558
[ "Computing platforms", "Java platform" ]
22,500,506
https://en.wikipedia.org/wiki/Doramad%20Radioactive%20Toothpaste
Doramad Radioactive Toothpaste (Doramad Radioaktive Zahncreme) was a brand of toothpaste produced in Germany by Auergesellschaft of Berlin from the 1920s through World War II. It was known for containing thorium, a radioactive metal, and is an example of radioactive quackery. Development The toothpaste was slightly radioactive because it contained small amounts of thorium obtained from monazite sands. Auergesellschaft used thorium and rare-earth elements in making industrial products including mantles for gas lanterns; the toothpaste was produced as a byproduct. Its radioactive content was promoted as imparting health benefits, including antibacterial action and a contribution to strengthening the "defenses of teeth and gums". According to the manufacturer's marketing materials, it was said to work because the biological properties of the thorium increased the circulation of blood in the gums, destroyed germs, and increased the "life force" in the tissues of the mouth. Effect The company promised radiantly white teeth and the destruction of bacteria through the ionizing radiation of the radioactive substances. The toothpaste was considered at the time a milestone of technical achievement and was touted as a "miracle remedy". Any lasting harm caused by the product's ionizing radiation is largely undocumented. Only after the atomic bombs were used in Hiroshima and Nagasaki were the harmful effects of ionizing radiation widely recognized; the toothpaste's health claims had no scientific basis. Written on the packaging was: Besondere biologische Heilwirkungen durch Radium-Strahlen. Tausendfach ärztlich verordnet und empfohlen. Special biological healing effects by radium rays. A thousand times medically prescribed and recommended. On the back of the toothpaste tube was the following: Was leistet Doramad? Durch ihre radioaktive Strahlung steigert sie die Abwehrkräfte von Zahn u. Zahnfleisch. Die Zellen werden mit neuer Lebensenergie geladen, die Bakterien in ihrer zerstörenden Wirksamkeit gehemmt. Daher die vorzügliche Vorbeugungs- und Heilwirkung bei Zahnfleischerkrankungen. Poliert den Schmelz aufs Schonendste weiß und glänzend. Hindert Zahnsteinansatz. Schäumt herrlich, schmeckt neuartig, angenehm, mild u. erfrischend. Ausgiebig im Gebrauch. What does Doramad do? Through its radioactivity, it increases the defenses of teeth and gums. The cells are charged with a new vigorous life energy, which inhibits bacteria in their destructive ability. Hence the exquisite prevention and healing effect on gum diseases. Polishes enamel to the softest shiny white. Prevents tartar approach. Good foam, new taste, pleasant, mild and refreshing. Use extensively. World War II During the German military administration in occupied France in World War II, a group of German scientists stole all the thorium they could. The Alsos Mission thought they were using the heavy elements for the refinement of uranium to be used in an atomic bomb. However, after Allied agents captured and investigated a German chemical company's representative, it was revealed that the scientists were not seeking to develop an atomic bomb at all; rather, they were attempting to make thorium toothpaste. According to physicist Samuel Goudsmit in a 1947 issue of Time, the German chemical company's officials had realized that, at the end of the war, they would no longer be able to make money producing wartime equipment such as gas masks or carbons for searchlights, and they decided cosmetic products would be their best option for future sales.
One of the company's officials already had a patent for thorium toothpaste (likely unrelated to Doramad) and, influenced by marketing for Pepsodent "irium" toothpaste in the United States, the company sought to gain a monopoly on all the thorium it could find in order to produce as much thorium toothpaste as possible after the war, which led to the company's scientists stealing all of France's thorium. The identity and fate of the chemical company, the fate of the stolen thorium, and whether or not the thorium toothpaste was actually produced after the war are unknown. See also List of toothpaste brands Index of oral health and dental articles References External links Gehes Codex der pharmazeutischen Spezialpräparate (1926) Brands of toothpaste Oral hygiene Health in Germany Radioactive quackery Thorium
Doramad Radioactive Toothpaste
[ "Chemistry" ]
997
[ "Radioactive quackery", "Radioactivity" ]
22,500,560
https://en.wikipedia.org/wiki/NICO%20Clean%20Tobacco%20Card
The NICO Clean Tobacco Card was a device exported from Japan to the United States in the 1960s, consisting of a small card impregnated with uranium ore. The card was to be placed inside a pack of cigarettes, and the producers claimed that the radiation emitted by the card would reduce tar and nicotine, and enhance the smoking experience. A similar product, the Nicotine Alkaloid Control Plate, was produced in the 1990s but not exported. References Cigarettes Radioactive quackery Smoking in Japan Tobacco in Japan
NICO Clean Tobacco Card
[ "Chemistry" ]
104
[ "Radioactive quackery", "Radioactivity" ]
22,505,779
https://en.wikipedia.org/wiki/Source%20measure%20unit
A source measure unit (SMU) is a type of electronic test equipment which can source voltage and current and measure them as it does so. Overview The source measure unit (SMU), or source-measurement unit, is an electronic instrument that is capable of both sourcing and measuring at the same time. It can precisely force voltage or current and simultaneously measure precise voltage and/or current. SMUs are used for test applications requiring high accuracy, high resolution and measurement flexibility. Such applications include I-V characterization and testing of semiconductors and other non-linear devices and materials, where the sourced voltage and current span both positive and negative values. To accomplish this, SMUs have four-quadrant outputs. For characterization purposes SMUs are bench instruments similar to a curve tracer. They are also commonly used in automatic test equipment and usually are equipped with an interface such as GPIB or USB to enable connection to a computer. History Semiconductor characterization led to the development of source measure units. The HP4145A semiconductor parameter analyzer introduced in 1982 was capable of a complete DC characterization of semiconductor devices and materials. It consisted of four independently controlled source monitor units (the precursor to source measure units) enclosed in a mainframe. The Keithley 236, introduced in 1989, was the first stand-alone SMU and allowed system builders to integrate one or more SMUs with separate PC control. Over time stand-alone SMUs have evolved to offer a broader range of current, voltage and power levels and price points for applications beyond semiconductor characterization. Smaller form factors made possible through the use of modern computing technologies have allowed system builders to integrate SMUs into rack-and-stack systems for larger-scale production test applications. Operation A SMU integrates a highly stable DC power source, operating as a constant current source or as a constant voltage source, and a high precision multimeter. It typically has four terminals: two for sourcing and measurement and two more for Kelvin, or remote sense, connections. Power is simultaneously sourced (positive) or sunk (negative) at a pair of terminals while the current or voltage across those terminals is measured. SMU vs. power supply A power supply is mainly intended to provide appropriate power for a particular application. Due to this, the majority of power supplies are one-quadrant (source only, with fixed polarity), and in most cases offer constant-voltage operation. Bench power supplies might add constant-current operation as well as limited measurement capabilities, but these are in many cases still one-quadrant only and with margins of error acceptable for coarse lab work. Some high-end lab power supplies have two- or four-quadrant operation (source and sink, with fixed or dual polarity), which is an essential feature of an SMU. However, many of these still focus mainly on providing power to an application, with measurement capability as a secondary priority. These may have advanced capabilities for controlling the power output, but might lack things like specialized test modes or monitoring options tailored for precise and easy power characterization. This particular class of power supplies can be regarded as the predecessor of the SMU, where the SMU differs in that it adds features particularly aimed towards characterization. SMU vs.
DMM The built-in sourcing capabilities of an SMU work with the instrument's measurement capabilities to reduce measurement uncertainty and support low-current and more flexible resistance measurements. In voltage measurements, system-level leakage can be suppressed more easily than with separate instruments. In current measurements, the SMU's design reduces voltage burden. For resistance measurements, SMUs provide programmable source values, useful for protecting the device being tested. Significant features Notable features of SMUs include the following: I and V sweeping – Sweep capabilities offer a way to test devices under a range of conditions with different source, delay and measure characteristics. These can include fixed-level, linear/log and pulsed sweeps. On-board processor – Some SMUs further improve instrument integration, communication and test time by adding an on-board script processor. User-defined on-board script execution offers capabilities for controlling test sequencing/flow, decision making, and instrument autonomy. Contact check – SMUs can verify good connections to the device under test before the test begins. Some of the problems this function can detect include contact fatigue, breakage, contamination, corrosion, loose or broken connections and relay failures. See also Semiconductor curve tracer Voltage source Current source Digital multimeter Power supply Electrometer Electronic load References External links Video of Keithley 2450 SMU Review and Experiments Source Measurement Unit solutions from National Instruments SMMU07 Source Measurement Multiplex Unit from FRANK Germany Source Measure Unit GS610 Single channel Source Measure Unit from Yokogawa GS820 Two channel Source Measure Unit from Yokogawa Aim-TTi SMU4000 series PowerFlex SMU Electronic test equipment
Source measure unit
[ "Technology", "Engineering" ]
991
[ "Electronic test equipment", "Measuring instruments" ]
24,005,385
https://en.wikipedia.org/wiki/C12H14O3
The molecular formula C12H14O3 (molar mass: 206.241 g/mol) may refer to: Ethyl methylphenylglycidate Acetyleugenol
C12H14O3
[ "Chemistry" ]
56
[ "Isomerism", "Set index articles on molecular formulas" ]
24,005,535
https://en.wikipedia.org/wiki/C29H42O6
The molecular formula C29H42O6 may refer to: Hydrocortisone cypionate, a synthetic glucocorticoid corticosteroid and a corticosteroid ester Kendomycin, an anticancer macrolide Molecular formulas
C29H42O6
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,005,553
https://en.wikipedia.org/wiki/C24H40O3
The molecular formula C24H40O3 (molar mass: 376.57 g/mol, exact mass: 376.2977 u) may refer to: CP 55,940 Lithocholic acid (LCA), or 3α-hydroxy-5β-cholan-24-oic acid Molecular formulas
C24H40O3
[ "Physics", "Chemistry" ]
88
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,005,611
https://en.wikipedia.org/wiki/C16H30O
The molecular formula C16H30O (molar mass: 238.41 g/mol, exact mass: 238.2297 u) may refer to: Bombykol Cyclohexadecanone Muscone Molecular formulas
C16H30O
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,006,523
https://en.wikipedia.org/wiki/Submersible%20mixer
A submersible mixer is a mechanical device that is used to mix sludge tanks and other liquid volumes. Submersible mixers are often used in sewage treatment plants to keep solids in suspension in the various process tanks and/or sludge holding tanks. Working principle The submersible mixer is driven by an electric motor coupled to the mixer's propeller, either direct-coupled or via a planetary gear reducer. The propeller rotates and creates liquid flow in the tank, which in turn keeps the solids in suspension. The submersible mixer is typically installed on a guide rail system, which enables the mixer to be retrieved for periodic inspection and preventive maintenance. Applications Examples of applications where submersible mixers are commonly used: Anoxic/anaerobic tanks and oxidation basins (activated sludge) at sewage treatment plants IFAS, MBBR, and other fixed-film biocarrier processes Mixing of sewage wet wells Reception tanks and post-digestion tanks at biogas facilities Liquid manure storage tanks at dairy, hog, and poultry farms Waste processing at slaughterhouses, poultry abattoirs, fish processing plants, etc. References Sewerage
Submersible mixer
[ "Chemistry", "Engineering", "Environmental_science" ]
240
[ "Sewerage", "Environmental engineering", "Water pollution" ]
24,007,403
https://en.wikipedia.org/wiki/Enterprise%20interoperability
Enterprise interoperability is the ability of an enterprise—a company or other large organization—to functionally link activities, such as product design, supply chains and manufacturing, in an efficient and competitive way. Research on enterprise interoperability is practised in various domains (enterprise modelling, ontologies, information systems, architectures and platforms), within which it needs to be positioned. Enterprise interoperability topics Interoperability in enterprise architecture Enterprise architecture (EA) presents a high-level design of enterprise capabilities that frames successful IT projects in coherence with enterprise principles and business-related requirements. EA covers mainly (i) the analysis and validation of business capabilities; (ii) the development of business, application, data and technical architectures and solutions; and finally (iii) the control of programme and project implementation and governance. The application of an EA methodology feeds the enterprise repository reference frame with sets of building blocks used to compose the targeted system. Interoperability can be considered either as a principle, requirement or constraint that impacts the definition of the patterns used to compose building blocks in the targeted architectural roadmap. In this scope, EA within the TOGAF perspective aims to reconcile interoperability requirements with potential solutions that make the developed systems interoperable. To keep the interoperability challenge present in the subsequent steps of the system's lifecycle, several models and frameworks have been developed under the topic of enterprise interoperability. Enterprise interoperability frameworks To preserve interoperability, several enterprise interoperability frameworks can be identified in the literature: 2003: IDEAS: Interoperability Developments for Enterprise Application and Software. 2004: EIF: The European Interoperability Framework 2004: e-GIF: e-Government Interoperability Framework 2006: FEI: The Framework for Enterprise Interoperability 2006: C4IF: Connection, Communication, Consolidation, Collaboration Interoperability Framework 2007: AIF: Athena Interoperability Framework 2007: Enterprise Architecture Framework for Agile and Interoperable Virtual Enterprises The majority of these frameworks consider the enterprise at several aspects, viewpoints or abstraction levels: business, process, knowledge, application, technology, data, technical, etc., and propose guidelines to support modeling and connection capabilities between these levels. The semantic challenge is considered transversal to all these abstraction levels. Setting up and applying the guidelines and methodologies developed within these frameworks requires modeling efforts that identify and connect artifacts. Interoperability in software engineering The evolution of IT technologies aims to outsource IT capabilities to vendors to manage for use on demand. The evolution pathway starts from packaged solutions and goes through Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and, recently, the Cloud.
Interoperability efforts are still mainly expected among these levels: strategy to business business to processes processes to application Dealing with business process definition, alignment, collaboration and interoperability, several international standards propose methodologies and guidelines from these perspectives: ISO 15704: Requirements for enterprise-reference architectures and methodologies CEN-ISO DIS 19439: Framework for Enterprise Modeling CEN-ISO WD 19440: Constructs for Enterprise Modeling ISO 18629: Process specification language ISO/IEC 15414: ODP Reference Model, Enterprise Language In addition, recent standards (BPMN, BPEL, etc.) and their implementation technologies offer relevant integration capabilities. Furthermore, model-driven engineering provides capabilities to connect, transform and refine models in support of interoperability. Metrics for interoperability maturity assessment The following approaches propose metrics to assess interoperability maturity: LISI: Levels of Information Systems Interoperability OIM: Organizational Interoperability Model NMI: NC3TA Reference Model for Interoperability LCIM: Levels of Conceptual Interoperability Model EIMM: Enterprise Interoperability Maturity Model Smart Grid Interoperability Maturity Model Rating System For the several interoperability aspects identified previously, the listed maturity approaches define interoperability categories (or dimensions) and propose quantitative as well as qualitative cross-cutting criteria to assess them (see the sketch below). While no single maturity approach covers all interoperability aspects, some propositions go deeper into defining metric dimensions for one interoperability aspect, such as the business interoperability measurement proposed by Aneesh. See also INTEROP-VLab References External links INTEROP-VLab Interoperability Enterprise modelling Management cybernetics Knowledge representation
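A minimal sketch, in Python, of the weakest-link scoring idea common to maturity models such as LISI or EIMM; the dimension names and ordinal ratings below are illustrative assumptions, not part of any cited framework:

def maturity_level(scores: dict) -> int:
    # Overall interoperability maturity is bounded by the least mature
    # dimension, since any weak dimension blocks end-to-end interoperation.
    return min(scores.values())

ratings = {"business": 3, "process": 2, "data": 4, "technology": 3}
print(maturity_level(ratings))  # 2: the process dimension caps the level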
Enterprise interoperability
[ "Engineering" ]
909
[ "Systems engineering", "Enterprise modelling", "Telecommunications engineering", "Interoperability" ]
24,007,866
https://en.wikipedia.org/wiki/Chlormayenite
Chlormayenite (after Mayen, Germany), Ca12Al14O32[☐4Cl2], is a rare calcium aluminium oxide mineral of cubic symmetry. It was originally reported in 1964 from the Eifel volcanic complex, Germany. It is also found at pyrometamorphic sites such as the Hatrurim Formation of Israel and in some burned coal dumps. It occurs in thermally altered limestone xenoliths within basalts in Mayen, Germany and Klöch, Styria, Austria, and in thermally altered limestones of the Hatrurim Formation of Israel. In the limestone xenoliths it occurs with calcite, ettringite, wollastonite, larnite, brownmillerite, gehlenite, diopside, pyrrhotite, grossular, spinel, afwillite, jennite, portlandite, jasmundite, melilite, kalsilite and corundum. In the Hatrurim it occurs with spurrite, larnite, grossite and brownmillerite. Synthetic Ca12Al14O33 and Ca12Al14O32(OH)2 are known; they are stabilized by moisture instead of chlorine. The formula can be written as [Ca12Al14O32]O, which refers to the phase's distinctive feature, an anion diffusion process. Chlormayenite is also found as a calcium aluminate in cement, where its formula is also written as 11CaO·7Al2O3·CaCl2, or C11A7CaCl2 in cement chemist notation. See also Calcium aluminate cements References Oxide minerals Calcium minerals Aluminium minerals Cubic minerals Minerals in space group 220 Cement Concrete
Chlormayenite
[ "Engineering" ]
366
[ "Structural engineering", "Concrete" ]
24,008,131
https://en.wikipedia.org/wiki/C24H27NO2
{{DISPLAYTITLE:C24H27NO2}} The molecular formula C24H27NO2 (molar mass: 361.48 g/mol) may refer to: Levophenacylmorphan Octocrylene N-Phenethylnordesomorphine Molecular formulas
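Since each of these set-index entries is just a molecular formula plus its molar mass, the arithmetic can be reproduced mechanically. The following Python sketch (not part of the original entries) sums standard IUPAC atomic weights for a simple Hill-notation formula; only the four elements used by the nearby entries are included:

import re

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    # Parse element symbols with optional counts, e.g. 'C24H27NO2'.
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHTS[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("C24H27NO2"), 2))  # ~361.48, matching the value above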
C24H27NO2
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,008,134
https://en.wikipedia.org/wiki/C15H22O5
{{DISPLAYTITLE:C15H22O5}} The molecular formula C15H22O5 (molar mass: 282.33 g/mol) may refer to: Artemisinin, a drug used to treat multi-drug resistant strains of falciparum malaria Octyl gallate, an antioxidant and food preservative Molecular formulas
C15H22O5
[ "Physics", "Chemistry" ]
81
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,008,138
https://en.wikipedia.org/wiki/C15H22O3
{{DISPLAYTITLE:C15H22O3}} The molecular formula C15H22O3 (molar mass: 250.33 g/mol) may refer to: Gemfibrozil, an oral drug used to lower lipid levels Nardosinone, a sesquiterpene Octyl salicylate, an ingredient in sunscreens Sterpuric acid, a sesquiterpene Xanthoxin, an apocarotenoid Molecular formulas
C15H22O3
[ "Physics", "Chemistry" ]
107
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,008,145
https://en.wikipedia.org/wiki/C32H48O9
{{DISPLAYTITLE:C32H48O9}} The molecular formula C32H48O9 (molar mass: 576.72 g/mol, exact mass: 576.3298 u) may refer to: Cerberin, a cardiac glycoside Oleandrin, a cardiac glycoside Molecular formulas
C32H48O9
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,008,469
https://en.wikipedia.org/wiki/Eudysmic%20ratio
The eudysmic ratio (also spelled eudismic ratio) represents the difference in pharmacologic activity between the two enantiomers of a drug. In most cases where a chiral compound is biologically active, one enantiomer is more active than the other. The eudysmic ratio is the ratio of activity between the two. A eudysmic ratio significantly differing from 1 means that the two enantiomers are statistically different in activity. The eudysmic ratio (ER) reflects the degree of enantioselectivity of biological systems. For example, (S)-propranolol has an ER of 130, meaning that (S)-propranolol is 130 times more active than its (R)-enantiomer. Terminology The eutomer is the enantiomer having the desired pharmacological activity, e.g., as an active ingredient in a drug. The distomer, on the other hand, is the enantiomer of the eutomer which may have undesired bioactivity or may be bio-inert. A racemic mixture is an equal mixture of both enantiomers, which may be easier to manufacture than a single enantiomeric form. It is often the case that only one of the enantiomers contains all of the wanted bioactivity; the distomer is often less active, has no desired activity, or may even be toxic. In some cases, the eudysmic ratio is so high that it is desirable to separate the two enantiomers instead of leaving the drug as a racemic product. It is also possible that the distomer is not simply completely inactive but actually antagonizes the effects of the eutomer. There are a few examples of chiral drugs where both enantiomers contribute, in different ways, to the overall desired effect. An interesting situation is that in which the distomer antagonizes a side effect of the eutomer, a mutually beneficial combination from a therapeutic standpoint. This is convincingly demonstrated by the diuretic indacrinone. The (R)-(+)-isomer, the eutomer, is responsible for the diuretic action and for undesired uric acid retention, a side effect common to many diuretics. The (S)-(-)-isomer, the distomer, acts as a uricosuric agent and thus antagonizes the side effect caused by the (R)-isomer. A superficial examination of these facts might suggest that marketing this product as a racemate (a 1:1 mixture of both enantiomers) would be desirable, since the enantiomers complement each other, but for optimal action the ideal eutomer-to-distomer ratio for indacrinone has been determined to be 9:1. This is a classical case of a non-racemic drug. Alternatively, it is possible that in the body the distomer converts, at least in part, into the eutomer. Calculation One way the eudysmic ratio is computed is from EC50 or IC50 values: since a lower EC50 or IC50 indicates a more potent compound, the value for the distomer is divided by the value for the eutomer, so that a more potent eutomer yields a ratio greater than 1 (see the sketch below). Whether one chooses to use the EC50 or IC50 depends on the drug in question. Examples Citalopram: steps were taken to separate the more potent enantiomer, escitalopram. Thalidomide is a drug whose two enantiomers cause distinctly different effects from one another. The unforeseen teratogenicity of the (S)-(−)-isomer caused it to become an important case study of stereochemistry in medicine. Although it is possible to chemically isolate just the desired (R)-(+)-isomer from the racemic mixture, the two enantiomers rapidly interconvert in vivo, thus rendering their separation to be of little use. 
Methorphan is another drug whose two enantiomers possess very different binding profiles: the L enantiomer is a potent opioid analgesic, while the D enantiomer is a commonly used over-the-counter cough suppressant which acts as an NMDA antagonist but possesses nearly no opioid activity. In the case of methorphan's morphinan metabolites, the eudysmic ratio is preserved after metabolism, as the D and L metabolites act at the same pharmacological targets as the corresponding methorphan enantiomers but are considerably more potent than their parent compounds. Amino acids are also an example of the eudysmic ratio at work. Nearly all of the amino acids in the human body are called "L" amino acids; despite being chiral, the body almost exclusively creates and uses amino acids in this one configuration. D amino acids, the mirror-image enantiomers of the amino acids in the human body, cannot be incorporated into proteins. D-aspartate and D-serine are two notable exceptions: while they do not appear to ever be incorporated into proteins, they act individually as signalling molecules. However, mammals can metabolize significant amounts of D amino acids by oxidizing them to alpha-ketoacids (most of which are non-chiral), after which transaminases can create L amino acids. There is no reason to believe that humans are exceptional; they have all the required enzymes (DDO, DAO). Some common foods contain near-racemic mixtures of amino acids. See also Enantiopure drug References Pharmacodynamics Stereochemistry
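A minimal Python sketch of the calculation described in the Calculation section above; the EC50 values are illustrative placeholders rather than measured data, and the function simply encodes the convention that a lower EC50 means a more potent compound:

def eudysmic_ratio(ec50_eutomer_nm: float, ec50_distomer_nm: float) -> float:
    # Lower EC50 = more potent, so dividing the distomer's EC50 by the
    # eutomer's gives a ratio > 1 when the eutomer is the more active form.
    return ec50_distomer_nm / ec50_eutomer_nm

# Hypothetical potencies chosen to reproduce the ER = 130 quoted for
# (S)-propranolol above: eutomer active at 2 nM, distomer at 260 nM.
print(eudysmic_ratio(2.0, 260.0))  # 130.0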
Eudysmic ratio
[ "Physics", "Chemistry" ]
1,188
[ "Pharmacology", "Pharmacodynamics", "Stereochemistry", "Space", "nan", "Spacetime" ]
24,008,985
https://en.wikipedia.org/wiki/Nuclear%20Power%20School
The Nuclear Power School (NPS) is a technical training institution operated by the United States Navy in Goose Creek, South Carolina. It serves as a core component of the Navy's program to prepare enlisted sailors, officers, and civilians employed at the Knolls Atomic Power Laboratory and Bettis Atomic Power Laboratory for the operation and maintenance of nuclear power plants aboard surface ships and submarines in the U.S. nuclear navy. The U.S. Navy manages 98 nuclear power plants, including those aboard 71 submarines (each powered by a single reactor) and 11 aircraft carriers (each with two reactors), as well as two Moored Training Ships (MTS) and two land-based training plants. NPS is the cornerstone of the Navy's nuclear training pipeline. Enlisted personnel typically attend Nuclear Field "A" School before beginning at NPS, while officers and some civilian contractors enter the program with a college degree. The program culminates in certification as a nuclear operator at one of the Navy's two Nuclear Power Training Units (NPTU). The curriculum at NPS is widely regarded as one of the most grueling in the U.S. military. Overview Prospective enlisted enrollees in the Nuclear Power Program must have qualifying line scores on the ASVAB exam, may need to pass the NFQT (Nuclear Field Qualification Test), and must undergo a NACLC investigation to attain a "Secret" security clearance. Additionally, each applicant must pass an interview with the Advanced Programs Coordinator in the associated recruiting district. All officer students have had college-level courses in calculus and calculus-based physics. Acceptance to the officer program requires successful completion of interviews at Naval Reactors in Washington, D.C., and final approval via a direct interview with the Director, Naval Nuclear Propulsion, a unique eight-year, four-star admiral position originally held by the program's founder, Admiral Hyman G. Rickover. Women were allowed into the Naval Nuclear Field from 1978 until 1980, when the Navy again restricted the program to men. With the repeal of the Combat Exclusion Law in the 1994 Defense Authorization Act, and the decision to open combatant ships to women, the Navy once again began accepting women into NNPS for duty aboard nuclear-powered surface combatant ships. In 2010 the Navy lifted the ban on women on submarines, and one year later the first female officers reported aboard US Navy submarines. The first female enlisted sailors reported aboard submarines in 2015. In November 2015, the first female Reactor Officer, Commander Erica L. Hoffmann, took leadership of a Reactor Department aboard an aircraft carrier. CVN Reactor Officer is the most senior shipboard nuclear officer position in the Navy, with a prerequisite of completing a commanding officer tour on board a non-nuclear surface ship before the officer can receive a Reactor Officer assignment. Following graduation from Boot Camp, enlisted personnel proceed to Nuclear Field "A" School for training in rating as Machinist's Mate (MMN), Electrician's Mate (EMN), or Electronics Technician (ETN). The active duty obligation is six years. Applicants must enlist for four years and concurrently execute an agreement to extend their enlistment by 24 months to accommodate the additional training involved. Personnel in the Nuclear Field program will be enlisted in paygrade E-3. 
Advancement to paygrade E-4 is authorized only after personnel complete all advancement-in-rate requirements (to include minimum time in rate) and Class "A" School, provided eligibility in the Nuclear Field program is maintained. If Nuclear Field Class "A" School training is not completed, the member may be administratively reduced to E-2 or E-1, depending on the member's time in rate at the date of disenrollment. Upon acceptance of automatic advancement to paygrade E-4, the member will be obligated for 12 months of the two-year extension, in addition to the four-year enlistment, regardless of whether or not advanced training (i.e. NPS/NPTU) is completed. They then continue to Nuclear Power School for an additional six months of college-level classroom instruction. Graduates of the Nuclear Power School proceed to an additional six months of training at a Nuclear Power Training Unit (NPTU). This training involves the operation and maintenance of nuclear reactor plants and steam plants. Graduates of NPTU are qualified as nuclear operators, and most graduates immediately receive assignments to serve on submarines and aircraft carriers in the fleet. Upon completion of training at NPS and NPTU, the sailor is obligated to the remaining 12 months of the two-year extension, resulting in a total of six years of active duty obligation for those who complete the program. A few students from each NPTU class are selected as Junior Staff Instructors (JSI) based on top academic performance throughout the program, evaluation for aptitude to be an instructor, and willingness to incur an additional 24-month service obligation (for a total of eight years on active duty). JSIs receive additional instructor training at the NPTU and then train students themselves for 24 months before eventually continuing on to serve in the fleet. Additionally, a few MMN graduates from each NPTU class are selected to undergo further training in the Engineering Laboratory Technician (ELT) specialty. ELTs are responsible for the collection, analysis, and control of reactor plant and steam generator water chemistry, as well as radiological analysis and controls. Upon completion of ELT training, graduates are given assignments to the fleet. History of locations After Admiral Rickover became chief of a new section in the Bureau of Ships, the Nuclear Power Division, he began work with Alvin M. Weinberg, the Oak Ridge National Laboratory (ORNL) director of research, to initiate and develop the Oak Ridge School of Reactor Technology (ORSORT) and to begin the design of the pressurized water reactor for submarine propulsion. Training for fleet operators was subsequently conducted by civilian engineers at Idaho Falls, Idaho (1955-1958) and West Milton, New York (1955-1956). The first formal Nuclear Power School was established in New London, Connecticut in January 1956 with a pilot course offered for six officers and fourteen enlisted men. This school remained in use through Class 62-2 in 1962, after which the school was relocated to Bainbridge, Maryland. Subsequent locations were United States Naval Training Center Bainbridge, Maryland (1962-1976); Mare Island Naval Shipyard, California (1958-1976); Naval Training Center Orlando, Florida (1976-1998); and its current location, Goose Creek, South Carolina. In 1986, Nuclear Field A School was established in Orlando to provide nuclear in-rate training to Sailors prior to attending Nuclear Power School. 
In 1993, in response to the Base Realignment and Closure-directed closure of NTC Orlando by the end of Fiscal Year 1999, the Nuclear Field A School and Nuclear Power School were joined to create Naval Nuclear Power Training Command. A move from Orlando, Florida to Goose Creek, South Carolina began in May 1998 and was completed in January 1999. Construction of the new command allowed Nuclear Field A School and Nuclear Power School to be located in the same building. Many improvements were added to the command to improve each sailor's quality of life and the effectiveness of training. The Bachelor Enlisted Quarters include microwaves and refrigerators along with semiprivate rooms joined by a common bath. The complex also includes a galley, recreation building, and recreation fields conveniently located for the sailors' use. At full capacity, the NNPTC complex can accommodate over 3,600 students and 480 staff members. Naval Health Clinic Charleston is located across NNPTC Circle from the NNPTC site and is a short walk from the main Rickover Center building. College credit (enlisted training) The American Council on Education recommends an average of 60-80 semester-hours of college credit, in the lower-division baccalaureate/associate degree category, for completion of the entire curriculum including both Nuclear Field "A" School and Naval Nuclear Power School. The variation in total amount depends on the specific pipeline completed: MM, EM, or ET. Further, under the Servicemembers Opportunity Colleges degree program for the Navy (SOCNAV), the residency requirements at these civilian institutions are reduced to only 10-25%, allowing a student to take as few as nine units of coursework (typically three courses) through the degree-granting institution to complete an Associate in Applied Science degree in nuclear engineering technology, or as many as 67 units to complete a bachelor's degree in Nuclear Engineering Technology or Nuclear Energy Engineering Technology. The following select colleges offer college credit and degree programs to graduates of the U.S. Naval Nuclear Power School (NNPS): Thomas Edison State University School of Applied Science and Technology: the Bachelor of Science in Applied Science and Technology (BSAST) degree is designed for graduates of the U.S. Navy nuclear power program, and degrees granted after October 2010 are accredited by the Technology Accreditation Commission (TAC) of the Accreditation Board for Engineering and Technology (ABET). Old Dominion University's Batten College of Engineering & Technology offers a Bachelor of Science in Engineering Technology accredited by the Technology Accreditation Commission (TAC) of the Accreditation Board for Engineering and Technology (ABET); the Nuclear Engineering Technology Option of the Mechanical Engineering Technology major is a special program available to graduates of the U.S. Navy Nuclear Power School. Excelsior College School of Business and Technology's Bachelor of Science in Nuclear Engineering Technology degree: the Excelsior College baccalaureate degree program in nuclear engineering technology is also accredited by TAC of ABET. Rensselaer Polytechnic Institute's Department of Mechanical, Aerospace, and Nuclear Engineering, in cooperation with the Education for Working Professionals Office and the U.S. Navy, has developed undergraduate degree programs in nuclear engineering for graduates of the U.S. Navy Nuclear Power Training School. 
Nuclear Power Training Unit The Kesselring Site in New York has the longest operational history of the NPTUs. In 2012 it celebrated the 50,000th sailor qualified at the site. However, two other NPTU sites also provided operational training during the Cold War. From the early 1950s to the mid-1990s, Naval Reactors Facility (NRF) in Idaho trained nearly 40,000 Navy personnel in surface and submarine nuclear power plant operations with three nuclear propulsion prototypes: A1W, S1W, and S5G. From 1959 until 1993, over 14,000 Naval operators were trained at the S1C prototype at Windsor, Connecticut. The current Nuclear Power Training Unit is located at the Charleston Naval Weapons Station, Joint Base Charleston (>17,000 acres, 27 square miles), which is home to the unit's moored training ships (MTS). The unit's first MTS was inactivated in early 2021. References Nuclear technology Nuclear organizations Educational institutions established in the 1950s Military education and training in the United States United States Navy schools and training Education in Goose Creek, South Carolina
Nuclear Power School
[ "Physics", "Engineering" ]
2,220
[ "Nuclear technology", "Nuclear organizations", "Energy organizations", "Nuclear physics" ]
24,009,505
https://en.wikipedia.org/wiki/C27H43NO8
{{DISPLAYTITLE:C27H43NO8}} The molecular formula C27H43NO8 (molar mass: 509.63 g/mol, exact mass: 509.2989 u) may refer to: Colforsin Veracevine
C27H43NO8
[ "Chemistry" ]
59
[ "Isomerism", "Set index articles on molecular formulas" ]
24,009,510
https://en.wikipedia.org/wiki/C11H14O3
{{DISPLAYTITLE:C11H14O3}} The molecular formula C11H14O3 (molar mass: 194.23 g/mol) may refer to: Butylparaben tert-Butyl peroxybenzoate Zingerone (also called vanillylacetone) Methoxyeugenol Molecular formulas
C11H14O3
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,009,615
https://en.wikipedia.org/wiki/Fibronectin%20type%20III%20domain
The fibronectin type III domain is an evolutionarily conserved protein domain that is widely found in animal proteins. The fibronectin protein, in which this domain was first identified, contains 16 copies of the domain. The domain is about 100 amino acids long and possesses a beta-sandwich structure. Of the three fibronectin-type domains, type III is the only one without disulfide bonding present. Fibronectin domains are found in a wide variety of extracellular proteins. They are widely distributed in animal species, but are also found sporadically in yeast, plant and bacterial proteins. Human proteins containing this domain ABI3BP; ANKFN1; ASTN2; AXL; BOC; BZRAP1; C20orf75; CDON; CHL1; CMYA5; CNTFR; CNTN1; CNTN2; CNTN3; CNTN4; CNTN5; CNTN6; COL12A1; COL14A1; COL20A1; COL7A1; CRLF1; CRLF3; CSF2RB; CSF3R; DCC; DSCAM; DSCAML1; EBI3; EGFLAM; EPHA1; EPHA10; EPHA2; EPHA3; EPHA4; EPHA5; EPHA6; EPHA7; EPHA8; EPHB1; EPHB2; EPHB3; EPHB4; EPHB6; EPOR; FANK1; FLRT1; FLRT2; FLRT3; FN1; FNDC1; FNDC3A; FNDC3B; FNDC4; FNDC5; FNDC7; FNDC8; FSD1; FSD1L; FSD2; GHR; HCFC1; HCFC2; HUGO; IFNGR2; IGF1R; IGSF22; IGSF9; IGSF9B; IL4R; IL11RA; IL12B; IL12RB1; IL12RB2; IL20RB; IL23R; IL27RA; IL31RA; IL6R; IL6ST; IL7R; INSR; INSRR; ITGB4; KAL1; KALRN; L1CAM; LEPR; LIFR; LRFN2; LRFN3; LRFN4; LRFN5; LRIT1; LRRN1; LRRN3; MERTK; MID1; MID2; MPL; MYBPC1; MYBPC2; MYBPC3; MYBPH; MYBPHL; MYLK; MYOM1; MYOM2; MYOM3; NCAM1; NCAM2; NEO1; NFASC; NOPE; NPHS1; NRCAM; OBSCN; OBSL1; OSMR; PHYHIP; PHYHIPL; PRLR; PRODH2; PTPRB; PTPRC; PTPRD; PTPRF; PTPRG; PTPRH; PTPRJ; PTPRK; PTPRM; PTPRO; PTPRS; PTPRT; PTPRU; PTPRZ1; PTPsigma; PUNC; RIMBP2; ROBO1; ROBO2; ROBO3; ROBO4; ROS1; SDK1; SDK2; SNED1; SORL1; SPEG; TEK; TIE1; TNC; TNN; TNR; TNXB; TRIM36; TRIM42; TRIM46; TRIM67; TRIM9; TTN; TYRO3; UMODL1; USH2A; VASN; VWA1; dJ34F7.1; fmi See also Monobodies are engineered (synthetic) antibody mimetics based on a fibronectin type III domain (specifically, the 10th FN3 domain of human fibronectin). Monobodies feature either diversified loops or diversified strands of a flat beta-sheet surface, which serve as interaction epitopes. Monobody binders have been selected against a wide variety of target molecules, and have expanded beyond the potential range of binding interfaces observed in both natural and synthetic antibodies. References Protein domains Single-pass transmembrane proteins
Fibronectin type III domain
[ "Biology" ]
936
[ "Protein domains", "Protein classification" ]
18,744,044
https://en.wikipedia.org/wiki/Chromosome%20landing
Chromosome landing is a genetic technique used to identify and isolate clones in a genetic library. Chromosome landing reduces the problem of analyzing large and/or highly repetitive genomes by minimizing the need for chromosome walking. It is based on the principle that the expected average between-marker distance can be smaller than the average insert length of a clone library containing the gene of interest. From the abstract: The strategy of chromosome walking is based on the assumption that it is difficult and time consuming to find DNA markers that are physically close to a gene of interest. Recent technological developments invalidate this assumption for many species. As a result, the mapping paradigm has now changed such that one first isolates one or more DNA marker(s) at a physical distance from the targeted gene that is less than the average insert size of the genomic library being used for clone isolation. The DNA marker is then used to screen the library and isolate (or 'land' on) the clone containing the gene, without any need for chromosome walking and its associated problems. Chromosome landing, together with the technology that has made it possible, is likely to become the main strategy by which map-based cloning is applied to isolate both major genes and genes underlying quantitative traits in plant species. See also Primer walking References Molecular biology Genetic engineering
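The between-marker distance argument lends itself to a quick back-of-the-envelope calculation. The Python sketch below uses illustrative assumptions only, not numbers from the cited abstract; it estimates how many mapped markers are expected to lie within one insert length of a target gene:

def expected_markers_within_insert(markers_per_kb: float, insert_size_kb: float) -> float:
    # Markers may fall on either side of the gene, hence the factor of 2.
    return 2 * insert_size_kb * markers_per_kb

# Hypothetical: one marker per 50 kb and a library with 100 kb inserts.
# An expectation well above 1 means landing is feasible without walking.
print(expected_markers_within_insert(1 / 50, 100))  # 4.0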
Chromosome landing
[ "Chemistry", "Engineering", "Biology" ]
268
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Genetic engineering", "Molecular biology stubs", "Molecular biology", "Biochemistry" ]
18,745,015
https://en.wikipedia.org/wiki/Fractal%20analysis
Fractal analysis is the assessment of the fractal characteristics of data. It consists of several methods for assigning a fractal dimension and other fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including topography, natural geometric objects, ecology and aquatic sciences, sound, market fluctuations, heart rates, frequency domain in electroencephalography signals, digital images, molecular motion, and data science. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. Fractal analysis is valuable in expanding our knowledge of the structure and function of various systems, and as a potential tool to mathematically assess novel areas of study. Fractal calculus, a generalization of ordinary calculus, has also been formulated. Underlying principles Fractals have fractional dimensions, which are a measure of complexity that indicates the degree to which the objects fill the available space. The fractal dimension measures the change in "size" of a fractal set with the changing observational scale, and is not limited to integer values. This is possible given that a smaller section of the fractal resembles the entirety, showing the same statistical properties at different scales. This characteristic is termed scale invariance, and can be further categorized as self-similarity or self-affinity, the latter scaled anisotropically (depending on the direction). Whether the view of the fractal is expanding or contracting, the structure remains the same and appears equivalently complex. Fractal analysis uses these underlying properties to help in the understanding and characterization of complex systems. It is also possible to extend the use of fractals to systems that lack a single characteristic time scale or pattern. Further information: Fractal geometry Types of fractal analysis There are various types of fractal analysis, including box counting (a minimal implementation is sketched after the Animal behaviour section below), lacunarity analysis, mass methods, and multifractal analysis. A common feature of all types of fractal analysis is the need for benchmark patterns against which to assess outputs. These can be acquired with various types of fractal-generating software capable of producing benchmark patterns suitable for this purpose, which generally differ from software designed to render fractal art. Other types include detrended fluctuation analysis and the Hurst absolute value method, which estimate the Hurst exponent. Using more than one approach is suggested, in order to compare results and increase the robustness of one's findings. Applications Ecology and evolution Unlike theoretical fractal curves, which can be easily measured and whose underlying mathematical properties can be calculated, natural systems are sources of heterogeneity and generate complex space-time structures that may only demonstrate partial self-similarity. Using fractal analysis, it is possible to analyze and recognize when features of complex ecological systems are altered, since fractals are able to characterize the natural complexity in such systems. Thus, fractal analysis can help to quantify patterns in nature and to identify deviations from these natural sequences. It helps to improve our overall understanding of ecosystems and to reveal some of the underlying structural mechanisms of nature. 
For example, it was found that the structure of an individual tree’s xylem follows the same architecture as the spatial distribution of the trees in the forest, and that the distribution of the trees in the forest shared the same underlying fractal structure as the branches, scaling identically to the point of being able to use the pattern of the trees’ branches mathematically to determine the structure of the forest stand. The use of fractal analysis for understanding structures, and spatial and temporal complexity in biological systems has already been well studied and its use continues to increase in ecological research. Despite its extensive use, it still receives some criticism. Animal behaviour Patterns in animal behaviour exhibit fractal properties on spatial and temporal scales. Fractal analysis helps in understanding the behaviour of animals and how they interact with their environments on multiple scales in space and time. Various animal movement signatures in their respective environments have been found to demonstrate spatially non-linear fractal patterns. This has generated ecological interpretations such as the Lévy Flight Foraging hypothesis, which has proven to be a more accurate description of animal movement for some species. Spatial patterns and animal behaviour sequences in fractal time have an optimal complexity range, which can be thought of as the homeostatic state on the spectrum where the complexity sequence should regularly fall. An increase or a loss in complexity, either becoming more stereotypical or conversely more random in their behaviour patterns, indicates that there has been an alteration in the functionality of the individual. Using fractal analysis, it is possible to examine the movement sequential complexity of animal behaviour and to determine whether individuals are experiencing deviations from their optimal range, suggesting a change in condition. For example, it has been used to assess welfare of domestic hens, stress in bottlenose dolphins in response to human disturbance, and parasitic infection in Japanese macaques and sheep. The research is furthering the field of behavioural ecology by simplifying and quantifying very complex relationships. When it comes to animal welfare and conservation, fractal analysis makes it possible to identify potential sources of stress on animal behaviour, stressors that may not always be discernible through classical behaviour research. This approach is more objective than classical behaviour measurements, such as frequency-based observations that are limited by the counts of behaviours, but is able to delve into the underlying reason for the behaviour. Another important advantage of fractal analysis is the ability to monitor the health of wild and free-ranging animal populations in their natural habitats without invasive measurements. 
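As a concrete illustration of box counting, the type of fractal analysis mentioned earlier, the Python sketch below estimates a fractal dimension for a 2-D binary pattern. It is a minimal illustrative implementation, not the method of any specific study cited here:

import numpy as np

def box_count_dimension(image: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        h, w = image.shape
        # Count boxes of side s that contain at least one occupied cell.
        n = sum(
            image[i:i + s, j:j + s].any()
            for i in range(0, h, s)
            for j in range(0, w, s)
        )
        counts.append(n)
    # The box-counting dimension is the slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square should give a dimension near 2.
print(box_count_dimension(np.ones((64, 64), dtype=bool)))  # ~2.0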
Applications of fractal analysis include: Heart rate analysis Human gait, balance, and activity Human anatomy Diagnostic imaging Cancer research Fractal analysis of complex networks Classification of histopathology slides in medicine Fractal landscape or coastline complexity Electrical engineering Enzyme/enzymology (Michaelis-Menten kinetics) Generation of new music Generation of various art forms Search and rescue Signal and image compression Urban growth Neuroscience Diagnostic imaging Pathology Geology Geography Archaeology Seismology Soil studies Computer and video game design, especially computer graphics for organic environments and as part of procedural generation Fractography and fracture mechanics Fractal antennas (small-size antennas using fractal shapes) Small angle scattering theory of fractally rough systems Generation of patterns for camouflage, such as MARPAT Digital sundial Technical analysis of price series (see Elliott wave principle) Fractal calculus See also Multifractal Rescaled range Analysis on fractals References Further reading Fractals and Fractal Analysis Fractal analysis Benoit – Fractal Analysis Software Fractal Analysis Methods for Human Heartbeat and Gait Dynamics Chaos theory Dynamical systems Dimension theory Fractals
Fractal analysis
[ "Physics", "Mathematics" ]
1,416
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations", "Mechanics", "Dynamical systems" ]
287,061
https://en.wikipedia.org/wiki/Pagoda
A pagoda is a tiered tower with multiple eaves common to Thailand, Cambodia, Nepal, China, Japan, Korea, Myanmar, Vietnam, and other parts of Asia. Most pagodas were built to have a religious function, most often Buddhist, but sometimes Taoist, and were often located in or near viharas. The pagoda traces its origins to the stupa, while its design was developed in ancient India. Chinese pagodas are a traditional part of Chinese architecture. In addition to religious use, since ancient times Chinese pagodas have been praised for the spectacular views they offer, and many classical poems attest to the joy of scaling pagodas. The oldest and tallest pagodas were built of wood, but most that survived were built of brick or stone. Some pagodas are solid with no interior. Hollow pagodas have no higher floors or rooms, but the interior often contains an altar or a smaller pagoda, as well as a series of staircases for the visitor to climb to see the view from an opening on one side of each tier. Most have between three and 13 tiers (almost always an odd number) and the classic gradual tiered eaves. In some countries, the term may refer to other religious structures. In Vietnam and Cambodia, due to French translation, the English term pagoda is a more generic term referring to a place of worship, although pagoda is not an accurate word to describe a Buddhist vihara. The architectural structure of the stupa has spread across Asia, taking on many diverse forms specific to each region. Many Philippine bell towers are highly influenced by pagodas through Chinese workers hired by the Spaniards. Etymology One proposed etymology is from a South Chinese pronunciation of the term for an eight-cornered tower, reinforced by the name of a famous pagoda encountered by many early European visitors to China, the "Pázhōu tǎ", standing just south of Guangzhou at Whampoa Anchorage. Another proposed etymology is Persian butkada, from but, "idol", and kada, "temple, dwelling". Yet another etymology is from the Sinhala word dāgaba, derived from Sanskrit dhātugarbha or Pali dhātugabbha: "relic womb/chamber" or "reliquary shrine", i.e. a stupa, by way of Portuguese. History The origin of the pagoda can be traced to the stupa (3rd century BCE). The stupa, a dome-shaped monument, was used as a commemorative monument to house sacred relics and writings. In East Asia, the architecture of Chinese towers and Chinese pavilions blended into pagoda architecture, eventually also spreading to Southeast Asia. Their construction was popularized by the efforts of Buddhist missionaries, pilgrims, rulers, and ordinary devotees to honor Buddhist relics. Japan has a total of 22 five-storied timber pagodas constructed before 1850. China The earliest styles of Chinese pagodas were square-base and circular-base, with octagonal-base towers emerging in the 5th–10th centuries. The highest Chinese pagoda from the pre-modern age is the Liaodi Pagoda of Kaiyuan Monastery, Dingxian, Hebei, completed in the year 1055 AD under Emperor Renzong of Song and standing at a total height of 84 m (275 ft). Although it no longer stands, the tallest pre-modern pagoda in Chinese history was a pagoda at Chang'an built by Emperor Yang of Sui, and possibly the short-lived 6th-century Yongning Pagoda (永宁宝塔) of Luoyang, at roughly 137 metres. The tallest pre-modern pagoda still standing is the Liaodi Pagoda. In April 2007 a new wooden pagoda at the Tianning Temple of Changzhou was opened to the public; it is the tallest in China, standing 154 m (505 ft). 
Symbolism and geomancy Chinese iconography is noticeable in Chinese and other East Asian pagoda architectures. Also prominent is Buddhist iconography, such as the image of the Shakyamuni and Gautama Buddha in the abhaya mudra. In an article on Buddhist elements in Han dynasty art, Wu Hung suggests that in these temples, Buddhist symbolism was fused with native Chinese traditions into a unique system of symbolism. Some believed reverence at pagodas could bring luck to students taking the Chinese civil service examinations. When a pagoda of Yihuang County in Fuzhou collapsed in 1210, local inhabitants believed the disaster correlated with the recent failure of many exam candidates in the prefectural examinations. The pagoda was rebuilt in 1223 and had a list inscribed on it of the recently successful examination candidates, in hopes that it would reverse the trend and win the county supernatural favor. Architecture Pagodas come in many different sizes. Taller pagodas often attract lightning strikes, inspiring the tradition that the finial decoration at the top of the structure can seize demons. Today many pagodas have been fitted with wires making the finial into a lightning rod. Wooden pagodas possess certain characteristics thought to resist earthquake damage. These include the friction damping and sliding effect of the complex wooden dougong joints, the structural isolation of floors, the effects of wide eaves analogous to a balancing toy, and the shinbashira effect, in which the central column, loosely coupled rather than rigidly fastened to the rest of the superstructure, acts as a damper. Pagodas traditionally have an odd number of levels, a notable exception being the eighteenth-century orientalist pagoda designed by Sir William Chambers at Kew Gardens in London. The pagodas in the Himalayas derive from Newari architecture and are very different from Chinese and Japanese styles. Construction materials Wood During the Southern and Northern dynasties, pagodas were mostly built of wood, as were other ancient Chinese structures. Wooden pagodas are resistant to earthquakes, and no Japanese pagoda has been destroyed by an earthquake, but they are prone to fire, natural rot, and insect infestation. Examples of wooden pagodas: White Horse Pagoda at White Horse Temple, Luoyang Futuci Pagoda in Xuzhou, built in the Three Kingdoms period (220–265) Many of the pagodas in Stories About Buddhist Temples in Luoyang, a Northern Wei text The literature of subsequent eras also provides evidence of the domination of wooden pagoda construction; the famous Tang dynasty poet Du Mu once wrote of such towers in verse. The oldest standing fully wooden pagoda in China today is the Pagoda of Fugong Temple in Ying County, Shanxi, built in the 11th century during the Song/Liao dynasty (see Song architecture). Transition to brick and stone During the Northern Wei and Sui dynasties (386–618) experiments began with the construction of brick and stone pagodas. Even at the end of the Sui, however, wood was still the most common material. For example, Emperor Wen of the Sui dynasty (reigned 581–604) once issued a decree for all counties and prefectures to build pagodas to a set of standard designs; however, since they were all built of wood, none have survived. From this early period only the Songyue Pagoda survives, a circular-based pagoda built out of brick in 523 AD. Brick The earliest extant brick pagoda is the 40-metre-tall Songyue Pagoda in Dengfeng County, Henan. This curved, circle-based pagoda was built in 523 during the Northern Wei dynasty, and has survived for 15 centuries. 
Much like the later pagodas found during the following Tang dynasty, this pagoda featured tiers of eaves encircling its frame, as well as a spire crowning the top. Its walls are 2.5 m thick, with a ground floor diameter of 10.6 m. Another early brick pagoda is the Sui dynasty Guoqing Pagoda, built in 597. Stone The earliest large-scale stone pagoda is the Four Gates Pagoda at Licheng, Shandong, built in 611 during the Sui dynasty. Like the Songyue Pagoda, it also features a spire at its top, and is built in the pavilion style. Brick and stone One of the earliest brick and stone pagodas was a three-storey construction built in the (first) Jin dynasty (266–420) by Wang Jun of Xiangyang. However, it has since been destroyed. Brick and stone went on to dominate Tang, Song, Liao and Jin dynasty pagoda construction. An example is the Giant Wild Goose Pagoda (652 AD), built during the early Tang dynasty. The Porcelain Pagoda of Nanjing has been one of the most famous brick and stone pagodas in China throughout history. De-emphasis over time Pagodas, in keeping with the tradition of the White Horse Temple, were generally placed in the center of temples until the Sui and Tang dynasties. During the Tang, the importance of the main hall was elevated and the pagoda was moved beside the hall, or out of the temple compound altogether. In the early Tang, Daoxuan wrote a Standard Design for Buddhist Temple Construction in which the main hall replaced the pagoda as the center of the temple. The design of temples was also influenced by the use of traditional Chinese residences as shrines, after they were philanthropically donated by the wealthy or the pious. In such pre-configured spaces, building a central pagoda might not have been either desirable or possible. In the Song dynasty (960–1279), the Chan (Zen) sect developed a new 'seven part structure' for temples. The seven parts (the Buddha hall, dharma hall, monks' quarters, depository, gate, pure land hall and toilet facilities) completely exclude pagodas, and can be seen to represent the final triumph of the traditional Chinese palace/courtyard system over the original central-pagoda tradition established 1000 years earlier by the White Horse Temple in AD 67. Although they were built outside of the main temple itself, large pagodas in the tradition of the past were still built. These include the two Ming dynasty pagodas of Famen Temple and the Chongwen Pagoda in Jingyang of Shaanxi. A prominent, later example of converting a palace to a temple is Beijing's Yonghe Temple, which was the residence of the Yongzheng Emperor before he ascended the throne. It was donated for use as a lamasery after his death in 1735. Styles of eras Han dynasty Tower architecture of the Han dynasty era predates Buddhist influence and the full-fledged Chinese pagoda. Michael Loewe writes that during the Han dynasty (202 BC – 220 AD), multi-storied towers were erected for religious purposes, as astronomical observatories, as watchtowers, or as ornate buildings that were believed to attract the favor of spirits, deities, and immortals. 
Sui and Tang Pagodas built during the Sui and Tang dynasties usually had a square base, with a few exceptions such as the Daqin Pagoda. Song, Liao, Jin, Yuan Pagodas of the Five Dynasties, Northern and Southern Song, Liao, Jin, and Yuan dynasties incorporated many new styles, with a greater emphasis on hexagonal and octagonal bases for pagodas. Ming and Qing Pagodas in the Ming and Qing dynasties generally inherited the styles of previous eras, although there were some minor variations. Notable pagodas Tiered towers with multiple eaves: Dâu Temple, Bắc Ninh, Vietnam, built in 187 Changu Narayan Temple, Bhaktapur, Nepal, originally built in the 4th century CE, rebuilt in 1702 Pashupatinath Temple, Kathmandu, Nepal, built in the 5th century Trấn Quốc Pagoda, Hanoi, Vietnam, built in 545 Songyue Pagoda on Mount Song, Henan, China, built in 523 Mireuksa at Iksan, Korea, built in the early 7th century Bunhwangsa at Gyeongju, Korea, built in 634 Xumi Pagoda at Zhengding, Hebei, China, built in 636 Daqin Pagoda in China, built in 640 Hwangnyongsa Wooden nine-story pagoda on Hwangnyongsa, Gyeongju, Korea, built in 645 Pagoda at Hōryū-ji, Ikaruga, Nara, Japan, built in the 7th century, one of the oldest wooden buildings in the world Giant Wild Goose Pagoda, made of brick, built in Xi'an, China in 704 Small Wild Goose Pagoda, built in Xi'an, China in 709 Seokgatap on Bulguksa, Gyeongju, South Korea, built in 751, made of granite. In 1966, the Mugujeonggwang Great Dharani Sutra, the oldest extant woodblock print, was found with several other treasures in the second story of this pagoda. Dabotap on Bulguksa, Gyeongju, Korea, built in 751 Tiger Hill Pagoda, built in 961 outside of Suzhou, China Lingxiao Pagoda at Zhengding, Hebei, China, built in 1045 Iron Pagoda of Kaifeng, built in 1049, during the Song dynasty Liaodi Pagoda of Dingzhou, built in 1055 during the Song dynasty Pagoda of Fogong Temple, built in 1056 in Ying County, Shanxi, China Pizhi Pagoda of Lingyan Temple, Shandong, China, 11th century Beisi Pagoda at Suzhou, Jiangsu, China, built in 1162 Liuhe Pagoda (Six Harmonies Pagoda) of Hangzhou, Zhejiang, China, built in 1165 during the Song dynasty Ichijō-ji, Kasai, Hyōgo, Japan, built in 1171 Bình Sơn Pagoda of Vĩnh Khánh Temple, Vĩnh Phúc, Vietnam, built in the Trần dynasty (about the 13th century) Phổ Minh pagoda of Phổ Minh Temple, Vietnam, built in 1305 Prashar Lake temple, dedicated to the Rishi Prashar, the patron of the Mandi region in India. The temple was constructed by Raja Ban Sen in the 14th century, with the rishi being present in the form of a pindi stone. The Porcelain Tower of Nanjing, built between 1402 and 1424, a wonder of the medieval world in Nanjing, China. 
Tsui Sing Lau Pagoda in Ping Shan, Hong Kong, built in 1486 Bajrayogini Temple, Kathmandu, Nepal, built in the 16th century by Pratap Malla Taleju Temple, a temple in Kathmandu, Nepal, built in 1564 Gokarneshwor Mahadev temple, Nepal, built in 1582 Pazhou Pagoda on Whampoa (Huangpu) Island, Guangzhou (Canton), China, built in 1600 Phước Duyên Pagoda of Thiên Mụ Temple, in Huế, Vietnam, built in 1844 on the order of the Thiệu Trị Emperor Palsangjeon, a five-story pagoda at Beopjusa, Korea, built in 1605 Tō-ji, the tallest wooden structure in Kyoto, Japan, built in 1644 Nyatapola at Bhaktapur, Kathmandu Valley, built during 1701–1702 The Great Pagoda at Kew Gardens, London, UK, built in 1762 Reading Pagoda of Reading, Pennsylvania, built in 1908 Kek Lok Si's main pagoda in Penang, Malaysia, built in 1930, exhibits a combination of Chinese, Burmese and Thai Buddhist architecture Seven-storey Pagoda in the Chinese Garden at Jurong East, Singapore, built in 1975 Dragon and Tiger Pagodas in Kaohsiung, Taiwan, built in 1976 The pagoda of the Japan Pavilion at Epcot, Florida, built in 1982 Pagoda of Tianning Temple, the tallest pagoda in the world since its completion in April 2007, stands at 153.7 m in height. Nepalese Peace Pagoda in Brisbane, Australia, built for the World Expo '88 Pagoda Avalokitesvara, Indonesia, the tallest pagoda in Indonesia, standing at 45 meters, built in 2004. Sun and Moon Pagodas in Guilin, Guangxi, China, twin pagodas on Shan Lake, originally built in the 10th century and reconstructed using historical description on the original foundation in 2001 Stupas called "pagodas": Global Vipassana Pagoda, the largest unsupported domed stone structure in the world Mingun Pahtodawgyi, a monumental uncompleted stupa begun by King Bodawpaya in 1790. If completed, it would be the largest in the world at 150 meters. Pha That Luang, the holiest wat, pagoda, and stupa in Laos, in Vientiane Phra Pathommachedi, the highest pagoda or stupa in Thailand, in Nakhon Pathom, Thailand Shwedagon Pagoda, a gilded pagoda and stupa located in Yangon, Myanmar. It is the most sacred Buddhist pagoda for the Burmese, with relics of the past four Buddhas enshrined within. Shwezigon Pagoda in Nyaung-U, Myanmar. Completed during the reign of King Kyanzittha in 1102, it is a prototype of Burmese stupas. Uppatasanti Pagoda, a 325-foot tall landmark in Naypyidaw, Myanmar, built from 2006 to 2009, which houses a Buddha tooth relic Places called "pagoda" but which are not tiered structures with multiple eaves: The One Pillar Pagoda in Hanoi, Vietnam, is an icon of Vietnamese culture. It was built in 1049, destroyed, and rebuilt in 1954. Structures that evoke pagoda architecture: The Dragon House of Sanssouci Park, an eighteenth-century German attempt at imitating Chinese architecture The Panasonic Pagoda, or Pagoda Tower, at the Indianapolis Motor Speedway. This 13-story pagoda, used as the control tower for races such as the Indy 500, has been transformed several times since it was first built in 1913. 
Jin Mao Tower in Shanghai, built between 1994 and 1999 Petronas Towers in Kuala Lumpur, the tallest buildings in the world from 1998 to 2004 Taipei 101 in Taiwan, record setter for height (508 m) in 2004 and currently (2021) the world's tenth tallest completed building Structures not generally thought of as pagodas, but which have some pagoda-like characteristics: The Hall of Prayer for Good Harvests at the Temple of Heaven Wongudan Altar in Korea See also Architecture of the Song dynasty Cetiya Chaitya Pyatthat Kath-Kuni architecture Chinese architecture Gongbei – Chinese Muslim mausoleum with pagoda-style architecture Japanese pagoda List of pagodas in Beijing Chaoyang North Tower Guanghui Temple Huatai Pagoda Notes References Benn, Charles (2002). China's Golden Age: Everyday Life in the Tang Dynasty. Oxford: Oxford University Press. . Brook, Timothy. (1998). The Confusions of Pleasure: Commerce and Culture in Ming China. Berkeley: University of California Press. Fazio, Michael W., Moffett, Marian and Wodehouse, Lawrence. A World History of Architecture. Published 2003. McGraw-Hill Professional. . Fu, Xinian. (2002). "The Three Kingdoms, Western and Eastern Jin, and Northern and Southern Dynasties," in Chinese Architecture, 61–90. Edited by Nancy S. Steinhardt. New Haven: Yale University Press. . Govinda, A. B. Psycho-cosmic symbolism of the Buddhist stupa. 1976, Emeryville, California. Dharma Publications. Hymes, Robert P. (1986). Statesmen and Gentlemen: The Elite of Fu-Chou, Chiang-Hsi, in Northern and Southern Sung. Cambridge: Cambridge University Press. . Kieschnick, John. The Impact of Buddhism on Chinese Material Culture. Published 2003. Princeton University Press . . Loewe, Michael. (1968). Everyday Life in Early Imperial China during the Han Period 202 BC–AD 220. London: B.T. Batsford Ltd.; New York: G.P. Putnam's Sons. Steinhardt, Nancy Shatzman (1997). Liao Architecture. Honolulu: University of Hawaii Press. External links Oriental architecture.com Culzean Pagoda (Monkey House) – the only stone built pagoda in Britain "Why so few Japanese pagodas have ever fallen down" (The Economist) Chinese pagoda gallery (211 pics) The Bei-Hai (Beijing), The Flower Pagoda (Guangdong), The Great Gander Pagoda (Xian), The White Pagoda (Liaoyang) The Songyue Pagoda at China.org.cn Structure of Pagodas, including the underground palace, base, body and steeple, at China.org.cn The Herbert Offen Research Collection of the Phillips Library at the Peabody Essex Museum Buddhist buildings Buddhist temples Towers Buddhist architecture Hindu buildings Hindu temples Hindu architecture Indian architectural history Chinese architectural history Japanese architectural history Architecture in Korea Architecture in Vietnam Building types Architecture in Nepal Religious towers
Pagoda
[ "Engineering" ]
4,207
[ "Structural engineering", "Towers" ]
287,137
https://en.wikipedia.org/wiki/Snub%20dodecahedron
In geometry, the snub dodecahedron, or snub icosidodecahedron, is an Archimedean solid, one of the thirteen convex isogonal nonprismatic solids constructed from two or more types of regular polygon faces. The snub dodecahedron has 92 faces (the most of the 13 Archimedean solids): 12 are pentagons and the other 80 are equilateral triangles. It also has 150 edges and 60 vertices. It has two distinct forms, which are mirror images (or "enantiomorphs") of each other. The union of both forms is a compound of two snub dodecahedra, and the convex hull of both forms is a truncated icosidodecahedron. Kepler first named it in Latin as dodecahedron simum in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from either the dodecahedron or the icosahedron, called it snub icosidodecahedron, with a vertical extended Schläfli symbol and the flat Schläfli symbol sr{5,3}. Cartesian coordinates The 60 vertices of the snub dodecahedron can be generated as the images of a single seed point under repeated multiplication by two rotation matrices, which together generate the 60 rotation matrices corresponding to the 60 rotational symmetries of a regular icosahedron. In this construction the coordinates of the vertices are integral linear combinations of powers of the golden ratio φ and of ξ, the real zero of a cubic polynomial in φ. Negating all coordinates gives the mirror image of this snub dodecahedron. As a volume, the snub dodecahedron consists of 80 triangular and 12 pentagonal pyramids, and its total volume, circumradius and midradius can all be expressed in closed form in terms of ξ and φ. The 20 "icosahedral" triangles of the snub dodecahedron described above are coplanar with the faces of a regular icosahedron. The midradius of this "circumscribed" icosahedron equals 1, so the midradius of the snub dodecahedron in this scaling is the ratio between the midradii of a snub dodecahedron and of the icosahedron in which it is inscribed. The triangle–triangle dihedral angle is approximately 164.18°, and the triangle–pentagon dihedral angle is approximately 152.93°. Metric properties For a snub dodecahedron whose edge length is 1, the surface area is approximately 55.287 and the volume approximately 37.617. Its circumradius is approximately 2.156 and its midradius approximately 2.097. There are two inscribed spheres, one touching the triangular faces, and one, slightly smaller, touching the pentagonal faces. The circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74) are the four positive real roots of a single sextic equation in R². The snub dodecahedron has the highest sphericity of all Archimedean solids. If sphericity is defined as the ratio of volume squared over surface area cubed, multiplied by a constant of 36π (where this constant makes the sphericity of a sphere equal to 1), the sphericity of the snub dodecahedron is about 0.947 (a numerical check follows below). Orthogonal projections The snub dodecahedron has two especially symmetric orthogonal projections, centered on two types of faces: triangles and pentagons, corresponding to the A2 and H2 Coxeter planes. 
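The sphericity figure quoted above can be checked numerically. In the Python sketch below, the unit-edge surface area and volume are standard approximate values for the snub dodecahedron rather than quantities derived in this article:

import math

V = 37.616650  # volume of a unit-edge snub dodecahedron (approximate)
A = 55.286745  # surface area of a unit-edge snub dodecahedron (approximate)

# Sphericity as defined above: V^2 / A^3, normalized by 36*pi so that a
# perfect sphere scores exactly 1.
sphericity = 36 * math.pi * V**2 / A**3
print(round(sphericity, 3))  # ~0.947, the highest among the Archimedean solids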
Geometric relations The snub dodecahedron can be generated by taking the twelve pentagonal faces of the dodecahedron and pulling them outward so they no longer touch. At a proper distance this can create the rhombicosidodecahedron by filling in square faces between the divided edges and triangle faces between the divided vertices. But for the snub form, pull the pentagonal faces out slightly less, only add the triangle faces and leave the other gaps empty (the other gaps are rectangles at this point). Then apply an equal rotation to the centers of the pentagons and triangles, continuing the rotation until the gaps can be filled by two equilateral triangles. (The fact that the proper amount to pull the faces out is less in the case of the snub dodecahedron can be seen in either of two ways: the circumradius of the snub dodecahedron is smaller than that of the icosidodecahedron; or, the edge length of the equilateral triangles formed by the divided vertices increases when the pentagonal faces are rotated.) The snub dodecahedron can also be derived from the truncated icosidodecahedron by the process of alternation. Sixty of the vertices of the truncated icosidodecahedron form a polyhedron topologically equivalent to one snub dodecahedron; the remaining sixty form its mirror-image. The resulting polyhedron is vertex-transitive but not uniform. Alternatively, combining the vertices of the snub dodecahedron given by the Cartesian coordinates (above) and its mirror will form a semiregular truncated icosidodecahedron. The comparison between these regular and semiregular polyhedra is shown in the figure to the right. Cartesian coordinates for the vertices of this alternative snub dodecahedron are obtained by selecting sets of 12 (of 24 possible even permutations contained in the five sets of truncated icosidodecahedron Cartesian coordinates). The alternations are those with an odd number of minus signs in these three sets: and an even number of minus signs in these two sets: where is the golden ratio. The mirrors of both the regular truncated icosidodecahedron and this alternative snub dodecahedron are obtained by switching the even and odd references to both sign and position permutations. Related polyhedra and tilings This semiregular polyhedron is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n = 6, and hyperbolic plane for any higher n. The series can be considered to begin with n = 2, with one set of faces degenerated into digons. Snub dodecahedral graph In the mathematical field of graph theory, a snub dodecahedral graph is the graph of vertices and edges of the snub dodecahedron, one of the Archimedean solids. It has 60 vertices and 150 edges, and is an Archimedean graph. References External links The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra Mark S. Adams and Menno T. Kosters. Volume Solutions to the Snub Dodecahedron Chiral polyhedra Uniform polyhedra Archimedean solids Snub tilings
Snub dodecahedron
[ "Physics" ]
1,623
[ "Symmetry", "Uniform polytopes", "Snub tilings", "Tessellation", "Uniform polyhedra" ]
287,152
https://en.wikipedia.org/wiki/Gray%20%28unit%29
The gray (symbol: Gy) is the unit of ionizing radiation dose in the International System of Units (SI), defined as the absorption of one joule of radiation energy per kilogram of matter. It is used as a unit of the radiation quantity absorbed dose that measures the energy deposited by ionizing radiation in a unit mass of absorbing material, and is used for measuring the delivered dose in radiotherapy, food irradiation and radiation sterilization. It is important in predicting likely acute health effects, such as acute radiation syndrome, and is used to calculate equivalent dose using the sievert, which is a measure of the stochastic health effect on the human body. The gray is also used in radiation metrology as a unit of the radiation quantity kerma, defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation in a sample of matter per unit mass. The unit was named after British physicist Louis Harold Gray, a pioneer in the measurement of X-ray and radium radiation and their effects on living tissue. The gray was adopted as part of the International System of Units in 1975. The corresponding cgs unit to the gray is the rad (equivalent to 0.01 Gy), which remains common largely in the United States, though "strongly discouraged" in the style guide of the U.S. National Institute of Standards and Technology. Applications The gray has a number of fields of application in measuring dose: Radiobiology The measurement of absorbed dose in tissue is of fundamental importance in radiobiology and radiation therapy as it is the measure of the amount of energy the incident radiation deposits in the target tissue. The measurement of absorbed dose is a complex problem due to scattering and absorption, and many specialist dosimeters are available for these measurements, covering applications in 1-D, 2-D and 3-D. In radiation therapy, the amount of radiation applied varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). The average radiation dose from an abdominal X-ray is 0.7 milligrays (mGy), that from an abdominal CT scan is 8 mGy, that from a pelvic CT scan is 6 mGy, and that from a selective CT scan of the abdomen and the pelvis is 14 mGy. Radiation protection The absorbed dose also plays an important role in radiation protection, as it is the starting point for calculating the stochastic health risk of low levels of radiation, which is defined as the probability of cancer induction and genetic damage. The gray measures the total absorbed energy of radiation, but the probability of stochastic damage also depends on the type and energy of the radiation and the types of tissues involved. This probability is related to the equivalent dose in sieverts (Sv), which has the same dimensions as the gray. It is related to the gray by weighting factors described in the articles on equivalent dose and effective dose.
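As a minimal illustration of that weighting-factor relationship (a sketch of ours, not from the article; the factor of 20 for alpha particles is the value quoted in the next paragraph, and the function name is ours):

# Equivalent dose (Sv) = radiation weighting factor x absorbed dose (Gy).
def equivalent_dose_sv(absorbed_dose_gy, weighting_factor):
    return weighting_factor * absorbed_dose_gy

print(equivalent_dose_sv(1.0, 1))   # 1 Gy of gamma rays -> 1 Sv
print(equivalent_dose_sv(1.0, 20))  # 1 Gy of alpha particles -> 20 Sv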
The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H." The accompanying diagrams show how absorbed dose (in grays) is first obtained by computational techniques, and from this value the equivalent doses are derived. For X-rays and gamma rays the gray is numerically the same value when expressed in sieverts, but for alpha particles one gray is equivalent to 20 sieverts, and a radiation weighting factor is applied accordingly. Radiation poisoning The gray is conventionally used to express the severity of what are known as "tissue effects" from doses received in acute exposure to high levels of ionizing radiation. These are effects that are certain to happen, as opposed to the uncertain effects of low levels of radiation that have a probability of causing damage. A whole-body acute exposure to 5 grays or more of high-energy radiation usually leads to death within 14 days. LD1 is 2.5 Gy, LD50 is 5 Gy and LD99 is 8 Gy. The LD50 dose represents 375 joules for a 75 kg adult. Absorbed dose in matter The gray is used to measure absorbed dose rates in non-tissue materials for processes such as radiation hardening, food irradiation and electron irradiation. Measuring and controlling the value of absorbed dose is vital to ensuring correct operation of these processes. Kerma Kerma ("kinetic energy released per unit mass") is used in radiation metrology as a measure of the liberated energy of ionisation due to irradiation, and is expressed in grays. Importantly, kerma dose is different from absorbed dose, depending on the radiation energies involved, partially because ionization energy is not accounted for. Whilst roughly equal at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons. Kerma, when applied to air, is equivalent to the legacy roentgen unit of radiation exposure, but there is a difference in the definition of these two units. The gray is defined independently of any target material; however, the roentgen was defined specifically by the ionisation effect in dry air, which did not necessarily represent the effect on other media. Development of the absorbed dose concept and the gray Wilhelm Röntgen discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects, where they were a revolutionary improvement over previous techniques. Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity, and various countries developed their own, using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925 proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU, and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn.
One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation. This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance: the ionisation effect in dry air. In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the gram roentgen (symbol: gr), was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units. In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI. The CCU decided to define the SI unit of absorbed radiation as energy deposited per unit mass of absorbent material, which is how the rad had been defined, but in MKS units it would be equivalent to the joule per kilogram. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was thus equal to 100 rad. Notably, the centigray (numerically equivalent to the rad) is still widely used to describe absolute absorbed doses in radiotherapy. The adoption of the gray by the 15th General Conference on Weights and Measures as the unit of measure of the absorption of ionizing radiation, specific energy absorption, and of kerma in 1975 was the culmination of over half a century of work, both in the understanding of the nature of ionizing radiation and in the creation of coherent radiation quantities and units. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units. See also Sievert, SI derived unit of dose equivalent radiation Notes References External links An account of chronological differences between USA and ICRP dosimetry systems. Nuclear physics Radiation protection Radioactivity SI derived units Units of radiation dose
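A small sketch (ours, not from the article) tying together the conversions in the history above: 1 Gy = 1 J/kg = 100 rad, and the rad was defined as 100 erg/g.

# Legacy dose-unit conversions from the history above.
RAD_PER_GRAY = 100       # 1 Gy = 100 rad
ERG_PER_G_PER_RAD = 100  # 1 rad = 100 erg/g
ERG_PER_JOULE = 1e7      # CGS-to-SI energy conversion

def gray_to_rad(gray):
    return gray * RAD_PER_GRAY

# Consistency check: 1 Gy = 1 J/kg = 1e7 erg per 1e3 g = 1e4 erg/g = 100 rad.
assert gray_to_rad(1.0) * ERG_PER_G_PER_RAD == ERG_PER_JOULE / 1e3
print(gray_to_rad(0.05))  # 0.05 Gy = 5 centigray -> 5.0 rad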
Gray (unit)
[ "Physics", "Chemistry", "Mathematics" ]
2,090
[ "Quantity", "Units of radiation dose", "Radioactivity", "Nuclear physics", "Units of measurement" ]
287,229
https://en.wikipedia.org/wiki/Inverse%20function%20theorem
In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function. In multivariable calculus, this theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth. The theorem was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem. Statements For functions of a single variable, the theorem states that if f is a continuously differentiable function with nonzero derivative at the point a, then f is injective (or bijective onto the image) in a neighborhood of a, the inverse f⁻¹ is continuously differentiable near b = f(a), and the derivative of the inverse function at b is the reciprocal of the derivative of f at a: (f⁻¹)'(b) = 1/f'(a) = 1/f'(f⁻¹(b)). It can happen that a function f may be injective near a point a while f'(a) = 0. An example is f(x) = (x − a)³. In fact, for such a function, the inverse cannot be differentiable at b = f(a), since if f⁻¹ were differentiable at b, then, by the chain rule, 1 = (f⁻¹)'(b) f'(a), which implies f'(a) ≠ 0. (The situation is different for holomorphic functions; see #Holomorphic inverse function theorem below.) For functions of more than one variable, the theorem states that if f is a continuously differentiable function from an open subset A of R^n into R^n, and the derivative f'(a) is invertible at a point a (that is, the determinant of the Jacobian matrix of f at a is non-zero), then there exist neighborhoods U of a in A and V of b = f(a) such that f(U) ⊆ V and f : U → V is bijective. Writing f = (f₁, …, fₙ), this means that the system of n equations yᵢ = fᵢ(x₁, …, xₙ) has a unique solution for x₁, …, xₙ in terms of y₁, …, yₙ when x ∈ U, y ∈ V. Note that the theorem does not say f is bijective onto the image where f' is invertible but that it is locally bijective where f' is invertible. Moreover, the theorem says that the inverse function f⁻¹ : V → U is continuously differentiable, and its derivative at b = f(a) is the inverse map of f'(a); i.e., (f⁻¹)'(b) = f'(a)⁻¹. In other words, if J(f⁻¹)(b) and J(f)(a) are the Jacobian matrices representing (f⁻¹)'(b) and f'(a), this means: J(f⁻¹)(b) = J(f)(a)⁻¹. The hard part of the theorem is the existence and differentiability of f⁻¹. Assuming this, the inverse derivative formula follows from the chain rule applied to f⁻¹ ∘ f = id. (Indeed, differentiating f⁻¹ ∘ f = id at a gives (f⁻¹)'(b) ∘ f'(a) = I.) Since taking the inverse is infinitely differentiable, the formula for the derivative of the inverse shows that if f is continuously k times differentiable, with invertible derivative at the point a, then the inverse is also continuously k times differentiable. Here k is a positive integer or ∞. There are two variants of the inverse function theorem. Given a continuously differentiable map f : U → R^m, the first is: The derivative f'(a) is surjective (i.e., the Jacobian matrix representing it has rank m) if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that f ∘ g = identity near b, and the second is: The derivative f'(a) is injective if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that g ∘ f = identity near a. In the first case (when f'(a) is surjective), the point b = f(a) is called a regular value. Since f ∘ g = identity, the first case is equivalent to saying that b = f(a) is not in the image of critical points x (a critical point is a point x such that the kernel of f'(x) is nonzero).
The statement in the first case is a special case of the submersion theorem. These variants are restatements of the inverse function theorem. Indeed, in the first case when f'(a) is surjective, we can find an (injective) linear map T such that f'(a) ∘ T = I. Define h(x) = a + Tx so that we have: (f ∘ h)'(0) = f'(a) ∘ T = I. Thus, by the inverse function theorem, f ∘ h has an inverse near 0; i.e., f ∘ h ∘ (f ∘ h)⁻¹ = identity near b, so g = h ∘ (f ∘ h)⁻¹ is the required function. The second case (f'(a) is injective) is seen in a similar way. Example Consider the vector-valued function F : R² → R² defined by: F(x, y) = (eˣ cos y, eˣ sin y). The Jacobian matrix of it at (x, y) has rows (eˣ cos y, −eˣ sin y) and (eˣ sin y, eˣ cos y), with the determinant: e²ˣ cos²y + e²ˣ sin²y = e²ˣ. The determinant e²ˣ is nonzero everywhere. Thus the theorem guarantees that, for every point in R², there exists a neighborhood about it over which F is invertible. This does not mean F is invertible over its entire domain: in this case F is not even injective since it is periodic: F(x, y + 2π) = F(x, y). Counter-example If one drops the assumption that the derivative is continuous, the function need no longer be invertible. For example f(x) = x + 2x² sin(1/x) for x ≠ 0, with f(0) = 0, has discontinuous derivative f'(x) = 1 + 4x sin(1/x) − 2 cos(1/x) for x ≠ 0 and f'(0) = 1, and f' vanishes arbitrarily close to x = 0. These critical points are local max/min points of f, so f is not one-to-one (and not invertible) on any interval containing x = 0. Intuitively, the slope f'(0) = 1 does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation. Methods of proof As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations). Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem (see Generalizations below). An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set. This approach has an advantage that the proof generalizes to a situation where there is no Cauchy completeness (see Over a real closed field below). Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible. Proof for single-variable functions We want to prove the following: Let D ⊆ R be an open set with a continuously differentiable function f defined on D, and suppose that f'(a) ≠ 0. Then there exists an open interval I with a ∈ I such that f maps I bijectively onto the open interval J = f(I), and such that the inverse function f⁻¹ : J → I is continuously differentiable, and for any y ∈ J, if x ∈ I is such that f(x) = y, then (f⁻¹)'(y) = 1/f'(x). We may without loss of generality assume that f'(a) > 0. Given that D is an open set and f' is continuous at a, there exists r > 0 such that (a − r, a + r) ⊆ D and |f'(x) − f'(a)| < f'(a)/2 for all |x − a| < r. In particular, f'(x) > f'(a)/2 > 0 for |x − a| < r. This shows that f is strictly increasing for all |x − a| < r. Let δ > 0 be such that δ < r. Then f(a − δ) < f(a) < f(a + δ). By the intermediate value theorem, we find that f maps the interval (a − δ, a + δ) bijectively onto (f(a − δ), f(a + δ)). Denote by I = (a − δ, a + δ) and J = (f(a − δ), f(a + δ)). Then f : I → J is a bijection and the inverse f⁻¹ : J → I exists. The fact that f⁻¹ is differentiable follows from the differentiability of f. In particular, the result follows from the fact that if f is a strictly monotonic and continuous function that is differentiable at x₀ with f'(x₀) ≠ 0, then f⁻¹ is differentiable with (f⁻¹)'(y₀) = 1/f'(x₀), where y₀ = f(x₀) (a standard result in analysis). This completes the proof. A proof using successive approximation To prove existence, it can be assumed after an affine transformation that f(0) = 0 and f'(0) = I, so that a = b = 0. By the mean value theorem for vector-valued functions, for a differentiable function u : [0, 1] → R^m, |u(1) − u(0)| ≤ sup over 0 ≤ t ≤ 1 of |u'(t)|.
Setting , it follows that Now choose so that for . Suppose that and define inductively by and . The assumptions show that if then . In particular implies . In the inductive scheme and . Thus is a Cauchy sequence tending to . By construction as required. To check that is C1, write so that . By the inequalities above, so that . On the other hand if , then . Using the geometric series for , it follows that . But then tends to 0 as and tend to 0, proving that is C1 with . The proof above is presented for a finite-dimensional space, but applies equally well for Banach spaces. If an invertible function is Ck with , then so too is its inverse. This follows by induction using the fact that the map on operators is Ck for any (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant). The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander. A proof using the contraction mapping principle Here is a proof based on the contraction mapping theorem. Specifically, following T. Tao, it uses the following consequence of the contraction mapping theorem. Basically, the lemma says that a small perturbation of the identity map by a contraction map is injective and preserves a ball in some sense. Assuming the lemma for a moment, we prove the theorem first. As in the above proof, it is enough to prove the special case when and . Let . The mean value inequality applied to says: Since and is continuous, we can find an such that for all in . Then the earlier lemma says that is injective on and . Then is bijective and thus has an inverse. Next, we show the inverse is continuously differentiable (this part of the argument is the same as that in the previous proof). This time, let denote the inverse of and . For , we write or . Now, by the earlier estimate, we have and so . Writing for the operator norm, As , we have and is bounded. Hence, is differentiable at with the derivative . Also, is the same as the composition where ; so is continuous. It remains to show the lemma. First, we have: which is to say This proves the first part. Next, we show . The idea is to note that this is equivalent to, given a point in , finding a fixed point of the map where such that and the bar means a closed ball. To find a fixed point, we use the contraction mapping theorem, and checking that is a well-defined strict-contraction mapping is straightforward. Finally, we have: since As might be clear, this proof is not substantially different from the previous one, as the proof of the contraction mapping theorem is by successive approximation. Applications Implicit function theorem The inverse function theorem can be used to solve a system of equations, i.e., expressing as functions of , provided the Jacobian matrix is invertible. The implicit function theorem allows one to solve a more general system of equations: for in terms of . Though more general, the theorem is actually a consequence of the inverse function theorem. First, the precise statement of the implicit function theorem is as follows: given a map , if , is continuously differentiable in a neighborhood of and the derivative of at is invertible, then there exists a differentiable map for some neighborhoods of such that . Moreover, if , then ; i.e., is a unique solution. To see this, consider the map . By the inverse function theorem, has the inverse for some neighborhoods .
We then have: implying and Thus has the required property. Giving a manifold structure In differential geometry, the inverse function theorem is used to show that the pre-image of a regular value under a smooth map is a manifold. Indeed, let be such a smooth map from an open subset of (since the result is local, there is no loss of generality with considering such a map). Fix a point in and then, by permuting the coordinates on , assume the matrix has rank . Then the map is such that has rank . Hence, by the inverse function theorem, we find the smooth inverse of defined in a neighborhood of . We then have which implies That is, after the change of coordinates by , is a coordinate projection (this fact is known as the submersion theorem). Moreover, since is bijective, the map is bijective with the smooth inverse. That is to say, gives a local parametrization of around . Hence, is a manifold. (Note the proof is quite similar to the proof of the implicit function theorem and, in fact, the implicit function theorem can be also used instead.) More generally, the theorem shows that if a smooth map is transversal to a submanifold , then the pre-image is a submanifold. Global version The inverse function theorem is a local result; it applies to each point. A priori, the theorem thus only shows the function is locally bijective (or locally diffeomorphic of some class). The next topological lemma can be used to upgrade local injectivity to injectivity that is global to some extent. Proof: First assume is compact. If the conclusion of the theorem is false, we can find two sequences such that and each converge to some points in . Since is injective on , . Now, if is large enough, are in a neighborhood of where is injective; thus, , a contradiction. In general, consider the set . It is disjoint from for any subset where is injective. Let be an increasing sequence of compact subsets with union and with contained in the interior of . Then, by the first part of the proof, for each , we can find a neighborhood of such that . Then has the required property. (See also for an alternative approach.) The lemma implies the following (a sort of) global version of the inverse function theorem: Note that if is a point, then the above is the usual inverse function theorem. Holomorphic inverse function theorem There is a version of the inverse function theorem for holomorphic maps. The theorem follows from the usual inverse function theorem. Indeed, let denote the Jacobian matrix of in variables and for that in . Then we have , which is nonzero by assumption. Hence, by the usual inverse function theorem, is injective near with continuously differentiable inverse. By chain rule, with , where the left-hand side and the first term on the right vanish since and are holomorphic. Thus, for each . Similarly, there is the implicit function theorem for holomorphic functions. As already noted earlier, it can happen that an injective smooth function has the inverse that is not smooth (e.g., in a real variable). This is not the case for holomorphic functions because of: Formulations for manifolds The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map (of class ), if the differential of , is a linear isomorphism at a point in then there exists an open neighborhood of such that is a diffeomorphism. 
Note that this implies that the connected components of and containing p and F(p) have the same dimension, as is already directly implied from the assumption that dFp is an isomorphism. If the derivative of is an isomorphism at all points in then the map is a local diffeomorphism. Generalizations Banach spaces The inverse function theorem can also be generalized to differentiable maps between Banach spaces and . Let be an open neighbourhood of the origin in and a continuously differentiable function, and assume that the Fréchet derivative of at 0 is a bounded linear isomorphism of onto . Then there exists an open neighbourhood of in and a continuously differentiable map such that for all in . Moreover, is the only sufficiently small solution of the equation . There is also the inverse function theorem for Banach manifolds. Constant rank theorem The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point. Specifically, if has constant rank near a point , then there are open neighborhoods of and of and there are diffeomorphisms and such that and such that the derivative is equal to . That is, "looks like" its derivative near . The set of points such that the rank is constant in a neighborhood of is an open dense subset of ; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain. When the derivative of is injective (resp. surjective) at a point , it is also injective (resp. surjective) in a neighborhood of , and hence the rank of is constant on that neighborhood, and the constant rank theorem applies. Polynomial functions If true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether this is true or false, even in the case of two variables. This is a major open problem in the theory of polynomials. Selections When with , is times continuously differentiable, and the Jacobian at a point is of rank , the inverse of may not be unique. However, there exists a local selection function such that for all in a neighborhood of , , is times continuously differentiable in this neighborhood, and ( is the Moore–Penrose pseudoinverse of ). Over a real closed field The inverse function theorem also holds over a real closed field k (or an O-minimal structure). Precisely, the theorem holds for a semialgebraic (or definable) map between open subsets of that is continuously differentiable. The usual proof of the IFT uses Banach's fixed point theorem, which relies on the Cauchy completeness. That part of the argument is replaced by the use of the extreme value theorem, which does not need completeness. Explicitly, in , the Cauchy completeness is used only to establish the inclusion . Here, we shall directly show instead (which is enough). Given a point in , consider the function defined on a neighborhood of . If , then and so , since is invertible. Now, by the extreme value theorem, attains a minimum at some point on the closed ball , which can be shown to lie in using . Since , , which proves the claimed inclusion.
Alternatively, one can deduce the theorem from the one over real numbers by Tarski's principle. See also Nash–Moser theorem Notes References Multivariable calculus Differential topology Inverse functions Theorems in real analysis Theorems in calculus
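As a concrete numerical illustration of the single-variable statement (a sketch of ours, not part of the article), one can invert f(x) = eˣ with the Newton iteration mentioned under "Methods of proof" and confirm that the derivative of the inverse is the reciprocal of f':

import math

f = math.exp       # f(x) = e**x, with f'(x) = e**x nonzero everywhere
fprime = math.exp

def invert_near(y, x0=0.0, tol=1e-12):
    # Solve f(x) = y by Newton's method, one of the proof techniques above.
    x = x0
    for _ in range(100):
        step = (y - f(x)) / fprime(x)
        x += step
        if abs(step) < tol:
            break
    return x

a = 1.0
b = f(a)
h = 1e-6
# Central-difference derivative of the inverse at b vs. the theorem's 1/f'(a).
numeric = (invert_near(b + h) - invert_near(b - h)) / (2 * h)
print(numeric, 1 / fprime(a))  # both approx. 0.3678794... = 1/e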
Inverse function theorem
[ "Mathematics" ]
3,803
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theorems in real analysis", "Topology", "Differential topology", "Multivariable calculus" ]
287,301
https://en.wikipedia.org/wiki/Photoflash%20capacitor
A photoflash capacitor is a high-voltage electrolytic capacitor used in camera flashes and in solid-state laser power supplies. Its usual purpose is to briefly power a flash lamp, used to illuminate a photographic subject or optically pump a laser rod. As flash tubes require very high current for a very short time to operate, photoflash capacitors are designed to supply high discharge current pulses without excessive internal heating. Fundamentals The principal properties of a capacitor are capacitance, working voltage, equivalent series resistance (ESR), equivalent series inductance (ESL), and working temperature. Compared with electrolytic capacitors usually used for power supply filtering at power frequency, a photoflash capacitor is designed to have lower ESR, ESL, and capacitance manufacturing tolerance, but does not need as high a working temperature. Design The light energy emitted by a flash is supplied by the capacitor, and is proportional to the product of the capacitance and the voltage squared; photoflash capacitors may have capacitance in the range 80–240 microfarads (μF) and voltages from 180 to 330 volts for flash units built into small disposable and compact cameras, increasing for units delivering higher light energy. A typical manufacturer's range includes capacitors operating at 330–380 V, with capacitance from 80 to 1,500 μF. While normal electrolytic capacitors are often operated at not more than half their nominal voltage due to heavy derating, photoflash capacitors are typically operated at their nominal working voltage (labelled as "WV" or "W.V." rather than just "V"). Photoflash capacitors are not subject to the high temperatures of cased electronic equipment in continuous operation, with nearby components and sometimes the capacitors themselves dissipating heat; they are often rated at a maximum operating temperature of typically 55 °C, compared to 85–105 °C or more for capacitors for continuous use in electronic equipment. In most electronic applications an electrolytic capacitor can have a capacitance much larger than its nominal value without detracting from circuit performance; general-purpose electrolytics are often specified to have capacitance between 20% below and 80% above rated value, although tighter tolerances are available. The light energy of a flash is proportional to the capacitance, and large variations are not acceptable; typical tolerance is −10/+20%. Photoflash capacitors are designed to deliver a brief pulse of very high current, and are consequently sometimes used in railgun and coilgun designs. References Capacitors Flash photography
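For illustration, the energy relation in the Design section is the standard capacitor formula E = ½CV²; a quick sketch (ours, with representative values taken from the ranges quoted above):

def flash_energy_joules(capacitance_farads, volts):
    # Energy stored in a capacitor: E = 0.5 * C * V**2.
    return 0.5 * capacitance_farads * volts**2

# A compact-camera photoflash capacitor near the top of the quoted range:
print(flash_energy_joules(120e-6, 330))  # about 6.5 J per flash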
Photoflash capacitor
[ "Physics" ]
574
[ "Capacitance", "Capacitors", "Physical quantities" ]
287,555
https://en.wikipedia.org/wiki/Pseudo-Riemannian%20manifold
In mathematical physics, a pseudo-Riemannian manifold, also called a semi-Riemannian manifold, is a differentiable manifold with a metric tensor that is everywhere nondegenerate. This is a generalization of a Riemannian manifold in which the requirement of positive-definiteness is relaxed. Every tangent space of a pseudo-Riemannian manifold is a pseudo-Euclidean vector space. A special case used in general relativity is a four-dimensional Lorentzian manifold for modeling spacetime, where tangent vectors can be classified as timelike, null, and spacelike. Introduction Manifolds In differential geometry, a differentiable manifold is a space that is locally similar to a Euclidean space. In an n-dimensional Euclidean space any point can be specified by n real numbers. These are called the coordinates of the point. An n-dimensional differentiable manifold is a generalisation of n-dimensional Euclidean space. In a manifold it may only be possible to define coordinates locally. This is achieved by defining coordinate patches: subsets of the manifold that can be mapped into n-dimensional Euclidean space. See Manifold, Differentiable manifold, Coordinate patch for more details. Tangent spaces and metric tensors Associated with each point p in an n-dimensional differentiable manifold M is a tangent space (denoted TpM). This is an n-dimensional vector space whose elements can be thought of as equivalence classes of curves passing through the point p. A metric tensor is a non-degenerate, smooth, symmetric, bilinear map that assigns a real number to pairs of tangent vectors at each tangent space of the manifold. Denoting the metric tensor by g we can express this as g : TpM × TpM → R. The map is symmetric and bilinear so if X, Y, Z are tangent vectors at a point p to the manifold M then we have g(X, Y) = g(Y, X) and g(aX + Y, Z) = a g(X, Z) + g(Y, Z) for any real number a. That g is non-degenerate means there is no non-zero X ∈ TpM such that g(X, Y) = 0 for all Y ∈ TpM. Metric signatures Given a metric tensor g on an n-dimensional real manifold, the quadratic form q(x) = g(x, x) associated with the metric tensor applied to each vector of any orthogonal basis produces n real values. By Sylvester's law of inertia, the numbers of positive, negative and zero values produced in this manner are invariants of the metric tensor, independent of the choice of orthogonal basis. The signature (p, q, r) of the metric tensor gives these numbers, shown in the same order. A non-degenerate metric tensor has r = 0 and the signature may be denoted (p, q), where p + q = n. Definition A pseudo-Riemannian manifold is a differentiable manifold M that is equipped with an everywhere non-degenerate, smooth, symmetric metric tensor g. Such a metric is called a pseudo-Riemannian metric. Applied to a vector field, the resulting scalar field value at any point of the manifold can be positive, negative or zero. The signature of a pseudo-Riemannian metric is (p, q), where both p and q are non-negative. The non-degeneracy condition together with continuity implies that p and q remain unchanged throughout the manifold (assuming it is connected). Lorentzian manifold A Lorentzian manifold is an important special case of a pseudo-Riemannian manifold in which the signature of the metric is (1, n − 1) (equivalently, (n − 1, 1); see Sign convention). Such metrics are called Lorentzian metrics. They are named after the Dutch physicist Hendrik Lorentz. Applications in physics After Riemannian manifolds, Lorentzian manifolds form the most important subclass of pseudo-Riemannian manifolds. They are important in applications of general relativity.
A principal premise of general relativity is that spacetime can be modeled as a 4-dimensional Lorentzian manifold of signature (3, 1) or, equivalently, (1, 3). Unlike Riemannian manifolds with positive-definite metrics, an indefinite signature allows tangent vectors to be classified into timelike, null or spacelike. With a signature of (p, 1) or (1, q), the manifold is also locally (and possibly globally) time-orientable (see Causal structure). Properties of pseudo-Riemannian manifolds Just as Euclidean space can be thought of as the local model of a Riemannian manifold, Minkowski space with the flat Minkowski metric is the local model of a Lorentzian manifold. Likewise, the model space for a pseudo-Riemannian manifold of signature (p, q) is pseudo-Euclidean space R^p,q, for which there exist coordinates xi such that the metric takes the form ds² = dx₁² + ⋯ + dx_p² − dx_(p+1)² − ⋯ − dx_(p+q)². Some theorems of Riemannian geometry can be generalized to the pseudo-Riemannian case. In particular, the fundamental theorem of Riemannian geometry is true of all pseudo-Riemannian manifolds. This allows one to speak of the Levi-Civita connection on a pseudo-Riemannian manifold along with the associated curvature tensor. On the other hand, there are many theorems in Riemannian geometry that do not hold in the generalized case. For example, it is not true that every smooth manifold admits a pseudo-Riemannian metric of a given signature; there are certain topological obstructions. Furthermore, a submanifold does not always inherit the structure of a pseudo-Riemannian manifold; for example, the metric tensor becomes zero on any light-like curve. The Clifton–Pohl torus provides an example of a pseudo-Riemannian manifold that is compact but not complete, a combination of properties that the Hopf–Rinow theorem disallows for Riemannian manifolds. See also Causality conditions Globally hyperbolic manifold Hyperbolic partial differential equation Orientable manifold Spacetime Notes References External links Bernhard Riemann Differential geometry Riemannian geometry Riemannian manifolds Smooth manifolds
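To make the timelike/null/spacelike classification mentioned above concrete, here is a small sketch (ours, not from the article) using the flat Minkowski metric with the sign convention diag(1, −1, −1, −1) on coordinates (t, x, y, z); the function names are ours:

# Classify tangent vectors with the flat Minkowski metric, signature (1, 3),
# using the convention g = diag(1, -1, -1, -1) and units with c = 1.
def minkowski_norm(v):
    t, x, y, z = v
    return t * t - x * x - y * y - z * z

def classify(v):
    q = minkowski_norm(v)
    if q > 0:
        return "timelike"
    if q < 0:
        return "spacelike"
    return "null"

print(classify((1, 0, 0, 0)))  # timelike
print(classify((1, 1, 0, 0)))  # null (tangent to a light ray)
print(classify((0, 1, 0, 0)))  # spacelike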
Pseudo-Riemannian manifold
[ "Mathematics" ]
1,132
[ "Riemannian manifolds", "Space (mathematics)", "Metric spaces" ]
287,668
https://en.wikipedia.org/wiki/Anthracene
Anthracene is a solid polycyclic aromatic hydrocarbon (PAH) of formula C14H10, consisting of three fused benzene rings. It is a component of coal tar. Anthracene is used in the production of the red dye alizarin and other dyes, as a scintillator to detect high-energy particles, and in the production of pharmaceutical drugs. Anthracene is colorless but exhibits a blue (400–500 nm peak) fluorescence under ultraviolet radiation. History and etymology Crude anthracene (with a melting point of only 180 °C) was discovered in 1832 by Jean-Baptiste Dumas and Auguste Laurent, who crystallized it from a fraction of coal tar later known as "anthracene oil". Since their (inaccurate) measurements showed the proportions of carbon and hydrogen in it to be the same as in naphthalene, Laurent called it paranaphtaline in his 1835 publication of the discovery, which is translated to English as paranaphthalene. Two years later, however, he decided to rename the compound to its modern name, derived from the Greek anthrax, meaning coal, because after discovering other polyaromatic hydrocarbons he decided it was only one of several isomers of naphthalene. This notion was disproved in the 1850s and 1860s. Occurrence and production Coal tar, which contains around 1.5% anthracene, remains a major source of this material. Common impurities are phenanthrene and carbazole. The mineral form of anthracene is called freitalite and is related to a coal deposit. A classic laboratory method for the preparation of anthracene is by cyclodehydration of o-methyl- or o-methylene-substituted diarylketones in the so-called Elbs reaction, for example from o-tolyl phenyl ketone. Reactions Reduction Reduction of anthracene with alkali metals yields the deeply colored radical anion salts M+[anthracene]− (M = Li, Na, K). Hydrogenation gives 9,10-dihydroanthracene, preserving the aromaticity of the two flanking rings. Cycloadditions In any solvent except water, anthracene photodimerizes by the action of UV light: The dimer, called dianthracene (or sometimes paranthracene), is connected by a pair of new carbon-carbon bonds, the result of the [4+4] cycloaddition. It reverts to anthracene thermally or with UV irradiation below 300 nm. Substituted anthracene derivatives behave similarly. The reaction is affected by the presence of oxygen. Anthracene also reacts with the dienophile singlet oxygen in a [4+2]-cycloaddition (Diels–Alder reaction): With electrophiles Chemical oxidation occurs readily, giving anthraquinone, C14H8O2, for example using hydrogen peroxide and vanadyl acetylacetonate. Electrophilic substitution of anthracene occurs at the 9 position. For example, formylation affords 9-anthracenecarboxaldehyde. Substitution at other positions is effected indirectly, for example starting with anthraquinone. Bromination of anthracene gives 9,10-dibromoanthracene. Uses Anthracene is converted mainly to anthraquinone, a precursor to dyes. Niche Anthracene, a wide band-gap organic semiconductor, is used as a scintillator for detectors of high-energy photons, electrons and alpha particles. Plastics, such as polyvinyltoluene, can be doped with anthracene to produce a plastic scintillator that is approximately water-equivalent for use in radiation therapy dosimetry. Anthracene's emission spectrum peaks between 400 nm and 440 nm. It is also used in wood preservatives, insecticides, and coating materials. Anthracene is commonly used as a UV tracer in conformal coatings applied to printed wiring boards. The anthracene tracer allows the conformal coating to be inspected under UV light.
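As a back-of-the-envelope companion to the scintillation figures above (a sketch of ours, not from the article), the photon energy E = hc/λ at the quoted emission peak:

# Photon energy at anthracene's emission peak, E = h*c/wavelength.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
J_PER_EV = 1.602176634e-19

for wavelength_nm in (400, 440):
    energy_ev = H * C / (wavelength_nm * 1e-9) / J_PER_EV
    print(wavelength_nm, "nm ->", round(energy_ev, 2), "eV")
# about 3.10 eV at 400 nm and 2.82 eV at 440 nm (blue light)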
Derivatives A variety of anthracene derivatives find specialized uses. Derivatives having a hydroxyl group are 1-hydroxyanthracene and 2-hydroxyanthracene, homologous to phenol and the naphthols; hydroxyanthracenes (also called anthrols, or anthracenols) are pharmacologically active. Anthracene may also be found with multiple hydroxyl groups, as in 9,10-dihydroxyanthracene. Some anthracene derivatives are used as pharmaceutical drugs, including bisantrene, trazitiline, and benzoctamine. Electronics More recently, crystalline anthracene was found to be a useful wide band-gap semiconductor in devices such as organic field-effect transistors and scintillators for detecting high-energy subatomic particles. Occurrence Anthracene, like many other polycyclic aromatic hydrocarbons, is generated during combustion processes. Exposure to humans happens mainly through tobacco smoke and ingestion of food contaminated with combustion products. Toxicology Many investigations indicate that anthracene is noncarcinogenic: "consistently negative findings in numerous in vitro and in vivo genotoxicity tests". Early experiments suggested otherwise because crude samples were contaminated with other polycyclic aromatic hydrocarbons. Furthermore, it is readily biodegraded in soil. It is especially susceptible to degradation in the presence of light. The International Agency for Research on Cancer (IARC) classifies anthracene as IARC group 2B, possibly carcinogenic to humans. See also 9,10-Dithioanthracene, a derivative with two thiol groups added to the central ring Phenanthrene Tetracene References Cited sources External links IARC – Monograph 32 National Pollutant Inventory – Polycyclic Aromatic Hydrocarbon Fact Sheet European Chemicals Agency – ECHA Organic semiconductors Phosphors and scintillators IARC Group 2B carcinogens Ionising radiation detectors Acenes PBT substances Polycyclic aromatic hydrocarbons
Anthracene
[ "Chemistry", "Technology", "Engineering" ]
1,289
[ "Luminescence", "Radioactive contamination", "Semiconductor materials", "Molecular electronics", "Measuring instruments", "Ionising radiation detectors", "Phosphors and scintillators", "Organic semiconductors" ]
288,044
https://en.wikipedia.org/wiki/Gibbs%20paradox
In statistical mechanics, a semi-classical derivation of entropy that does not take into account the indistinguishability of particles yields an expression for entropy which is not extensive (is not proportional to the amount of substance in question). This leads to a paradox known as the Gibbs paradox, after Josiah Willard Gibbs, who proposed this thought experiment in 1874‒1875. The paradox allows for the entropy of closed systems to decrease, violating the second law of thermodynamics. A related paradox is the "mixing paradox". If one takes the perspective that the definition of entropy must be changed so as to ignore particle permutation, in the thermodynamic limit, the paradox is averted. Illustration of the problem Gibbs considered the following difficulty that arises if the ideal gas entropy is not extensive. Two containers of an ideal gas sit side-by-side. The gas in container #1 is identical in every respect to the gas in container #2 (i.e. in volume, mass, temperature, pressure, etc.). Accordingly, they have the same entropy S. Now a door in the container wall is opened to allow the gas particles to mix between the containers. No macroscopic changes occur, as the system is in equilibrium. But if the formula for entropy is not extensive, the entropy of the combined system will not be 2S. In fact, the particular non-extensive entropy quantity considered by Gibbs predicts additional entropy (more than 2S). Closing the door then reduces the entropy again to S per box, in apparent violation of the second law of thermodynamics. As understood by Gibbs, and reemphasized more recently, this is a misuse of Gibbs' non-extensive entropy quantity. If the gas particles are distinguishable, closing the door will not return the system to its original state: many of the particles will have switched containers. There is a freedom in defining what is "ordered", and it would be a mistake to conclude that the entropy has not increased. In particular, Gibbs' non-extensive entropy quantity for an ideal gas is not intended for situations where the number of particles changes. The paradox is averted by assuming the indistinguishability (at least effective indistinguishability) of the particles in the volume. This results in the extensive Sackur–Tetrode equation for entropy, as derived next. Calculating the entropy of ideal gas, and making it extensive In classical mechanics, the state of an ideal gas of energy U, volume V and with N particles, each particle having mass m, is represented by specifying the momentum vector p and the position vector x for each particle. This can be thought of as specifying a point in a 6N-dimensional phase space, where each of the axes corresponds to one of the momentum or position coordinates of one of the particles. The set of points in phase space that the gas could occupy is specified by the constraint that the gas will have a particular energy: and be contained inside of the volume V (let's say V is a cube of side X so that ): for and The first constraint defines the surface of a 3N-dimensional hypersphere of radius (2mU)^(1/2) and the second is a 3N-dimensional hypercube of volume V^N. These combine to form a 6N-dimensional hypercylinder. Just as the area of the wall of a cylinder is the circumference of the base times the height, so the area φ of the wall of this hypercylinder is: The entropy is proportional to the logarithm of the number of states that the gas could have while satisfying these constraints.
In classical physics, the number of states is infinitely large, but according to quantum mechanics it is finite. Before the advent of quantum mechanics, this infinity was regularized by making phase space discrete. Phase space was divided up in blocks of volume h^3N. The constant h thus appeared as a result of a mathematical trick and thought to have no physical significance. However, using quantum mechanics one recovers the same formalism in the semi-classical limit, but now with h being the Planck constant. One can qualitatively see this from Heisenberg's uncertainty principle; a volume in the 6N-dimensional phase space smaller than h^3N (h is the Planck constant) cannot be specified. To compute the number of states we must compute the volume in phase space in which the system can be found and divide that by h^3N. This leads us to another problem: The volume seems to approach zero, as the region in phase space in which the system can be is an area of zero thickness. This problem is an artifact of having specified the energy U with infinite accuracy. In a generic system without symmetries, a full quantum treatment would yield a discrete non-degenerate set of energy eigenstates. An exact specification of the energy would then fix the precise state the system is in, so the number of states available to the system would be one; the entropy would thus be zero. When we specify the internal energy to be U, what we really mean is that the total energy of the gas lies somewhere in an interval of length δU around U. Here δU is taken to be very small; it turns out that the entropy doesn't depend strongly on the choice of δU for large N. This means that the above "area" φ must be extended to a shell of a thickness equal to an uncertainty in momentum, so the entropy is given by: where the constant of proportionality is k, the Boltzmann constant. Using Stirling's approximation for the Gamma function, which omits terms of less than order N, the entropy for large N becomes: This quantity is not extensive, as can be seen by considering two identical volumes with the same particle number and the same energy. Suppose the two volumes are separated by a barrier in the beginning. Removing or reinserting the wall is reversible, but the entropy increases by an amount 2Nk ln 2 when the barrier is removed, which is in contradiction to thermodynamics if you re-insert the barrier. This is the Gibbs paradox. The paradox is resolved by postulating that the gas particles are in fact indistinguishable. This means that all states that differ only by a permutation of particles should be considered as the same state. For example, if we have a 2-particle gas and we specify AB as a state of the gas where the first particle (A) has momentum p1 and the second particle (B) has momentum p2, then this state as well as the BA state where the B particle has momentum p1 and the A particle has momentum p2 should be counted as the same state. For an N-particle gas, there are N! states which are identical in this sense, if one assumes that each particle is in a different single particle state. One can safely make this assumption provided the gas isn't at an extremely high density. Under normal conditions, one can thus calculate the volume of phase space occupied by the gas, by dividing Equation 1 by N!. Using the Stirling approximation again for large N, ln(N!) ≈ N ln(N) − N, the entropy for large N is: which can be easily shown to be extensive. This is the Sackur–Tetrode equation. Mixing paradox A closely related paradox to the Gibbs paradox is the mixing paradox.
The Gibbs paradox is a special case of the "mixing paradox" which contains all the salient features. The difference is that the mixing paradox deals with arbitrary distinctions in the two gases, not just distinctions in particle ordering as Gibbs had considered. In this sense, it is a straightforward generalization of the argument laid out by Gibbs. Again take a box with a partition in it, with gas A on one side, gas B on the other side, and both gases are at the same temperature and pressure. If gases A and B are different, there is an entropy that arises once the gases are mixed, the entropy of mixing. If the gases are the same, no additional entropy is calculated. The additional entropy from mixing does not depend on the character of the gases; it only depends on the fact that the gases are different. The two gases may be arbitrarily similar, but the entropy from mixing does not disappear unless they are the same gas – a paradoxical discontinuity. This "paradox" can be explained by carefully considering the definition of entropy. In particular, as concisely explained by Edwin Thompson Jaynes, definitions of entropy are arbitrary. As a central example in Jaynes' paper points out, one can develop a theory that treats two gases as similar even if those gases may in reality be distinguished through sufficiently detailed measurement. As long as we do not perform these detailed measurements, the theory will have no internal inconsistencies. (In other words, it does not matter that we call gases A and B by the same name if we have not yet discovered that they are distinct.) If our theory calls gases A and B the same, then entropy does not change when we mix them. If our theory calls gases A and B different, then entropy does increase when they are mixed. This insight suggests that the ideas of "thermodynamic state" and of "entropy" are somewhat subjective. The differential increase in entropy (dS) as a result of mixing dissimilar gases, multiplied by the temperature (T), equals the minimum amount of work we must do to restore the gases to their original separated state. Suppose that two gases are different, but that we are unable to detect their differences. If these gases are in a box, segregated from one another by a partition, how much work does it take to restore the system's original state after we remove the partition and let the gases mix? None – simply reinsert the partition. Even though the gases have mixed, there was never a detectable change of state in the system, because by hypothesis the gases are experimentally indistinguishable. As soon as we can distinguish the difference between gases, the work necessary to recover the pre-mixing macroscopic configuration from the post-mixing state becomes nonzero. This amount of work does not depend on how different the gases are, but only on whether they are distinguishable. This line of reasoning is particularly informative when considering the concepts of indistinguishable particles and correct Boltzmann counting. Boltzmann's original expression for the number of states available to a gas assumed that a state could be expressed in terms of a number of energy "sublevels" each of which contain a particular number of particles. While the particles in a given sublevel were considered indistinguishable from each other, particles in different sublevels were considered distinguishable from one another.
This amounts to saying that the exchange of two particles in two different sublevels will result in a detectably different "exchange macrostate" of the gas. For example, if we consider a simple gas with N particles, at sufficiently low density that it is practically certain that each sublevel contains either one particle or none (i.e. a Maxwell–Boltzmann gas), this means that a simple container of gas will be in one of N! detectably different "exchange macrostates", one for each possible particle exchange. Just as the mixing paradox begins with two detectably different containers, and the extra entropy that results upon mixing is proportional to the average amount of work needed to restore that initial state after mixing, so the extra entropy in Boltzmann's original derivation is proportional to the average amount of work required to restore the simple gas from some "exchange macrostate" to its original "exchange macrostate". If we assume that there is in fact no experimentally detectable difference in these "exchange macrostates" available, then using the entropy which results from assuming the particles are indistinguishable will yield a consistent theory. This is "correct Boltzmann counting". It is often said that the resolution to the Gibbs paradox derives from the fact that, according to the quantum theory, like particles are indistinguishable in principle. By Jaynes' reasoning, if the particles are experimentally indistinguishable for whatever reason, the Gibbs paradox is resolved, and quantum mechanics only provides an assurance that in the quantum realm, this indistinguishability will be true as a matter of principle, rather than being due to an insufficiently refined experimental capability. Non-extensive entropy of two ideal gases and how to fix it In this section, we present in rough outline a purely classical derivation of the non-extensive entropy for an ideal gas considered by Gibbs before "correct counting" (indistinguishability of particles) is accounted for. This is followed by a brief discussion of two standard methods for making the entropy extensive. Finally, we present a third method, due to R. Swendsen, for an extensive (additive) result for the entropy of two systems if they are allowed to exchange particles with each other. Setup We will present a simplified version of the calculation. It differs from the full calculation in three ways: The ideal gas consists of particles confined to one spatial dimension. We keep only the terms of order n ln n, dropping all terms of size n or less, where n is the number of particles. For our purposes, this is enough, because this is where the Gibbs paradox shows up and where it must be resolved. The neglected terms play a role when the number of particles is not very large, such as in computer simulation and nanotechnology. Also, they are needed in deriving the Sackur–Tetrode equation. The subdivision of phase space into units of the Planck constant (h) is omitted. Instead, the entropy is defined using an integral over the "accessible" portion of phase space. This serves to highlight the purely classical nature of the calculation. We begin with a version of Boltzmann's entropy in which the integrand is all of accessible phase space: The integral is restricted to a contour of available regions of phase space, subject to conservation of energy. In contrast to the one-dimensional line integrals encountered in elementary physics, the contour of constant energy possesses a vast number of dimensions.
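To get a feel for these high-dimensional constant-energy surfaces, the following sketch (ours, not from the article) evaluates the standard hypersurface-volume formula for the (n − 1)-sphere, 2π^(n/2)R^(n−1)/Γ(n/2), which is the formula used in the next paragraphs:

import math

def sphere_surface(n, radius=1.0):
    # Hypersurface volume of the (n-1)-sphere embedded in n dimensions.
    return 2 * math.pi ** (n / 2) * radius ** (n - 1) / math.gamma(n / 2)

print(sphere_surface(2))  # circumference of the unit circle: 2*pi
print(sphere_surface(3))  # area of the ordinary unit sphere: 4*pi

# For fixed radius, the logarithm behaves like -(n/2) ln n for large n
# (Stirling); terms of this n ln n size are the ones tracked in the
# entropy calculation below.
for n in (10, 100, 1000):
    print(n, math.log(sphere_surface(n)))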
The justification for integrating over phase space using the canonical measure involves the assumption of equal probability. The assumption can be made by invoking the ergodic hypothesis as well as Liouville's theorem for Hamiltonian systems. (The ergodic hypothesis underlies the ability of a physical system to reach thermal equilibrium, but this may not always hold for computer simulations (see the Fermi–Pasta–Ulam–Tsingou problem) or in certain real-world systems such as non-thermal plasmas.) Liouville's theorem assumes a fixed number of dimensions that the system 'explores'. In calculations of entropy, the number of dimensions is proportional to the number of particles in the system, which forces phase space to abruptly change dimensionality when particles are added or subtracted. This may explain the difficulties in constructing a clear and simple derivation for the dependence of entropy on the number of particles. For the ideal gas, the accessible phase space is an (n − 1)-sphere (also called a hypersphere) in the n-dimensional velocity space, defined by the energy constraint $\tfrac{1}{2}\sum_{i=1}^{n} v_i^2 = E$. To recover the paradoxical result that entropy is not extensive, we integrate over phase space for a gas of monatomic particles confined to a single spatial dimension by $0 < x < \ell$. Since our only purpose is to illuminate a paradox, we simplify notation by taking the particle's mass and the Boltzmann constant equal to unity: $m = k_B = 1$. We represent points in phase space and its x and v parts by 2n- and n-dimensional vectors: $\xi = (\vec{x}, \vec{v})$, where $\vec{x} = (x_1, \dots, x_n)$ and $\vec{v} = (v_1, \dots, v_n)$. To calculate entropy, we use the fact that the (n − 1)-sphere, $\sum v_i^2 = R^2$, has an (n − 1)-dimensional "hypersurface volume" of $\tilde{A}_{n-1}(R) = \frac{2\pi^{n/2}}{\Gamma(n/2)}\,R^{n-1}$. For example, if n = 2, the 1-sphere is the circle $v_1^2 + v_2^2 = R^2$, a "hypersurface" in the plane. When the sphere is even-dimensional (n odd), it will be necessary to use the gamma function to give meaning to the factorial; see below. Gibbs paradox in a one-dimensional gas The Gibbs paradox arises when entropy is calculated using a $2n$-dimensional phase space, where $n$ is also the number of particles in the gas. These particles are spatially confined to the one-dimensional interval $(0, \ell)$. The volume of the surface of fixed energy is $\Omega_{E,\ell} = \underbrace{\ell^n}_{\text{configuration space}}\;\underbrace{\frac{2\pi^{n/2}}{\Gamma(n/2)}\,(2E)^{(n-1)/2}}_{\text{velocity space}}$. The subscripts on $\Omega$ are used to define the 'state variables' and will be discussed later, when it is argued that the number of particles, $n$, lacks full status as a state variable in this calculation. The integral over configuration space is $\ell^n$. As indicated by the underbrace, the integral over velocity space is restricted to the "surface area" of the $(n-1)$-dimensional hypersphere of radius $\sqrt{2E}$, and is therefore equal to the "area" of that hypersurface. Thus the entropy follows from the algebraic steps below. We begin with: $S_{E,\ell} = \ln \Omega_{E,\ell} = \ln(\ell^n) + \ln\!\left(\frac{2\pi^{n/2}}{\Gamma(n/2)}\,(2E)^{(n-1)/2}\right)$. Both terms on the right-hand side have dominant terms. Using the Stirling approximation for large M, $\ln M! \approx M\ln M - M$, we have: $\ln\Gamma(n/2) \approx \frac{n}{2}\ln\frac{n}{2} - \frac{n}{2}$. Terms are neglected if they exhibit less variation with a parameter, and we compare terms that vary with the same parameter. Entropy is defined with an additive arbitrary constant because the area in phase space depends on what units are used. For that reason it does not matter if entropy is large or small for a given value of E. We instead seek how entropy varies with its arguments, i.e., which terms dominate: an expression such as $\ln n$ is much less important than an expression like $n$, and an expression like $n$ is much less important than an expression like $n \ln n$. Note that the logarithm is not a strongly increasing function. The neglect of terms proportional to n compared with terms proportional to n ln n is only justified if n is extremely large.
Combining the important terms, approximating the factorial with Stirling's formula, and dropping the small terms, we obtain $S_{E,\ell} \approx n\ln\ell + \frac{n}{2}\ln E - \frac{n}{2}\ln n + \text{constant} = n\ln\frac{\ell}{n} + \frac{n}{2}\ln\frac{E}{n} + n\ln n + \text{constant}$. In the second expression, the term $n\ln n$ was subtracted and added, using the fact that $n\ln\ell - n\ln n = n\ln(\ell/n)$. This was done to highlight exactly how the "entropy" defined here fails to be an extensive property of matter. The first two terms are extensive: if the volume of the system doubles, but gets filled with the same density of particles with the same energy, then each of these terms doubles. But the third term, $n\ln n$, is neither extensive nor intensive and is therefore wrong. The arbitrary constant has been added because entropy can usually be viewed as being defined up to an arbitrary additive constant. This is especially necessary when entropy is defined as the logarithm of a phase space volume measured in units of momentum-position. Any change in how these units are defined will add or subtract a constant from the value of the entropy. Two standard ways to make the classical entropy extensive As discussed above, an extensive form of entropy is recovered if we divide the volume of phase space, $\Omega_{E,\ell}$, by $n!$. An alternative approach is to argue that the dependence on particle number cannot be trusted on the grounds that changing $n$ also changes the dimensionality of phase space. Such changes in dimensionality lie outside the scope of Hamiltonian mechanics and Liouville's theorem. For that reason it is plausible to allow the arbitrary constant to be a function of $n$. Defining the function to be $C(n) = -n\ln n$, we have: $S'_{E,\ell} = n\ln\frac{\ell}{n} + \frac{n}{2}\ln\frac{E}{n}$, which has extensive scaling: $S'(\alpha E, \alpha\ell, \alpha n) = \alpha\, S'(E, \ell, n)$.
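These scaling claims are easy to check numerically. The sketch below is ours, not part of the original derivation; it uses SciPy's log-gamma to evaluate $\ln\Omega_{E,\ell}$ exactly, with $m = k_B = 1$ as in the setup, then doubles $E$, $\ell$ and $n$ together:

```python
import numpy as np
from scipy.special import gammaln

def entropy_1d_gas(E, ell, n, correct_counting=False):
    """ln Omega_{E,l} for the 1D gas, Omega = l^n * (2 pi^(n/2)/Gamma(n/2)) * (2E)^((n-1)/2).

    With correct_counting=True the phase-space volume is divided by n!
    (i.e. ln n! is subtracted), implementing correct Boltzmann counting.
    """
    s = (n * np.log(ell) + np.log(2.0) + (n / 2) * np.log(np.pi)
         - gammaln(n / 2) + ((n - 1) / 2) * np.log(2 * E))
    if correct_counting:
        s -= gammaln(n + 1)  # ln n!
    return s

n, E, ell = 1000, 1000.0, 1000.0
for fix in (False, True):
    s1 = entropy_1d_gas(E, ell, n, fix)
    s2 = entropy_1d_gas(2 * E, 2 * ell, 2 * n, fix)
    # an extensive entropy should exactly double under this rescaling
    print(f"correct counting: {fix}, S(2E,2l,2n)/S(E,l,n) = {s2 / s1:.3f}")
```

With these inputs the ratio comes out near 2.16 without the correction and very close to 2.00 with it; the absolute values of the entropies are meaningless here because of the arbitrary additive constant, but the failure and restoration of extensive scaling are clear.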
Swendsen's particle-exchange approach Following Swendsen, we allow two systems to exchange particles. This essentially 'makes room' in phase space for particles to enter or leave without requiring a change in the number of dimensions of phase space. The total number of particles is $N$: $n_A$ particles have coordinates $(x, v)$; the total energy of these particles is $E_A$. $n_B$ particles have coordinates $(x', v')$; the total energy of these particles is $E_B$. The system is subject to the constraints $E_A + E_B = E$ and $n_A + n_B = N$. Taking the integral over phase space, we have: $\Omega = \binom{N}{n_A}\,\Omega^{(?)}_{E_A,\ell}\,\Omega^{(?)}_{E_B,\ell}$, where the factors $\Omega^{(?)}$ are phase-space volumes of the form found above for each subsystem. The question marks (?) serve as a reminder that we may not assume that the first $n_A$ particles (i.e. 1 through $n_A$) are in system A while the other particles ($n_A + 1$ through $N$) are in system B. (This is further discussed in the next section.) Taking the logarithm and keeping only the largest terms, we have: $S \approx n_A\ln\frac{\ell}{n_A} + \frac{n_A}{2}\ln\frac{E_A}{n_A} + n_B\ln\frac{\ell}{n_B} + \frac{n_B}{2}\ln\frac{E_B}{n_B} + N\ln N$. This can be interpreted as the sum of the entropy of system A and system B, both extensive. And there is a term, $N\ln N$, that is not extensive. Visualizing the particle-exchange approach in three dimensions The correct (extensive) formulas for systems A and B were obtained because we included all the possible ways that the two systems could exchange particles. The use of combinations (i.e. N particles choose $n_A$) was used to ascertain the number of ways N particles can be divided into system A containing $n_A$ particles and system B containing $n_B$ particles. This counting is not justified on physical grounds, but on the need to integrate over phase space. As will be illustrated below, phase space contains not a single $n_A$-sphere and a single $n_B$-sphere, but instead $\binom{N}{n_A}$ pairs of n-spheres, all situated in the same $N$-dimensional velocity space. The integral over accessible phase space must include all of these n-spheres, as can be seen in the figure, which shows the actual velocity phase space associated with a gas that consists of three particles. Moreover, this gas has been divided into two systems, A and B. If we ignore the spatial variables, the phase space of a gas with three particles is three-dimensional, which permits one to sketch the n-spheres over which the integral over phase space must be taken. If all three particles are together, the split between the two gases is 3|0. Accessible phase space is delimited by an ordinary sphere (2-sphere) with a radius that is either $\sqrt{2E_A}$ or $\sqrt{2E_B}$ (depending on which system has the particles). If the split is 2|1, then phase space consists of circles and points. Each circle occupies two dimensions, and for each circle, two points lie on the third axis, equidistant from the center of the circle. In other words, if system A has 2 particles, accessible phase space consists of 3 pairs of n-spheres, each pair being a 1-sphere and a 0-sphere: a circle $v_i^2 + v_j^2 = 2E_A$ together with the two points $v_k = \pm\sqrt{2E_B}$, one pair for each of the three choices of which particle is assigned to system B. References Further reading External links Gibbs paradox and its resolutions – varied collected papers Entropy Statistical mechanics Thermodynamics Particle statistics Physical paradoxes
Gibbs paradox
[ "Physics", "Chemistry", "Mathematics" ]
4,598
[ "Thermodynamic properties", "Physical quantities", "Particle statistics", "Quantity", "Entropy", "Thermodynamics", "Asymmetry", "Statistical mechanics", "Wikipedia categories named after physical quantities", "Symmetry", "Dynamical systems" ]
288,050
https://en.wikipedia.org/wiki/Particle-induced%20X-ray%20emission
Particle-Induced X-Ray Emission or Proton-Induced X-Ray Emission (PIXE) is a technique used for determining the elemental composition of a material or a sample. When a material is exposed to an ion beam, atomic interactions occur that give off radiation at wavelengths in the X-ray part of the electromagnetic spectrum specific to an element. PIXE is a powerful, yet non-destructive, elemental analysis technique now used routinely by geologists, archaeologists, art conservators and others to help answer questions of provenance, dating and authenticity. The technique was first proposed in 1970 by Sven Johansson of Lund University, Sweden, and developed over the next few years with his colleagues Roland Akselsson and Thomas B Johansson. Recent extensions of PIXE using tightly focused beams (down to 1 μm) give the additional capability of microscopic analysis. This technique, called microPIXE, can be used to determine the distribution of trace elements in a wide range of samples. A related technique, particle-induced gamma-ray emission (PIGE), can be used to detect some light elements. Additionally, a multiplexed instrument combines PIXE with mass spectrometry of molecules: PDI-PIXE-MS, or PIXE-MS for short (see below). Theory Three types of spectra can be collected from a PIXE experiment: X-ray emission spectrum. Rutherford backscattering spectrum. Proton transmission spectrum. X-ray emission Quantum theory states that orbiting electrons of an atom must occupy discrete energy levels in order to be stable. Bombardment with ions of sufficient energy (usually MeV protons) produced by an ion accelerator will cause inner-shell ionization of atoms in a specimen. Outer-shell electrons drop down to replace inner-shell vacancies; however, only certain transitions are allowed. X-rays of a characteristic energy of the element are emitted. An energy-dispersive detector is used to record and measure these X-rays. Only elements heavier than fluorine can be detected. The lower detection limit for a PIXE beam is given by the ability of the X-rays to pass through the window between the chamber and the X-ray detector. The upper limit is given by the ionisation cross section, the probability of K electron shell ionisation; this is maximal when the velocity of the proton matches the velocity of the electron (10% of the speed of light), so 3 MeV proton beams are optimal. Proton backscattering Protons can also interact with the nucleus of the atoms in the sample through elastic collisions, Rutherford backscattering, often repelling the proton at angles close to 180 degrees. The backscatter gives information on the sample thickness and composition. The bulk sample properties allow for the correction of X-ray photon loss within the sample. Proton transmission The transmission of protons through a sample can also be used to get information about the sample. Channeling is one of the processes that can be used to study crystals. Protein analysis Protein analysis using microPIXE allows for the determination of the elemental composition of liquid and crystalline proteins. microPIXE can quantify the metal content of protein molecules with a relative accuracy of between 10% and 20%. The advantage of microPIXE is that, given a protein of known sequence, the X-ray emission from sulfur can be used as an internal standard to calculate the number of metal atoms per protein monomer. Because only relative concentrations are calculated, there are only minimal systematic errors, and the results are totally internally consistent.
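As a sketch of how the sulfur internal standard is used in practice (the function, variable names and numbers below are hypothetical illustrations, not from a published protocol), the metal stoichiometry follows from the ratio of background-subtracted peak areas once the relative sensitivity of the two lines is known:

```python
def metals_per_monomer(metal_counts, sulfur_counts, rel_sensitivity, n_sulfur):
    """Estimate metal atoms per protein monomer from microPIXE peak areas.

    metal_counts, sulfur_counts -- background-subtracted X-ray peak areas
    rel_sensitivity -- sensitivity of the metal line relative to the sulfur
                       K line (from standards or calculated cross sections)
    n_sulfur -- sulfur atoms (Cys + Met) per monomer, known from the sequence
    """
    atoms_ratio = (metal_counts / sulfur_counts) / rel_sensitivity
    return atoms_ratio * n_sulfur

# A hypothetical protein with 8 sulfur atoms per monomer:
print(metals_per_monomer(metal_counts=1.2e4, sulfur_counts=9.5e4,
                         rel_sensitivity=0.51, n_sulfur=8))  # ~2 metals/monomer
```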
The relative concentrations of DNA to protein (and metals) can also be measured using the phosphate groups of the bases as an internal calibration. Data analysis Analysis of the data collected can be performed by the program Dan32, the front end to GUPIX. Limitations In order to get a meaningful sulfur signal from the analysis, the buffer should not contain sulfur (i.e. no BES, DTT, HEPES, MES, MOPSO or PIPES compounds). Excessive amounts of chlorine in the buffer should also be avoided, since this will overlap with the sulfur peak; KBr and NaBr are suitable alternatives. Due to the low penetration depth of protons and heavy charged particles, PIXE is limited to analyzing the top micrometer of a given sample. Advantages There are many advantages to using a proton beam over an electron beam. There is less crystal charging from Bremsstrahlung radiation; although there is some from the emission of Auger electrons, it is significantly less than if the primary beam were itself an electron beam. Because of the higher mass of protons relative to electrons, there is less lateral deflection of the beam; this is important for proton beam writing applications. Scanning Two-dimensional maps of elemental compositions can be generated by scanning the microPIXE beam across the target. Cell and Tissue Analysis Whole-cell and tissue analysis is possible using a microPIXE beam; this method is also referred to as nuclear microscopy. Artifact Analysis MicroPIXE is a useful technique for the non-destructive analysis of paintings and antiques. Although it provides only an elemental analysis, it can be used to distinguish and measure layers within the thickness of an artifact. The technique is comparable with destructive techniques such as the ICP family of analyses. Proton Beam Writing Proton beams can be used for writing (proton beam writing) either through the hardening of a polymer (by proton-induced cross-linking) or through the degradation of a proton-sensitive material. This may have important effects in the field of nanotechnology. Particle Desorption Ionization Particle Induced X-ray Emission Mass Spectrometry PDI-PIXE-MS This technique, PIXE-MS for short, combines PIXE with mass spectrometry of molecules. Elemental determinations are performed by PIXE with a heavy ion, such as oxygen, while simultaneously collecting the molecular ions for mass analysis in a quadrupole mass spectrometer or time-of-flight (TOF) instrument. ICP-MS only determines elemental constituents using mass spectrometry, not molecular information. Sequential scanning may be done with a hydrogen ion beam and then a heavy ion beam to desorb and ionize the analyte sample. This technique allows for the analysis of both the elemental constituents and, simultaneously, the molecular ions, or molecular speciation, present in a sample, using a heavy ion beam. This typically makes use of a 4 MeV accelerator, with samples prepared in glycerol on carbon felt.
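Tying back to the theory section, the element identification underlying all of these applications rests on the link between atomic number and characteristic X-ray energy, approximated by Moseley's law. The snippet below is a rough sketch (the 10.2 eV prefactor is the textbook Kα approximation, not a value taken from this article):

```python
def kalpha_energy_kev(z):
    """Approximate K-alpha line energy from Moseley's law: E ~ 10.2 eV * (Z - 1)^2."""
    return 10.2e-3 * (z - 1) ** 2

# Elements lighter than fluorine (Z = 9) emit below ~0.7 keV and are lost in
# the detector window, consistent with the stated detection limit; very heavy
# elements are usually identified from their lower-energy L lines instead.
for name, z in [("F", 9), ("S", 16), ("Fe", 26)]:
    print(f"{name} (Z={z}): ~{kalpha_energy_kev(z):.2f} keV")
```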
External links Examination of Leonardo da Vinci's Madonna of the Yarnwinder using PIXE Application of PIXE to the study of Renaissance style enameled gold jewelry https://www.researchgate.net/publication/359992537_PDI-PIXE-MS_Particle_Desorption_Ionization_Particle-Induced_X-Ray_Emission_Mass_Spectrometry https://www.researchgate.net/publication/360804974_PIXEMS_PNNL031209_-_Pacific_Northwest_National_Laboratory_Presentation_-_Particle_Desorption_Ionization_Particle-Induced_X-Ray_Emission_Mass_Spectrometry_-_PDI-PIXE-MS https://www.researchgate.net/publication/359992231_Sproch_N_Ashbaugh_MD_Morse_D_Grant_P_McIntyre_Jr_LC_Antolak_A_Fernando_Q_PDPIXE-MS_Particle_Desorption_Particle_Induced_X-ray_Emission_Mass_Spectrometry_Proceedings_of_the_49th_ASMS_Conference_on_Mass Surface science Experimental physics Experimental particle physics Nuclear physics Particle physics Protein methods Emission spectroscopy
Particle-induced X-ray emission
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
1,641
[ "Biochemistry methods", "Ion beam methods", "Spectrum (physical sciences)", "Protein methods", "Emission spectroscopy", "Protein biochemistry", "Surface science", "Experimental particle physics", "Experimental physics", "Particle physics", "Condensed matter physics", "Nuclear physics", "Spec...
288,209
https://en.wikipedia.org/wiki/Soft%20tissue
Soft tissue connects and surrounds or supports internal organs and bones, and includes muscle, tendons, ligaments, fat, fibrous tissue, lymph and blood vessels, fasciae, and synovial membranes. Soft tissue is tissue in the body that is not hardened by the processes of ossification or calcification, such as bones and teeth. It is sometimes defined by what it is not – such as "nonepithelial, extraskeletal mesenchyme exclusive of the reticuloendothelial system and glia". Composition The characteristic substances inside the extracellular matrix of soft tissue are collagen, elastin and ground substance. Normally the soft tissue is very hydrated because of the ground substance. The fibroblasts are the most common cells responsible for the production of soft tissues' fibers and ground substance. Variations of fibroblasts, like chondroblasts, may also produce these substances. Mechanical characteristics At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose (wavy, crimped). With increasing tissue deformation the collagen is gradually stretched in the direction of deformation. When taut, these fibers produce a strong growth in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose rubber band plays the role of elastin while the nylon plays the role of collagen. In soft tissues, the collagen limits the deformation and protects the tissues from injury. Human soft tissue is highly deformable, and its mechanical properties vary significantly from one person to another. Impact testing results showed that the stiffness and the damping resistance of a test subject's tissue are correlated with the mass, velocity, and size of the striking object. Such properties may be useful in forensic investigations of how contusions were induced. When a solid object impacts human soft tissue, the energy of the impact is absorbed by the tissues to reduce the effect of the impact or the pain level; subjects with thicker soft tissue tended to absorb impacts with less aversion. Soft tissues have the potential to undergo large deformations and still return to the initial configuration when unloaded, i.e. they are hyperelastic materials, and their stress-strain curve is nonlinear. The soft tissues are also viscoelastic, incompressible and usually anisotropic. Some viscoelastic properties observable in soft tissues are: relaxation, creep and hysteresis. In order to describe the mechanical response of soft tissues, several methods have been used. These methods include: hyperelastic macroscopic models based on strain energy, mathematical fits where nonlinear constitutive equations are used, and structurally based models where the response of a linear elastic material is modified by its geometric characteristics. Pseudoelasticity Even though soft tissues have viscoelastic properties, i.e. stress as a function of strain rate, they can be approximated by a hyperelastic model after preconditioning to a load pattern. After some cycles of loading and unloading the material, the mechanical response becomes independent of strain rate. Despite the independence of strain rate, preconditioned soft tissues still present hysteresis, so the mechanical response can be modeled as hyperelastic with different material constants for loading and unloading. In this way, elasticity theory is used to model an inelastic material. Fung has called this model pseudoelastic to point out that the material is not truly elastic.
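A common one-dimensional idealisation of this preconditioned, collagen-stiffening response (in the spirit of the Fung models discussed below; the constants here are purely illustrative, not measured tissue data) is the exponential law σ(ε) = A(e^{Bε} − 1), whose tangent stiffness grows in proportion to the stress already carried:

```python
import numpy as np

def stress(eps, a=1.0, b=20.0):
    """Exponential stress-strain law sigma = A * (exp(B * eps) - 1)."""
    return a * (np.exp(b * eps) - 1.0)

def tangent_stiffness(eps, a=1.0, b=20.0):
    """d(sigma)/d(eps) = A * B * exp(B * eps) = B * (sigma + A):
    stiffness rises as the crimped collagen fibers are recruited."""
    return a * b * np.exp(b * eps)

for eps in (0.01, 0.05, 0.10):
    print(f"strain {eps:.2f}: stress {stress(eps):7.2f}, stiffness {tangent_stiffness(eps):7.1f}")
```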
Residual stress In the physiological state, soft tissues usually present residual stress that may be released when the tissue is excised. Physiologists and histologists must be aware of this fact to avoid mistakes when analyzing excised tissues. This retraction usually causes a visual artifact. Fung-elastic material Fung developed a constitutive equation for preconditioned soft tissues, which is $W = \frac{1}{2}\left[q + c\,(e^{Q} - 1)\right]$, with $q$ and $Q$ quadratic forms of the Green–Lagrange strains $E_{ij}$, and with $c$ and the coefficients of $q$ and $Q$ material constants. $W$ is the strain energy function per unit volume, which is the mechanical strain energy for a given temperature. Isotropic simplification The Fung model simplifies under the isotropic hypothesis (same mechanical properties in all directions), and can then be written in terms of the principal stretches ($\lambda_1, \lambda_2, \lambda_3$) with only three material constants a, b and c. Simplification for small and big stretches For small strains, the exponential term is very small, thus negligible. On the other hand, the linear term is negligible when the analysis relies only on big strains. Gent-elastic material The Gent model takes the form $W = -\frac{\mu J_m}{2}\,\ln\!\left(1 - \frac{I_1 - 3}{J_m}\right)$, where $\mu$ is the shear modulus for infinitesimal strains and $J_m$ is a stiffening parameter, associated with limiting chain extensibility. This constitutive model cannot be stretched in uniaxial tension beyond a maximal stretch $\lambda_m$, which is the positive root of $\lambda_m^2 + \frac{2}{\lambda_m} = J_m + 3$. Remodeling and growth Soft tissues have the potential to grow and remodel in reaction to chemical and mechanical long-term changes. The rate at which the fibroblasts produce tropocollagen is proportional to these stimuli. Diseases, injuries and changes in the level of mechanical load may induce remodeling. An example of this phenomenon is the thickening of farmers' hands. The remodeling of connective tissues is well known in bones as Wolff's law (bone remodeling). Mechanobiology is the science that studies the relation between stress and growth at the cellular level. Growth and remodeling have a major role in the cause of some common soft tissue diseases, like arterial stenosis and aneurysms, and in any soft tissue fibrosis. Another instance of tissue remodeling is the thickening of the cardiac muscle in response to the growth of blood pressure detected by the arterial wall. Imaging techniques There are certain issues that have to be kept in mind when choosing an imaging technique for visualizing soft tissue extracellular matrix (ECM) components. The accuracy of the image analysis relies on the properties and the quality of the raw data and, therefore, the choice of the imaging technique must be based upon issues such as: Having an optimal resolution for the components of interest; Achieving high contrast of those components; Keeping the artifact count low; Having the option of volume data acquisition; Keeping the data volume low; Establishing an easy and reproducible setup for tissue analysis. The collagen fibers are approximately 1–2 μm thick. Thus, the resolution of the imaging technique needs to be approximately 0.5 μm. Some techniques allow the direct acquisition of volume data while others need the slicing of the specimen. In both cases, the volume that is extracted must be able to follow the fiber bundles across the volume. High contrast makes segmentation easier, especially when color information is available. In addition, the need for fixation must also be addressed. It has been shown that soft tissue fixation in formalin causes shrinkage, altering the structure of the original tissue.
Some typical values of contraction for different fixatives are: formalin (5%–10%), alcohol (10%), Bouin (<5%). Imaging methods used in ECM visualization differ in how well they meet these criteria. Clinical significance Soft tissue disorders are medical conditions affecting soft tissue. Soft tissue injuries are some of the most chronically painful and difficult conditions to treat, because it is very difficult to see what is going on under the skin with the soft connective tissues, fascia, joints, muscles and tendons. Musculoskeletal specialists, manual therapists, neuromuscular physiologists and neurologists specialize in treating injuries and ailments in the soft tissue areas of the body. These specialized clinicians often develop innovative ways to manipulate the soft tissue to speed natural healing and relieve the mysterious pain that often accompanies soft tissue injuries. This area of expertise has become known as soft tissue therapy and is rapidly expanding as technology continues to improve the ability of these specialists to identify problem areas. A promising new method of treating wounds and soft tissue injuries is via platelet-derived growth factor. There is a close overlap between the term "soft tissue disorder" and rheumatism. Sometimes the term "soft tissue rheumatic disorders" is used to describe these conditions. Soft tissue sarcomas are a group of cancers that can develop in the soft tissues. See also Biomaterial Biomechanics Davis's law Rheology References External links Biomechanics Tissues (biology) Continuum mechanics
Soft tissue
[ "Physics" ]
1,753
[ "Biomechanics", "Mechanics", "Classical mechanics", "Continuum mechanics" ]
288,216
https://en.wikipedia.org/wiki/Blue%20giant
In astronomy, a blue giant is a hot star with a luminosity class of III (giant) or II (bright giant). In the standard Hertzsprung–Russell diagram, these stars lie above and to the right of the main sequence. The term applies to a variety of stars in different phases of development, all evolved stars that have moved from the main sequence but have little else in common, so blue giant simply refers to stars in a particular region of the HR diagram rather than to a specific type of star. They are much rarer than red giants, because they only develop from more massive and less common stars, and because they have short lives in the blue giant stage. Because O-type and B-type stars with a giant luminosity classification are often somewhat more luminous than their normal main-sequence counterparts of the same temperatures, and because many of these stars are relatively near to Earth on the galactic scale of the Milky Way Galaxy, many of the bright stars in the night sky are examples of blue giants, including Beta Centauri (B1III); Mimosa (B0.5III); Bellatrix (B2III); Epsilon Canis Majoris (B2II); and Alpha Lupi (B1.5III) among others. The name blue giant is sometimes misapplied to other high-mass luminous stars, such as main-sequence stars, simply because they are large and hot. Properties Blue giant is not a strictly defined term and it is applied to a wide variety of different types of stars. They have in common a moderate increase in size and luminosity compared to main-sequence stars of the same mass or temperature, and are hot enough to be called blue, meaning spectral class O, B, and sometimes early A. Their temperatures exceed around 10,000 K, and they have zero age main sequence (ZAMS) masses greater than about twice the Sun (), and absolute magnitudes around 0 or brighter. These stars are only 5–10 times the radius of the Sun (), compared to red giants which are up to . The coolest and least luminous stars referred to as blue giants are on the horizontal branch, intermediate-mass stars that have passed through a red giant phase and are now burning helium in their cores. Depending on mass and chemical composition, these stars gradually move bluewards until they exhaust the helium in their cores, and then they return redwards to the asymptotic giant branch (AGB). The RR Lyrae variable stars, usually with spectral types of A, lie across the middle of the horizontal branch. Horizontal-branch stars hotter than the RR Lyrae gap are generally considered to be blue giants, and sometimes the RR Lyrae stars themselves are called blue giants despite some of them being F class. The hottest of these blue horizontal branch (BHB) stars, known as extreme horizontal branch (EHB) stars, can be hotter than main-sequence stars of the same luminosity. In these cases they are called blue subdwarf (sdB) stars rather than blue giants, named for their position to the left of the main sequence on the HR diagram rather than for their increased luminosity and temperature compared to when they were themselves main-sequence stars. There are no strict upper limits for giant stars, but early O types become increasingly difficult to classify separately from main-sequence and supergiant stars, having almost identical sizes and temperatures to the main-sequence stars from which they develop, and very short lifetimes. A good example is Plaskett's star, a close binary consisting of two O-type giants both over , with temperatures over 30,000 K and more than 100,000 times the luminosity of the Sun (). Astronomers still differ over whether to classify at least one of the stars as a supergiant, based on subtle differences in the spectral lines.
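Quoted figures like these can be cross-checked with the Stefan–Boltzmann law, L = 4πR²σT⁴. The sketch below is ours; the input radius and temperature are example values from the ranges above, not data for any particular star:

```python
import math

SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN, R_SUN = 3.828e26, 6.957e8  # IAU nominal solar luminosity (W) and radius (m)

def luminosity_solar(radius_rsun, t_eff_k):
    """L / L_sun from the Stefan-Boltzmann law, L = 4 pi R^2 sigma T^4."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r**2 * SIGMA * t_eff_k**4 / L_SUN

# A star at the large, hot end of the blue-giant range quoted above:
print(f"{luminosity_solar(10, 30_000):.3g} L_sun")  # ~7e4 L_sun
```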
Evolution Stars found in the blue giant region of the HR diagram can be in very different stages of their lives, but all are evolved stars that have largely exhausted their core hydrogen supplies. In the simplest case, a hot luminous star begins to expand as its core hydrogen is exhausted, first becoming a blue subgiant and then a blue giant, becoming both cooler and more luminous. Intermediate-mass stars will continue to expand and cool until they become red giants. Massive stars also continue to expand as hydrogen shell burning progresses, but they do so at approximately constant luminosity and move horizontally across the HR diagram. In this way they can quickly pass through the blue giant, bright blue giant, blue supergiant and yellow supergiant classes, until they become red supergiants. The luminosity class for such stars is determined from spectral lines that are sensitive to the surface gravity of the star, with more expanded and luminous stars being given I (supergiant) classifications while somewhat less expanded and luminous stars are given luminosity II or III. Because they are massive stars with short lives, many blue giants are found in O–B associations, which are large collections of loosely bound young stars. BHB stars are more evolved and have helium-burning cores, although they still have an extensive hydrogen envelope. They also have moderate masses, around , so they are often much older than more massive blue giants. The BHB takes its name from the prominent horizontal grouping of stars seen on colour-magnitude diagrams for older clusters, where core helium-burning stars of the same age are found at a variety of temperatures with roughly the same luminosity. These stars also evolve through the core helium-burning stage at constant luminosity, first increasing in temperature and then decreasing again as they move toward the AGB. However, the blue end of the horizontal branch forms a "blue tail" of stars with lower luminosity, and occasionally a "blue hook" of even hotter stars. There are other highly evolved hot stars not generally referred to as blue giants: Wolf–Rayet stars, highly luminous and distinguished by their extreme temperatures and prominent helium and nitrogen emission lines; post-AGB stars forming planetary nebulae, similar to Wolf–Rayet stars but smaller and less massive; blue stragglers, uncommon luminous blue stars observed apparently on the main sequence in clusters where main-sequence stars of their luminosity should have evolved into giants or supergiants; and the true blue supergiants, the most massive stars evolved beyond blue giants and identified by the effects of greater expansion on their spectra. A purely theoretical group of stars could be formed when red dwarfs finally exhaust their core hydrogen trillions of years into the future. These stars are convective throughout their depth and are expected to very slowly increase both their temperature and luminosity as they accumulate more and more helium, until eventually they cannot sustain fusion and quickly collapse to white dwarfs. Although these stars can become hotter than the Sun, they will never become more luminous, so they are hardly blue giants as we see them today. The name blue dwarf has been coined, although that name could easily be confusing. References Stellar phenomena Giant stars
Blue giant
[ "Physics" ]
1,438
[ "Physical phenomena", "Stellar phenomena" ]
288,262
https://en.wikipedia.org/wiki/CSIRO
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is an Australian Government agency that is responsible for scientific research and its commercial and industrial applications. CSIRO works with leading organisations around the world. From its headquarters in Canberra, CSIRO maintains more than 50 sites across Australia and in France, Chile and the United States, employing about 5,500 people. Federally funded scientific research in Australia began in 1916 with the creation of the Advisory Council of Science and Industry. However, the council struggled due to insufficient funding. In 1926, research efforts were revitalised with the establishment of the Council for Scientific and Industrial Research (CSIR), which strengthened national science leadership and increased research funding. CSIR grew rapidly, achieving significant early successes. In 1949, legislative changes led to the renaming of the organisation as Commonwealth Scientific and Industrial Research Organisation (CSIRO). Notable developments by CSIRO have included the invention of atomic absorption spectroscopy, essential components of the early Wi-Fi technology, development of the first commercially successful polymer banknote, the invention of the insect repellent Aerogard and the introduction of a series of biological controls into Australia, such as the introduction of myxomatosis and rabbit calicivirus for the control of rabbit populations. Structure CSIRO is governed by a board appointed by the Australian Government, currently chaired by Kathryn Fagg. There are eight directors inclusive of the chief executive, presently Doug Hilton, who are responsible for management of the organisation. Research and focus areas CSIRO is structured into Research Business Units, National Facilities and Collections, and Services. Research Business Units As at 2023, CSIRO's research areas are identified as "Impact science" and organised into the following Business Units: Agriculture and Food Health and Biosecurity Data61 Energy Manufacturing Mineral Resources and Environment (being the amalgamation of the former Land and Water and Oceans & Atmosphere BUs) National facilities and collections National facilities CSIRO manages national research facilities and scientific infrastructure on behalf of the nation to assist with the delivery of research. The national facilities and specialised laboratories are available to both international and Australian users from industry and research. As at 2019, the following National Facilities are listed: Australian Centre for Disease Preparedness (ACDP) Australia Telescope National Facility – radio telescopes included in the Facility include the Australia Telescope Compact Array, the Parkes Observatory, Mopra Observatory and the Australian Square Kilometre Array Pathfinder Canberra Deep Space Communication Complex Energy Centre and National Solar Energy Centre Marine National Facility (R.V. "Investigator") New Norcia ground station NovaSAR-1 satellite Pawsey Supercomputing Centre Collections CSIRO manages a number of collections of animal and plant specimens that contribute to national and international biological knowledge. The National Collections contribute to taxonomic, genetic, agricultural and ecological research. 
As at 2019, CSIRO's Collections are listed as the following: Australian National Algae Culture Collection The Atlas of Living Australia Australian Tree Seed Centre Australian National Fish Collection Australian National Insect Collection Australian National Herbarium Australian National Soil Archive (managed through A&F) Australian National Wildlife Collection Cape Grim Air Archive Services In 2019, CSIRO Services are itemised as follows: Materials and infrastructure services Agricultural and environmental analysis Environmental services Biological, food and medical science services Australian Animal Health Laboratory services Other services are noted as including education, publishing, infrastructure technologies, Small and Medium Enterprise engagement and CSIRO Futures. History Evolution of the organisation A precursor to CSIRO, the Advisory Council of Science and Industry, was established in 1916 on the initiative of prime minister Billy Hughes. However, the advisory council struggled with insufficient funding during the First World War. In 1920 the council was renamed the Commonwealth Institute of Science and Industry, and was led by George Handley Knibbs (1921–26), but continued to struggle financially. Implementing the 1923 Imperial Conference's call for colonies to broaden their economic base, in 1926 the Australian Parliament modified the principal Act for national scientific research (the Institute of Science and Industry Act 1920) by passing The Science and Industry Research Act 1926. The same conference led to the creation of the Department of Scientific and Industrial Research in New Zealand. The new Act replaced the institute with the Council for Scientific and Industrial Research (CSIR). With encouragement from prime minister Stanley Bruce, strengthened national science leadership and increased research funding, CSIR grew rapidly and achieved significant early successes. The council was structured to represent the federal structure of government in Australia, and had state-level committees and a central council. In addition to an improved structure, CSIR benefited from strong bureaucratic management under George Julius, David Rivett, and Arnold Richardson. Research focused on primary and secondary industries. Early in its existence, CSIR established divisions studying animal health and animal nutrition. After the Great Depression, research was extended into manufacturing and other secondary industries. In 1949 the Act was changed again, and the entity name amended to the Commonwealth Scientific and Industrial Research Organisation. The amendment enlarged and reconstituted the organisation and its administrative structure. Under Ian Clunies Ross as chairman, CSIRO pursued new areas such as radio astronomy and industrial chemistry. CSIRO still operates under the provisions of the 1949 Act, now across a wide range of scientific inquiry. Participation by women in CSIRO research was severely limited by the Australian government policy, in place until 1966, forcing women public servants out of their jobs when they married. Even unmarried women were considered a poor investment because they might eventually marry. Single women such as Helen Newton Turner nevertheless made major contributions. Since 1949, CSIRO has expanded its activities to almost every field of primary, secondary and tertiary industry, including the environment, human nutrition, conservation, urban and rural planning, and water.
It works with leading organisations around the world and maintains more than 50 sites across Australia and in France, Chile and the United States of America, employing about 5500 people. Inventions Notable inventions and breakthroughs by CSIRO include: A4 DSP chip Aerogard, insect repellent Atomic absorption spectroscopy Biological control of Salvinia Development of Linola (a flax variety with low alpha-linolenic acid content) with a longer life used as a stockfeed Distance measuring equipment (DME) used for aviation navigation Gene shears Interscan Microwave landing system, a microwave approach and landing system for aircraft Use of myxomatosis and calicivirus to control rabbit numbers Parkes Radio Telescope The permanent pleat for fabrics Plasma sintering Polymer banknote Production of metals from their halides Relenza flu drug Sirosmelt lance "Softly" woollens detergent Phase-contrast X-ray imaging Method to use titanium in 3D printing UltraBattery Essential components of Wi-Fi technology Zebedee - Mobile Handheld 3D Lidar Mapping technology Historic research CSIRO had a pioneering role in the scientific discovery of the universe through radio "eyes". A team led by Paul Wild built and operated (from 1948) the world's first solar radiospectrograph, and from 1967 the radioheliograph at Culgoora in New South Wales. For three decades, the Division of Radiophysics had a world-leading role in solar research, attracting prominent solar physicists from around the world. CSIRO owned the first computer in Australia, CSIRAC, built as part of a project begun in the Sydney Radiophysics Laboratory in 1947. The CSIR Mk 1 ran its first program in 1949, making it the fifth electronic computer in the world. It was over 1,000 times faster than the mechanical calculators available at the time. It was decommissioned in 1955 and recommissioned in Melbourne as CSIRAC in 1956 as a general purpose computing machine used by over 700 projects until 1964. The CSIRAC is the only surviving first-generation computer in the world. Between 1965 and 1985, George Bornemissza of CSIRO's Division of Entomology founded and led the Australian Dung Beetle Project. Bornemissza, upon settling in Australia from Hungary in 1951, noticed that the pastureland was covered in dry cattle dung pads which did not seem to be recycled into the soil and caused areas of rank pasture which were unpalatable to the cattle. He proposed that the reason for this was that native Australian dung beetles, which had co-evolved alongside the marsupials (which produce dung very different in its composition from that of cattle), were not adapted to utilise cattle dung for their nutrition and breeding, since cattle had only relatively recently been introduced to the continent in the 1880s. The Australian Dung Beetle Project sought, therefore, to introduce species of dung beetle from South Africa and Europe (which had co-evolved alongside bovids) in order to improve the fertility and quality of cattle pastures. Twenty-three species were successfully introduced throughout the duration of the project and also had the effect of reducing the pestilent bush fly population by 90%. Domain name CSIRO was the first Australian organisation to start using the Internet and was able to register the second-level domain csiro.au (as opposed to csiro.org.au or csiro.com.au). Guidelines were introduced in 1996 to regulate the use of the .au domain.
Governance and management When CSIR was formed in 1926, it was led initially by an executive committee of three people, two of whom were designated as the chairman and the chief executive. Since then the roles and responsibilities of the chair and chief executive have changed many times. From 1927 to 1986 the head of CSIR (and from 1949, CSIRO) was the chairman, who was responsible for the management of the organisation, supported by the chief executive. From 1 July 1959 to 4 December 1986 CSIRO had no chief executive; the chairman undertook both functions. In 1986, when the Australian Government changed the structure of CSIRO to include a board of non-executive members plus the chief executive to lead CSIRO, the roles changed. The chief executive is now responsible for management of the organisation in accordance with the strategy, plans and policies approved by the CSIRO Board which, led by the chair of the board, is responsible to the Australian Government for the overall strategy, governance and performance of CSIRO. As with its governance structure, the priorities and structure of CSIRO, and the teams and facilities that implement its research, have changed as Australia's scientific challenges have evolved. Numerous CSIRO scientists have gone on to distinguished careers in the university sector. Several have been appointed to the role of Vice-Chancellor/President. They include: Sir George Currie (UNZ 1952–62, Western Australia 1945–52), Paul Wellings CBE (Wollongong 2012–21, Lancaster 2002–12), Michael Barber AO (Flinders 2008–14), Mark Smith CBE (Southampton 2019–ff, Lancaster 2012–19), Annabelle Duncan (UNE 2014–19), Attila Brungs (UNSW 2021–ff, UTS 2014–21), Alex Zelinsky AO (Newcastle 2018–ff), Andrew Parfitt (UTS 2021–ff), Chris Moran (UNE 2023–ff). Chairs Chief executives Controversies Total Wellbeing Diet In 2005 the CSIRO gained worldwide attention, including some criticism, for promoting a high-protein, low-carbohydrate diet of their own creation called the Total Wellbeing Diet. The CSIRO published the diet in a book which sold over half a million copies in Australia and over 100,000 overseas. The diet was criticised in an editorial by Nature for giving scientific credence to a "fashionable" diet sponsored by the meat and dairy industries. 802.11 patent In the early 1990s, CSIRO radio astronomy scientists John O'Sullivan, Graham Daniels, Terence Percival, Diethelm Ostry and John Deane undertook research directed at finding a way to make wireless networks work as fast as wired networks within confined spaces such as office buildings. The technique they developed, involving a particular combination of forward error correction, frequency-domain interleaving, and multi-carrier modulation, became the subject of , which was granted on 23 January 1996. In 1997 Macquarie University professor David Skellern and his colleague Neil Weste established the company Radiata, Inc., which took a nonexclusive licence to the CSIRO patent for the purpose of developing commercially viable integrated circuit devices implementing the patented technology. During this period, the IEEE 802.11 Working Group was developing the 802.11a wireless LAN standard. CSIRO did not participate directly in the standards process; however, David Skellern was an active participant as secretary of the Working Group, and representative of Radiata. In 1998 it became apparent that the CSIRO patent would be pertinent to the standard.
In response to a request from Victor Hayes of Lucent Technologies, who was chair of the 802.11 Working Group, CSIRO confirmed its commitment to make non-exclusive licenses available to implementers of the standard on reasonable and non-discriminatory terms. In 1999, Cisco Systems, Inc. and Broadcom Corporation each invested A$4 million in Radiata, representing an 11% stake for each investor and valuing the company at around A$36 million. In September 2000, Radiata demonstrated a chip set complying with the recently finalised IEEE 802.11a Wi-Fi standard, and capable of handling transmission rates of up to 54 Mbit/s, at a major international exhibition. In November 2000, Cisco acquired Radiata in exchange for US$295 million in Cisco common stock with the intention of incorporating the Radiata Baseband Processor and Radio chips into its Aironet family of wireless LAN products. Cisco subsequently took a large write-down on the Radiata acquisition, following the 2001 telecoms crash, and in 2004 it shut down its internal development of wireless chipsets based on the Radiata technology in order to focus on software development and emerging new technologies. Controversy over the CSIRO patent arose in 2006 after the organisation won an injunction against Buffalo Technology in an infringement suit filed in Federal Court in the Eastern District of Texas. The injunction was subsequently suspended on appeal, with the Court of Appeals for the Federal Circuit finding that the judge in Texas should have allowed a trial to proceed on Buffalo's challenge to the validity of the CSIRO patent. In 2007, CSIRO declined to provide an assurance to the IEEE that it would not sue companies which refused to take a license for use in 802.11n-compliant devices, while at the same time continuing to defend legal challenges to the validity of the patent brought by Intel, Dell, Microsoft, Hewlett-Packard and Netgear. In April 2009, Hewlett-Packard broke ranks with the rest of the industry becoming the first to reach a settlement of its dispute with CSIRO. This agreement was followed quickly by settlements with Microsoft, Fujitsu and Asus and then Dell, Intel, Nintendo, Toshiba, Netgear, Buffalo, D-Link, Belkin, SMC, Accton, and 3Com. The controversy grew after CSIRO sued US carriers AT&T, Verizon and T-Mobile in 2010, with the organisation being accused of being "Australia's biggest patent troll", a wrathful "patent bully", and of imposing a "WiFi tax" on American innovation. Further fuel was added to the controversy after a settlement with the carriers, worth around $229 million, was announced in March 2012. Encouraged in part by an announcement by the Australian Minister for Tertiary Education, Skills Science and Research, Senator Chris Evans, an article in Ars Technica portrayed CSIRO as a shadowy organisation responsible for US consumers being compelled to make "a multimillion dollar donation" on the basis of a questionable patent claiming "decades old" technology. The resulting debate became so heated that the author was compelled to follow up with a defence of the original article. An alternative view was also published on The Register, challenging a number of the assertions made in the Ars Technica piece. Total income to CSIRO from the patent is currently estimated at nearly $430 million. On 14 June 2012, the CSIRO inventors received the European Patent Office (EPO) European Inventor Award (EIA), in the category of "Non-European Countries". 
Genetically modified wheat trials On 14 July 2011, Greenpeace activists vandalised a crop of GM wheat, compromising the scientific trials being undertaken. Greenpeace was forced to pay reparations to CSIRO of $280,000 for the criminal damage, and was accused by the sentencing judge, Justice Hilary Penfold, of cynically using junior members of the organisation with good standing to avoid custodial sentences, while the offenders were given 9-month suspended sentences. Following the attack Greenpeace criticised CSIRO for a close relationship with industry that had led to an increase in genetically modified crops, even though a core aim of CSIRO is Cooperative Research "working hand in hand with industry [to] build partnerships and engage with industry to generate impact". Climate change censorship: Clive Spash On 25 November 2009, a debate was held in the Australian Senate concerning the alleged involvement of the CSIRO and the Labor government in censorship. The debate was called for by opposition parties after evidence came to light that a paper critical of carbon emissions trading was being suppressed. At the time, the Labor government was trying to get such a scheme through the Senate. After the debate, the Science Minister, Kim Carr, was forced to release the paper, but when doing so in the Senate he also delivered a letter from the CEO of the CSIRO, Megan Clark, which attacked the report's author and threatened him with unspecified punishment. The author of the paper, Clive Spash, was cited in the press as having been bullied and harassed, and later gave a radio interview about this. In the midst of the affair, CSIRO management had considered releasing the paper with edits that Nature reported would be "tiny". Spash claimed the changes actually demanded amounted to censorship and resigned. He later posted on his website a document detailing the text that CSIRO management demanded be deleted; by itself, this document forms a coherent set of statements criticising emissions trading without any additional wording needed. In subsequent Senate Estimates hearings during 2010, Senator Carr and Clark went on record claiming the paper was originally stopped from publication solely due to its low quality not meeting CSIRO standards. At the time of its attempted suppression, the paper had been accepted for publication in an academic journal, New Political Economy, which in 2010 had been ranked by the Australian Research Council as an 'A class' publication. In an ABC radio interview, Spash called for a Senate enquiry into the affair and the role played by senior management and the Science Minister. After these events, the Sydney Morning Herald reported that "Questions are being raised about the closeness of BHP Billiton and the CSIRO under its chief executive, Megan Clark". After his resignation, an unedited version of the paper was released by Spash as a discussion paper, and later published as an academic journal article. CSIRO–Novartis–DataTrace scandal On 11 April 2013, the Sydney Morning Herald ran a story on how CSIRO had "duped" the Swiss-based pharmaceutical giant Novartis into purchasing an anti-counterfeit technology for its vials of injectable Voltaren. The invention was marketed by a small Australian company called DataTrace DNA as a method of identifying fake vials, on the basis that a unique tracer code developed by CSIRO was embedded in the product. However, the code sold to Novartis for more than A$2M was apparently not unique, and was based on a "cheap tracer ...
bought in bulk from a Chinese distributor". Novartis was contractually bound not to reverse-engineer the tracer to verify its uniqueness. The Sydney Morning Herald report alleges that this was done with the knowledge of key CSIRO personnel. CSIRO has since conducted a full review of the allegations and found no evidence to support them. Alleged bullying, harassment and victimisation Around 2008–2012, CSIRO came under the spotlight for allegedly exhibiting a culture of workplace bullying and harassment. Former CSIRO employees started to surface with experiences of workplace bullying and other unreasonable behaviour by current and former CSIRO staff members. CSIRO took the allegations seriously and responded to the articles on a number of occasions. The shadow minister for innovation, industry, science and research, Sophie Mirabella, wrote to the government requesting it establish an inquiry. Mirabella said she was aware of as many as 100 cases of alleged workplace harassment. On 20 July 2012 Comcare issued CSIRO with an Improvement Notice with regard to handling and management of workplace misconduct/code of conduct type investigations and allegations. On 24 June 2013 Mirabella advised the Australian House of Representatives that, in relation to the workers' compensation claim for psychological injuries of ex-CSIRO employee Martin Williams, which was vigorously defended by Comcare on the advice of the CSIRO, CSIRO officers had provided false testimony on no fewer than 128 occasions under oath when the matter went before the Administrative Appeals Tribunal. Mirabella stated, "even in establishing the framework for this inquiry it is obvious there's an inappropriate 'hands on' approach by CSIRO." In response to the allegations Clark commissioned Dennis Pearce, assisted by an investigation team from HWL Ebsworth Lawyers, to conduct an independent investigation into allegations of workplace bullying and other unreasonable behaviour. Mirabella continued to question the independence of the investigation. The first stage of the investigation published its findings at the end of July 2013, and the final stage was scheduled to be complete by February 2014. Following the Pearce Report, CSIRO overhauled its relevant policies and put in place training and whistleblower procedures to address the situation. CSIRO and climate change In August 2015 the CSIRO discontinued its annual July and August survey, conducted over the previous five years, which polled Australians to build a long-term view of how they viewed global warming and their support for action. In the previous 2013 poll, 86 per cent agreed with the statement that climate change was occurring and only 7.6 per cent disagreed. On 11 February 2016, Dr Larry Marshall – a former venture capitalist with Southern Cross Venture Holdings who had been appointed CEO of the CSIRO on 1 January 2015 – caused an international outcry after describing Australia's national climate change discussion as "more like religion than science," a week after announcing hundreds of job cuts to the organisation that would reduce the effectiveness of its climate research team. In "an open letter to the Australian Government and CSIRO", 2,800 of the leading climate scientists from 60 countries say the announcement of cuts to the CSIRO's Oceans and Atmosphere research program has alarmed the global climate research community.
They say the decision shows a lack of insight and a misunderstanding of the depth and significance of Australian contributions to global and regional climate research. The CSIRO has been the target of successive funding cuts, starting with cuts targeting climate science research initiated under Tony Abbott and continuing under the Morrison government. Trademark dispute with Cisco In 2015, Cisco Systems filed a trademark infringement lawsuit against CSIRO, claiming that the colours and style of CSIRO's logo were too similar to Cisco's. An Australian court ruled in CSIRO's favour and ordered Cisco to pay CSIRO's court costs. See also Australia Telescope National Facility Australian Animal Health Laboratory Australian Bird and Bat Banding Scheme Australian Dung Beetle Project Australian Space Research Institute Backing Australia's Ability Biosecurity in Australia Cooperative Research Centres Council for Scientific and Industrial Research – Ghana Council of Scientific and Industrial Research, India Council for Scientific and Industrial Research, South Africa CSIRO Oceans and Atmosphere CSIRO Publishing Defence Science and Technology Group Fraunhofer Society, Germany George Bornemissza Goyder Institute for Water Research, a research collaboration with universities and SA government Parkes Observatory Peter Rathjen SINTEF, Norway Susan Wijffels Netherlands Organisation for Applied Scientific Research Waste management in Australia Yingjie Jay Guo Notes References External links CSIRO US website CSIROpedia Official CSIRO history site Commonwealth of Australia. Commonwealth Scientific and Industrial Research Organisation (CSIRO). (1949–) National Library of Australia, Trove, People and Organisation record for CSIRO Commonwealth of Australia. Council for Scientific and Industrial Research (CSIR). (1926–1949) National Library of Australia, Trove, People and Organisation record for CSIR Australian e-Health Research Centre (AeHRC) Centre for Liveability Real Estate Issues Scientific organisations based in Australia Space programme of Australia Atmospheric dispersion modeling Research institutes in Australia Forest research institutes Life sciences industry Industry in Australia Organisations based in Canberra Research institutes established in 1916 1916 establishments in Australia Robotics in Australia
CSIRO
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
5,027
[ "Environmental modelling", "Atmospheric dispersion modeling", "Life sciences industry", "Environmental engineering" ]
288,291
https://en.wikipedia.org/wiki/Ricci%20flow
In the mathematical fields of differential geometry and geometric analysis, the Ricci flow, sometimes also referred to as Hamilton's Ricci flow, is a certain partial differential equation for a Riemannian metric. It is often said to be analogous to the diffusion of heat and the heat equation, due to formal similarities in the mathematical structure of the equation. However, it is nonlinear and exhibits many phenomena not present in the study of the heat equation. The Ricci flow, so named for the presence of the Ricci tensor in its definition, was introduced by Richard Hamilton, who used it through the 1980s to prove striking new results in Riemannian geometry. Later extensions of Hamilton's methods by various authors resulted in new applications to geometry, including the resolution of the differentiable sphere conjecture by Simon Brendle and Richard Schoen. Following the possibility that the singularities of solutions of the Ricci flow could identify the topological data predicted by William Thurston's geometrization conjecture, Hamilton produced a number of results in the 1990s which were directed towards the conjecture's resolution. In 2002 and 2003, Grigori Perelman presented a number of fundamental new results about the Ricci flow, including a novel variant of some technical aspects of Hamilton's program. Perelman's work is now widely regarded as forming the proof of the Thurston conjecture and the Poincaré conjecture, regarded as a special case of the former. It should be emphasized that the Poincaré conjecture had been a well-known open problem in the field of geometric topology since 1904. These results by Hamilton and Perelman are considered a milestone in the fields of geometry and topology. Mathematical definition On a smooth manifold M, a smooth Riemannian metric g automatically determines the Ricci tensor Ric(g). For each point p of M, by definition g(p) is a positive-definite inner product on the tangent space at p. If given a one-parameter family of Riemannian metrics g(t), one may then consider the derivative ∂g(t)/∂t, which then assigns to each particular value of t and p a symmetric bilinear form on the tangent space at p. Since the Ricci tensor of a Riemannian metric also assigns to each p a symmetric bilinear form, the following definition is meaningful. Given a smooth manifold M and an open real interval (a, b), a Ricci flow assigns, to each t in the interval (a, b), a Riemannian metric g(t) on M such that ∂g(t)/∂t = −2 Ric(g(t)). The Ricci tensor is often thought of as an average value of the sectional curvatures, or as an algebraic trace of the Riemann curvature tensor. However, for the analysis of existence and uniqueness of Ricci flows, it is extremely significant that the Ricci tensor can be defined, in local coordinates, by a formula involving the first and second derivatives of the metric tensor. This makes the Ricci flow into a geometrically-defined partial differential equation. The analysis of the ellipticity of the local coordinate formula provides the foundation for the existence of Ricci flows; see the following section for the corresponding result. Let α be a nonzero number. Given a Ricci flow g(t) on an interval (a, b), consider the reparametrized family G(t) = g(αt), defined for t between a/α and b/α. Then ∂G(t)/∂t = −2α Ric(G(t)), since the Ricci tensor is unchanged under this reparametrization. So, with this very trivial change of parameters, the number −2 appearing in the definition of the Ricci flow could be replaced by any other nonzero number. For this reason, the use of −2 can be regarded as an arbitrary convention, albeit one which essentially every paper and exposition on Ricci flow follows. 
The only significant difference is that if −2 were replaced by a positive number, then the existence theorem discussed in the following section would become a theorem which produces a Ricci flow that moves backwards (rather than forwards) in parameter values from initial data. The parameter t is usually called "time", although this is only as part of standard informal terminology in the mathematical field of partial differential equations. It is not physically meaningful terminology. In fact, in the standard quantum field theoretic interpretation of the Ricci flow in terms of the renormalization group, the parameter corresponds to length or energy, rather than time. Normalized Ricci flow Suppose that M is a compact smooth manifold, and let g(t) be a Ricci flow for t in the interval (a, b). Define c(t) > 0 so that each of the Riemannian metrics c(t)g(t) has volume 1; this is possible since M is compact. (More generally, it would be possible if each Riemannian metric g(t) had finite volume.) Then define s(t) to be the antiderivative of c(t) which vanishes at a. Since c(t) is positive-valued, s is a bijection onto its image (0, S). Now the Riemannian metrics G(s) = c(t(s))g(t(s)), defined for parameters s in (0, S), satisfy ∂G/∂s = −2 Ric(G) + (2/n) r(G) G. Here n denotes the dimension of M and r(G) denotes the average value of the scalar curvature of G. This is called the normalized Ricci flow equation. Thus, with an explicitly defined change of scale and a reparametrization of the parameter values, a Ricci flow can be converted into a normalized Ricci flow. The converse also holds, by reversing the above calculations. The primary reason for considering the normalized Ricci flow is that it allows a convenient statement of the major convergence theorems for Ricci flow. However, it is not essential to do so, and for virtually all purposes it suffices to consider Ricci flow in its standard form. Moreover, the normalized Ricci flow is not generally meaningful on noncompact manifolds. Existence and uniqueness Let M be a smooth closed manifold, and let g0 be any smooth Riemannian metric on M. Making use of the Nash–Moser implicit function theorem, Hamilton (1982) showed the following existence theorem: There exists a positive number T and a Ricci flow g(t) parametrized by t in (0, T) such that g(t) converges to g0 in the C∞ topology as t decreases to 0. He showed the following uniqueness theorem: If g(t) and h(t) are two Ricci flows as in the above existence theorem, then g(t) = h(t) for all t at which both are defined. The existence theorem provides a one-parameter family of smooth Riemannian metrics. In fact, any such one-parameter family also depends smoothly on the parameter. Precisely, this says that relative to any smooth coordinate chart U on M, the component functions gij: (0, T) × U → R are smooth for any i and j. Dennis DeTurck subsequently gave a proof of the above results which uses the Banach implicit function theorem instead. His work is essentially a simpler Riemannian version of Yvonne Choquet-Bruhat's well-known proof and interpretation of well-posedness for the Einstein equations in Lorentzian geometry. As a consequence of Hamilton's existence and uniqueness theorem, when given the data (M, g0), one may speak unambiguously of the Ricci flow on M with initial data g0, and one may select T to take on its maximal possible value, which could be infinite. The principle behind virtually all major applications of Ricci flow, in particular in the proof of the Poincaré conjecture and geometrization conjecture, is that, as t approaches this maximal value, the behavior of the metrics g(t) can reveal and reflect deep information about M. Convergence theorems Complete expositions of the following convergence theorems are given in the standard references. The three-dimensional result is due to Hamilton (1982). 
Hamilton's proof, inspired by and loosely modeled upon James Eells and Joseph Sampson's epochal 1964 paper on convergence of the harmonic map heat flow, included many novel features, such as an extension of the maximum principle to the setting of symmetric 2-tensors. His paper (together with that of Eells−Sampson) is among the most widely cited in the field of differential geometry. Expositions of his result appear in the standard references. In terms of the proof, the two-dimensional case is properly viewed as a collection of three different results, one for each of the cases in which the Euler characteristic of M is positive, zero, or negative. As demonstrated by Hamilton (1988), the negative case is handled by the maximum principle, while the zero case is handled by integral estimates; the positive case is more subtle, and Hamilton dealt with the subcase in which g0 has positive curvature by combining a straightforward adaptation of Peter Li and Shing-Tung Yau's gradient estimate to the Ricci flow together with an innovative "entropy estimate". The full positive case was demonstrated by Bennett Chow, in an extension of Hamilton's techniques. Since any Ricci flow on a two-dimensional manifold is confined to a single conformal class, it can be recast as a partial differential equation for a scalar function on the fixed Riemannian manifold (M, g0). As such, the Ricci flow in this setting can also be studied by purely analytic methods; correspondingly, there are alternative non-geometric proofs of the two-dimensional convergence theorem. The higher-dimensional case has a longer history. Soon after Hamilton's breakthrough result, Gerhard Huisken extended his methods to higher dimensions, showing that if g0 almost has constant positive curvature (in the sense of smallness of certain components of the Ricci decomposition), then the normalized Ricci flow converges smoothly to constant curvature. Hamilton (1986) found a novel formulation of the maximum principle in terms of trapping by convex sets, which led to a general criterion relating convergence of the Ricci flow of positively curved metrics to the existence of "pinching sets" for a certain multidimensional ordinary differential equation. As a consequence, he was able to settle the case in which M is four-dimensional and g0 has positive curvature operator. Twenty years later, Christoph Böhm and Burkhard Wilking found a new algebraic method of constructing "pinching sets", thereby removing the assumption of four-dimensionality from Hamilton's result (Böhm & Wilking 2008). Simon Brendle and Richard Schoen showed that positivity of the isotropic curvature is preserved by the Ricci flow on a closed manifold; by applying Böhm and Wilking's method, they were able to derive a new Ricci flow convergence theorem (Brendle & Schoen 2009). Their convergence theorem included as a special case the resolution of the differentiable sphere theorem, which at the time had been a long-standing conjecture. The convergence theorem given above is due to Brendle (2008), which subsumes the earlier higher-dimensional convergence results of Huisken, Hamilton, Böhm & Wilking, and Brendle & Schoen. Corollaries The results in dimensions three and higher show that any smooth closed manifold M which admits a metric of the given type must be a space form of positive curvature. Since these space forms are largely understood by work of Élie Cartan and others, one may draw corollaries such as the following: Suppose that M is a smooth closed 3-dimensional manifold which admits a smooth Riemannian metric of positive Ricci curvature. If M is simply-connected then it must be diffeomorphic to the 3-sphere. 
So if one could show directly that any smooth closed simply-connected 3-dimensional manifold admits a smooth Riemannian metric of positive Ricci curvature, then the Poincaré conjecture would immediately follow. However, as matters are understood at present, this result is only known as a (trivial) corollary of the Poincaré conjecture, rather than vice versa. Possible extensions Given any n larger than two, there exist many closed n-dimensional smooth manifolds which do not have any smooth Riemannian metrics of constant curvature. So one cannot hope to be able to simply drop the curvature conditions from the above convergence theorems. It could be possible to replace the curvature conditions by some alternatives, but the existence of compact manifolds such as complex projective space, which has a metric of nonnegative curvature operator (the Fubini–Study metric) but no metric of constant curvature, makes it unclear how much these conditions could be pushed. Likewise, the possibility of formulating analogous convergence results for negatively curved Riemannian metrics is complicated by the existence of closed Riemannian manifolds whose curvature is arbitrarily close to constant and yet admit no metrics of constant curvature. Li–Yau inequalities Making use of a technique pioneered by Peter Li and Shing-Tung Yau for parabolic differential equations on Riemannian manifolds, Hamilton (1993) proved the following "Li–Yau inequality". Let M be a smooth manifold, and let g(t) be a solution of the Ricci flow for t in (0, T) such that each g(t) is complete with bounded curvature. Furthermore, suppose that each g(t) has nonnegative curvature operator. Then, for any curve γ with γ(t1) = x1 and γ(t2) = x2, where 0 < t1 ≤ t2, one has R(x2, t2) ≥ (t1/t2) exp(−L/2) R(x1, t1), where L denotes the integral of |γ′(t)|² from t1 to t2, computed using the metrics g(t). Perelman (2002) showed the following alternative Li–Yau inequality. Let M be a smooth closed n-manifold, and let g(t) be a solution of the Ricci flow. Consider the backwards heat equation for n-forms; given a point p of M and a time t0, consider the particular solution which, upon integration, converges weakly to the Dirac delta measure at p as t increases to t0. Then, for any curve γ with γ(t0) = p, one has d/dt f(γ(t), t) ≤ (1/2)(R(γ(t), t) + |γ′(t)|²), where the solution has been written in the form (4π(t0 − t))^(−n/2) e^(−f). Both of these remarkable inequalities are of profound importance for the proof of the Poincaré conjecture and geometrization conjecture. The terms on the right hand side of Perelman's Li–Yau inequality motivate the definition of his "reduced length" functional, the analysis of which leads to his "noncollapsing theorem". The noncollapsing theorem allows application of Hamilton's compactness theorem (Hamilton 1995) to construct "singularity models", which are Ricci flows on new three-dimensional manifolds. Owing to the Hamilton–Ivey estimate, these new Ricci flows have nonnegative curvature. Hamilton's Li–Yau inequality can then be applied to see that the scalar curvature is, at each point, a nondecreasing (nonnegative) function of time. This is a powerful result that allows many further arguments to go through. In the end, Perelman shows that any of his singularity models is asymptotically like a complete gradient shrinking Ricci soliton, which are completely classified; see the section on Ricci solitons below. See the references for details on Hamilton's Li–Yau inequality; standard textbooks contain expositions of both inequalities above. Examples Constant-curvature and Einstein metrics Let (M, g0) be a Riemannian manifold which is Einstein, meaning that there is a number λ such that Ric(g0) = λg0. Then g(t) = (1 − 2λt)g0 is a Ricci flow with g(0) = g0, since then ∂g(t)/∂t = −2λg0 = −2 Ric(g0) = −2 Ric(g(t)), using the fact that the Ricci tensor is unchanged when the metric is multiplied by a positive constant. If M is closed, then according to Hamilton's uniqueness theorem above, this is the only Ricci flow with initial data g0. (A short numerical sketch of the model case of the shrinking round sphere is given below.) 
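To make the Einstein case concrete, the following minimal numerical sketch (an illustration written for this discussion, not drawn from the sources) integrates the Ricci flow of the round n-sphere of initial radius r0. For g = r² gS, with gS the unit round metric, one has Ric(g) = ((n − 1)/r²) g, so the flow reduces to the scalar ODE d(r²)/dt = −2(n − 1), with exact solution r(t)² = r0² − 2(n − 1)t and extinction time T = r0²/(2(n − 1)).

```python
# Minimal sketch: Ricci flow of the round n-sphere of radius r(t).
# For g = r^2 * g_{S^n(1)} one has Ric(g) = ((n - 1)/r^2) g, so the
# flow dg/dt = -2 Ric(g) reduces to the scalar ODE d(r^2)/dt = -2(n - 1).

def sphere_ricci_flow(n, r0, steps=1000):
    """Forward-Euler integration of ds/dt = -2(n-1), with s = r^2."""
    T = r0**2 / (2 * (n - 1))      # exact extinction time
    dt = 0.9 * T / steps           # stop just short of the singularity
    s = r0**2
    t = 0.0
    for _ in range(steps):
        s += dt * (-2 * (n - 1))   # Euler step (exact here: the RHS is constant)
        t += dt
    exact = r0**2 - 2 * (n - 1) * t
    return t, s, exact, T

t, s_num, s_exact, T = sphere_ricci_flow(n=3, r0=1.0)
print(f"t = {t:.4f}, numeric r^2 = {s_num:.6f}, exact r^2 = {s_exact:.6f}")
print(f"extinction time T = {T:.4f}")   # T = 0.25 for n = 3, r0 = 1
```

Since the right-hand side of the ODE is constant, the Euler steps reproduce the exact linear decay of r²; the point of the sketch is only to exhibit the finite-time singularity in the case λ > 0 discussed next.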
One sees, in particular, that: if λ is positive, then the Ricci flow "contracts" g0, since the scale factor 1 − 2λt is less than 1 for positive t; furthermore, one sees that t can only be less than 1/(2λ), in order that g(t) is a Riemannian metric. This is the simplest example of a "finite-time singularity". if λ is zero, which is synonymous with g0 being Ricci-flat, then g(t) is independent of time, and so the maximal interval of existence is the entire real line. if λ is negative, then the Ricci flow "expands" g0, since the scale factor 1 − 2λt is greater than 1 for all positive t; furthermore one sees that t can be taken arbitrarily large. One says that the Ricci flow, for this initial metric, is "immortal". In each case, since the Riemannian metrics assigned to different values of t differ only by a constant scale factor, one can see that the normalized Ricci flow exists for all time and is constant in its parameter; in particular, it converges smoothly (to its constant value) as the parameter tends to infinity. The Einstein condition has as a special case that of constant curvature; hence the particular examples of the sphere (with its standard metric) and hyperbolic space appear as special cases of the above. Ricci solitons Ricci solitons are Ricci flows that may change their size but not their shape up to diffeomorphisms. Cylinders Sk × Rl (for k ≥ 2) shrink self-similarly under the Ricci flow up to diffeomorphisms. A significant 2-dimensional example is the cigar soliton, which is given by the metric (dx2 + dy2)/(e4t + x2 + y2) on the Euclidean plane. Although this metric shrinks under the Ricci flow, its geometry remains the same. Such solutions are called steady Ricci solitons. An example of a 3-dimensional steady Ricci soliton is the Bryant soliton, which is rotationally symmetric, has positive curvature, and is obtained by solving a system of ordinary differential equations. A similar construction works in arbitrary dimension. There exist numerous families of Kähler manifolds, invariant under a U(n) action and birational to Cn, which are Ricci solitons. These examples were constructed by Cao and Feldman–Ilmanen–Knopf. (Chow–Knopf 2004) A 4-dimensional example exhibiting only torus symmetry was recently discovered by Bamler–Cifarelli–Conlon–Deruelle. A gradient shrinking Ricci soliton consists of a smooth Riemannian manifold (M,g) and f ∈ C∞(M) such that Ric(g) + Hess(f) = λg for some positive constant λ. One of the major achievements of Perelman (2002) was to show that, if M is a closed three-dimensional smooth manifold, then finite-time singularities of the Ricci flow on M are modeled on complete gradient shrinking Ricci solitons (possibly on underlying manifolds distinct from M). In 2008, Huai-Dong Cao, Bing-Long Chen, and Xi-Ping Zhu completed the classification of these solitons, showing: Suppose (M,g,f) is a complete gradient shrinking Ricci soliton with dim(M) = 3. If M is simply-connected then the Riemannian manifold (M,g) is isometric to R3, S3, or S2 × R, each with their standard Riemannian metrics. This was originally shown by Perelman with some extra conditional assumptions. Note that if M is not simply-connected, then one may consider the universal cover, and the above theorem then applies to the pulled-back soliton structure there. There is not yet a good understanding of gradient shrinking Ricci solitons in any higher dimensions. Relationship to uniformization and geometrization Hamilton's first work on Ricci flow was published at the same time as William Thurston's geometrization conjecture, which concerns the topological classification of three-dimensional smooth manifolds. 
Hamilton's idea was to define a kind of nonlinear diffusion equation which would tend to smooth out irregularities in the metric. Suitable canonical forms had already been identified by Thurston; the possibilities, called Thurston model geometries, include the three-sphere S3, three-dimensional Euclidean space E3, three-dimensional hyperbolic space H3, which are homogeneous and isotropic, and five slightly more exotic Riemannian manifolds, which are homogeneous but not isotropic. (This list is closely related to, but not identical with, the Bianchi classification of the three-dimensional real Lie algebras into nine classes.) Hamilton succeeded in proving that any smooth closed three-manifold which admits a metric of positive Ricci curvature also admits a unique Thurston geometry, namely a spherical metric, which does indeed act like an attracting fixed point under the Ricci flow, renormalized to preserve volume. (Under the unrenormalized Ricci flow, the manifold collapses to a point in finite time.) However, this doesn't prove the full geometrization conjecture, because of the restrictive assumption on curvature. Indeed, a triumph of nineteenth-century geometry was the proof of the uniformization theorem, the analogous classification of smooth two-manifolds. In this two-dimensional setting, Hamilton showed that the Ricci flow does indeed evolve a negatively curved two-manifold into a two-dimensional multi-holed torus which is locally isometric to the hyperbolic plane. This topic is closely related to important topics in analysis, number theory, dynamical systems, mathematical physics, and even cosmology. Note that the term "uniformization" suggests a kind of smoothing away of irregularities in the geometry, while the term "geometrization" suggests placing a geometry on a smooth manifold. Geometry is being used here in a precise manner akin to Klein's notion of geometry (see Geometrization conjecture for further details). In particular, the result of geometrization may be a geometry that is not isotropic. In most cases, including the cases of constant curvature, the geometry is unique. An important theme in this area is the interplay between real and complex formulations. In particular, many discussions of uniformization speak of complex curves rather than real two-manifolds. Singularities Hamilton showed that a compact Riemannian manifold always admits a short-time Ricci flow solution. Later Shi generalized the short-time existence result to complete manifolds of bounded curvature. In general, however, due to the highly non-linear nature of the Ricci flow equation, singularities form in finite time. These singularities are curvature singularities, which means that as one approaches the singular time the norm of the curvature tensor blows up to infinity in the region of the singularity. A fundamental problem in Ricci flow is to understand all the possible geometries of singularities. When successful, this can lead to insights into the topology of manifolds. For instance, analyzing the geometry of singular regions that may develop in 3d Ricci flow is the crucial ingredient in Perelman's proof of the Poincaré and geometrization conjectures. Blow-up limits of singularities To study the formation of singularities it is useful, as in the study of other non-linear differential equations, to consider blow-up limits. Intuitively speaking, one zooms into the singular region of the Ricci flow by rescaling time and space. 
Under certain assumptions, the zoomed-in flow tends to a limiting Ricci flow, called a singularity model. Singularity models are ancient Ricci flows, i.e. they can be extended infinitely into the past. Understanding the possible singularity models in Ricci flow is an active research endeavor. Below, we sketch the blow-up procedure in more detail: Let (M, g(t)) be a Ricci flow that develops a singularity as t approaches T. Let (xi, ti) be a sequence of points in spacetime such that ti → T and Ki := |Rm|(xi, ti) → ∞ as i → ∞. Then one considers the parabolically rescaled metrics gi(t) = Ki g(ti + t/Ki). Due to the symmetry of the Ricci flow equation under parabolic dilations, the metrics gi(t) are also solutions to the Ricci flow equation. In the case that Ki equals the maximum of the curvature up to time ti, i.e. up to time ti the maximum of the curvature is attained at (xi, ti), then the pointed sequence of Ricci flows (M, gi(t), xi) subsequentially converges smoothly to a limiting ancient Ricci flow (M∞, g∞(t), x∞). Note that in general M∞ is not diffeomorphic to M. Type I and Type II singularities Hamilton distinguishes between Type I and Type II singularities in Ricci flow. In particular, one says a Ricci flow (M, g(t)), encountering a singularity at time T, is of Type I if the quantity (T − t)·sup |Rm| remains bounded as t approaches T. Otherwise the singularity is of Type II. It is known that the blow-up limits of Type I singularities are gradient shrinking Ricci solitons. In the Type II case it is an open question whether the singularity model must be a steady Ricci soliton; so far all known examples are. Singularities in 3d Ricci flow In 3d the possible blow-up limits of Ricci flow singularities are well-understood. From the work of Hamilton, Perelman and Brendle, blowing up at points of maximum curvature leads to one of the following three singularity models: The shrinking round spherical space form The shrinking round cylinder The Bryant soliton The first two singularity models arise from Type I singularities, whereas the last one arises from a Type II singularity. Singularities in 4d Ricci flow In four dimensions very little is known about the possible singularities, other than that the possibilities are far more numerous than in three dimensions. To date the following singularity models are known: The 4d Bryant soliton Compact Einstein manifold of positive scalar curvature Compact gradient Kähler–Ricci shrinking soliton The FIK shrinker (discovered by M. Feldman, T. Ilmanen, D. Knopf) The BCCD shrinker (discovered by Richard Bamler, Charles Cifarelli, Ronan Conlon, and Alix Deruelle) Note that the first three examples are generalizations of 3d singularity models. The FIK shrinker models the collapse of an embedded sphere with self-intersection number −1. Relation to diffusion To see why the evolution equation defining the Ricci flow is indeed a kind of nonlinear diffusion equation, we can consider the special case of (real) two-manifolds in more detail. Any metric tensor on a two-manifold can be written with respect to an exponential isothermal coordinate chart in the form ds2 = exp(2 p(x, y)) (dx2 + dy2). (These coordinates provide an example of a conformal coordinate chart, because angles, but not distances, are correctly represented.) The easiest way to compute the Ricci tensor and Laplace-Beltrami operator for our Riemannian two-manifold is to use the differential forms method of Élie Cartan. Take the coframe field e1 = exp(p) dx, e2 = exp(p) dy, so that the metric tensor becomes ds2 = e1 ⊗ e1 + e2 ⊗ e2. Next, given an arbitrary smooth function f, compute the exterior derivative df. Take the Hodge dual. Take another exterior derivative (where we used the anti-commutative property of the exterior product). 
That is, Taking another Hodge dual gives which gives the desired expression for the Laplace/Beltrami operator To compute the curvature tensor, we take the exterior derivative of the covector fields making up our coframe: From these expressions, we can read off the only independent spin connection one-form where we have taken advantage of the anti-symmetric property of the connection (). Take another exterior derivative This gives the curvature two-form from which we can read off the only linearly independent component of the Riemann tensor using Namely from which the only nonzero components of the Ricci tensor are From this, we find components with respect to the coordinate cobasis, namely But the metric tensor is also diagonal, with and after some elementary manipulation, we obtain an elegant expression for the Ricci flow: This is manifestly analogous to the best known of all diffusion equations, the heat equation where now is the usual Laplacian on the Euclidean plane. The reader may object that the heat equation is of course a linear partial differential equation—where is the promised nonlinearity in the p.d.e. defining the Ricci flow? The answer is that nonlinearity enters because the Laplace-Beltrami operator depends upon the same function p which we used to define the metric. But notice that the flat Euclidean plane is given by taking . So if is small in magnitude, we can consider it to define small deviations from the geometry of a flat plane, and if we retain only first order terms in computing the exponential, the Ricci flow on our two-dimensional almost flat Riemannian manifold becomes the usual two dimensional heat equation. This computation suggests that, just as (according to the heat equation) an irregular temperature distribution in a hot plate tends to become more homogeneous over time, so too (according to the Ricci flow) an almost flat Riemannian manifold will tend to flatten out the same way that heat can be carried off "to infinity" in an infinite flat plate. But if our hot plate is finite in size, and has no boundary where heat can be carried off, we can expect to homogenize the temperature, but clearly we cannot expect to reduce it to zero. In the same way, we expect that the Ricci flow, applied to a distorted round sphere, will tend to round out the geometry over time, but not to turn it into a flat Euclidean geometry. Recent developments The Ricci flow has been intensively studied since 1981. Some recent work has focused on the question of precisely how higher-dimensional Riemannian manifolds evolve under the Ricci flow, and in particular, what types of parametric singularities may form. For instance, a certain class of solutions to the Ricci flow demonstrates that neckpinch singularities will form on an evolving -dimensional metric Riemannian manifold having a certain topological property (positive Euler characteristic), as the flow approaches some characteristic time . In certain cases, such neckpinches will produce manifolds called Ricci solitons. For a 3-dimensional manifold, Perelman showed how to continue past the singularities using surgery on the manifold. Kähler metrics remain Kähler under Ricci flow, and so Ricci flow has also been studied in this setting, where it is called Kähler–Ricci flow. Notes References Articles for a popular mathematical audience. Research articles. Erratum. Revised version: Textbooks External links 1981 introductions 3-manifolds Geometric flow Partial differential equations Riemannian geometry Riemannian manifolds
Ricci flow
[ "Mathematics" ]
5,807
[ "Riemannian manifolds", "Space (mathematics)", "Metric spaces" ]
289,407
https://en.wikipedia.org/wiki/Computer%20art%20scene
The computer art scene, or simply artscene, is the community interested and active in the creation of computer-based artwork. Early computer art The history of computer art predates the computer art scene by several decades, with the first experiments having taken place in the early 1950s. Devices like plotters and teletypewriters were commonly used instead of video display screens. The earliest precursors to ASCII art can be found in RTTY art, that is, pictures created by amateur radio enthusiasts with teleprinters using the Baudot code. In the early days of microcomputers, what could be shown on a typical video display screen was limited to plain and simple text, such as that found in the ASCII code set. In the early 1980s, users of IBM PC compatible computers began to experiment with ways of forming simple pictures and designs using only the 255 characters within the Extended ASCII character set, specifically known as code page 437, created by IBM. Modems and networking technology allowed computer users to communicate with each other over bulletin board systems (BBSes); the operators of these BBSes used ASCII art to enhance the aesthetic appearance of their systems. The common user interface or video mode shared by all systems was plain text. As a result, a "scene" of artists arose to fill the need for original art to distinguish one BBS from another. Evolving technology At home At a time when IBM PC compatibles were limited to monochrome graphics or the four preset colors of the Color Graphics Adapter, the Atari 8-bit computers had a palette of 128 colors and could display 4-8 of those at once, or many more with custom programming. The Commodore 64 could display 16 fixed colors. In 1985, the Amiga arrived with the ability to display 4096-color graphics (in its low-resolution HAM mode) whose output could be exported via the NTSC standard. This capability was used by Disney animators in movies such as The Little Mermaid and by TV producers in shows such as SeaQuest and Babylon 5. Online As computer technology developed, the American National Standards Institute X3 committee invented a standard method of terminal control using escape sequences called "ANSI X3.64-1979". This protocol allowed for text and cursor positioning as well as defining foreground and background color attributes for the text. Eventually, text artists began incorporating this new level of flexibility into the existing medium of ASCII art by adding color to their text-based art, or animating their art by manipulating the cursor control codes. This is what is commonly referred to today as "ANSI art", which is used in many scene .nfo files. A decade later, the popularity of ANSI art had increased significantly (largely due to the similarly increasing interest in the BBS) and ANSI artists began to form "groups", not unlike graffiti "crews". The first ANSI group was called Aces of ANSI Art (AAA). Though no official founding date can be established for this group, its earliest surviving tribute packs are dated December 1991 and include art dating back to 1989. Other groups like ACiD (ANSI Creators in Demand) and iCE (Insane Creators Enterprises) quickly began to spring up. Beginning in June 1992, these groups would release their work in monthly "artpacks", which were collections of ANSI and ASCII art submitted by the groups' various members, as well as news and membership lists. These artpacks were then spread far and wide by BBS users. Some of the same groups from the 1990s still exist today; their art is now primarily distributed using the internet. 
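To make the ANSI X3.64 mechanism described above concrete, the following minimal Python sketch emits a few of the standard escape sequences: SGR ("select graphic rendition") codes for color, and the cursor-positioning code that ANSI "animations" rely on. The sequences shown are part of the published standard, though how faithfully they render depends on the terminal.

```python
# A few ANSI X3.64 escape sequences of the kind BBS-era ANSI art relies on.
# ESC[<params>m selects graphic rendition (SGR); ESC[<row>;<col>H moves the cursor.
ESC = "\x1b["

def sgr(*codes):
    """Build a Select Graphic Rendition sequence, e.g. sgr(1, 31) -> bold red."""
    return ESC + ";".join(str(c) for c in codes) + "m"

RESET = sgr(0)

# 30-37 are foreground colors, 40-47 backgrounds, 1 is bold/bright.
print(sgr(1, 33, 44) + "ANSI art: bright yellow on blue" + RESET)
print(sgr(31) + "red" + RESET, sgr(32) + "green" + RESET, sgr(36) + "cyan" + RESET)

# Cursor positioning (row;column, 1-indexed) lets ANSI animations redraw in place.
print(ESC + "2J", end="")        # clear screen
print(ESC + "3;10H" + sgr(35) + "drawn at row 3, column 10" + RESET)
```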
A later method of transmitting graphics over a BBS was developed called Remote Imaging Protocol or RIP, which required special software on both the BBS and the terminal end. RIP was still basically text, but the text referred to the positions of lines, curves, fills, and other steps in drawing graphics on an EGA display of 640x350x16 colors. While RIP never caught on in the BBS world, the art scene embraced it as a form of expression, if not a viable method of displaying art on a BBS. VGA to present day In 1987 IBM introduced the VGA card. Early VGA graphics were "high resolution" images, generally using an 8-bit depth (256 colors) and a resolution of 320x200x256, 360x480x256 (hacked Mode X), or 640x480x16 colors. VGA was not intended to be displayed via a BBS and the vast majority of the early works in the IBM PC artscene were distributed as coded executables called "loaders" or "intros" rather than raw bitmap images. In fact, it was considered to be "lame" to release an uncoded VGA work of art from the early- to mid-1990s, a sure indication that your group was not skilled enough to retain a worthy programmer. The advent of custom image viewers developed by groups within the artscene, such as ACiD View and iCEView, began to shift the perception of how VGA art should be distributed and what the accepted practice should be. A coded VGA which did not take any of the advantages of being an executable, like special effects or music, became viewed as an impractical use of disk space—all of this in turn spawned a number of competing image viewers, and even "Viewer Wars" between rival art groups. Talented underground artists such as CatBones continued to help pioneer and define what is now referred to as the "hirez artscene", further championing the move away from coded VGA to stand-alone imagery with his impressive artwork. Hirez today implies higher resolutions than before, such as a 1024x768 pixel canvas or larger, greater depth of color, and is created with much more sophisticated and modern software. Underground status Despite the fact that contributors to the artscene can be found worldwide, the scene remains detached from mainstream bbs and internet culture. This can be seen as a result of the artscene's early affiliations with hacker and software piracy (warez) organizations. As early demoscene groups were organized by cracktros coders, artscene members were often found designing the .nfo files detailing warez releases. In addition much of the ansi art provided for warez BBSes was drawn by future members of the artscene. Prior to the popularity of the internet in the 1990s, the most efficient way to distribute software and files across BBSes was via a courier system. Both the warez scene and the artscene utilized this system, and in many cases warez couriers could be found distributing monthly artpacks. In addition to connection that the various underground groups had, a common attitude and relationship between scene members developed. The general belief that "newbies are lame" and "veterans are elite", as well as the use of leetspeek, created an environment that was sometimes difficult for new members to enter. In particular, many artsceners' distrust and bitterness towards new America Online users in the 1990s may have eroded the possibility for a wider membership base and audience for the artscene. See also ANSI art ASCII art Pixel art Netart Digital art Software art Demoscene DeviantArt List of artscene groups Minor artscene groups References Bibliography Danet, Brenda. 
"Cyberpl@y: Communicating Online". Oxford, UK: Berg Publishers, 2001. . "Dark Domain: the artpacks.acid.org collection" (DVD-ROM). San Jose, CA, USA: ACiD Productions, LLC, 2004. . Scott, Jason. "BBS: The Documentary" (DVD). Boston, MA, USA: Bovine Ignition Systems, 2005. Zetter, Kim. "How Humble BBS Begat Wired World". Wired News. June 8, 2005. Retrieved October 27, 2005. Wands, Bruce (2006). Art of the Digital Age, London: Thames & Hudson. . External links Examples of ANSI Artwork artscene.textfiles.com, The artscene branch of the textfiles.com library. darkdomain.org, Dark Domain (2004). An archive on DVD which hosts a complete collection of underground artscene works between 1987-2003. Published by ACiD Productions. . Cleaner Alternative Museum Cleaner's ASCii/ANSi galleries. Roy/SAC Text Artist- Superior Art Creations, Information about ASCII Art Styles, SAC Art Packs Download Sixteen Colors ANSI Art and ASCII Art Archive - A web viewable archive of current and past ANSI and ASCII packs released by the computer art scene More on the History of the Art Scene BBS: The Documentary Episode 5 documents the rise of the Art Scene Organizations still in Operation Defacto2 Scene Portal Scene Art Groups and Sites Listing DepthCore international digital art & design group BreedArt, an international art group in operation since 2001, one of the innovators of the scene Downmix Current computer art scene news and releases Evoke: An international design group primarily for young and developing artists. The Luminarium international artgroup. SlashTHREE: A not-for-profit international art collective representing artists in over 40 countries world wide. Utilities Ansilove/PHP A set of tools for converting ANSi/BiN/ADF/iDF/TUNDRA/XBiN files into PNG images Computer art Computing culture
Computer art scene
[ "Technology" ]
1,937
[ "Computing culture", "Computing and society" ]
289,450
https://en.wikipedia.org/wiki/Risch%20algorithm
In symbolic computation, the Risch algorithm is a method of indefinite integration used in some computer algebra systems to find antiderivatives. It is named after the American mathematician Robert Henry Risch, a specialist in computer algebra who developed it in 1968. The algorithm transforms the problem of integration into a problem in algebra. It is based on the form of the function being integrated and on methods for integrating rational functions, radicals, logarithms, and exponential functions. Risch called it a decision procedure, because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral. However, the algorithm does not always succeed in identifying whether or not the antiderivative of a given function in fact can be expressed in terms of elementary functions. The complete description of the Risch algorithm takes over 100 pages. The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman. Some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Brian L. Miller. Description The Risch algorithm is used to integrate elementary functions. These are functions obtained by composing exponentials, logarithms, radicals, trigonometric functions, and the four arithmetic operations (addition, subtraction, multiplication, and division). Laplace solved this problem for the case of rational functions, as he showed that the indefinite integral of a rational function is a rational function plus a finite number of constant multiples of logarithms of rational functions. The algorithm suggested by Laplace is usually described in calculus textbooks; as a computer program, it was finally implemented in the 1960s. Liouville formulated the problem that is solved by the Risch algorithm. Liouville proved by analytical means that if there is an elementary solution g to the equation g′ = f, then there exist constants αi and functions ui and v in the field generated by f such that the solution is of the form g = v + α1 ln(u1) + α2 ln(u2) + ... + αn ln(un). Risch developed a method that allows one to consider only a finite set of functions of Liouville's form. The intuition for the Risch algorithm comes from the behavior of the exponential and logarithm functions under differentiation. For the function f·e^g, where f and g are differentiable functions, we have (f·e^g)′ = (f′ + f·g′)·e^g, so if e^g were in the result of an indefinite integration, it should be expected to be inside the integral. Also, as (ln g)′ = g′/g, then if powers of ln g were in the result of an integration, only a few powers of the logarithm should be expected. Problem examples Finding an elementary antiderivative is very sensitive to details. For instance, the following algebraic function (posted to sci.math.symbolic by Henri Cohen in 1993) has an elementary antiderivative, as Wolfram Mathematica since version 13 shows (however, Mathematica does not use the Risch algorithm to compute this integral): f(x) = x/√(x^4 + 10x^2 − 96x − 71), namely: F(x) = −(1/8) ln((x^6 + 15x^4 − 80x^3 + 27x^2 − 528x + 781)·√(x^4 + 10x^2 − 96x − 71) − (x^8 + 20x^6 − 128x^5 + 54x^4 − 1408x^3 + 3124x^2 + 10001)) + C. But if the constant term 71 is changed to 72, it is not possible to represent the antiderivative in terms of elementary functions, as FriCAS also shows. Some computer algebra systems may here return an antiderivative in terms of non-elementary functions (i.e. elliptic integrals), which are outside the scope of the Risch algorithm. For example, Mathematica returns a result with the functions EllipticPi and EllipticF. Integrals of the form ∫ x^m (a + b·x^n)^p dx were solved by Chebyshev (who determined in what cases they are elementary), but the strict proof for it was ultimately done by Zolotarev. (A short session with SymPy's partial implementation of the transcendental case of the algorithm is sketched below.) 
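SymPy exposes its partial Risch implementation as risch_integrate (the function and class names below are SymPy's documented API; exact behaviour may vary across versions). A distinguishing feature is that, within the cases it implements, a negative answer is a proof of non-elementarity rather than a mere failure to find an antiderivative:

```python
# SymPy's partial Risch implementation (purely transcendental case).
from sympy import symbols, exp, pprint
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = symbols('x')

# An integrand with an elementary antiderivative:
ok = risch_integrate((2*x + 1)*exp(x**2 + x), x)
pprint(ok)                                      # exp(x**2 + x)

# A classic non-elementary case: the algorithm *proves* that no
# elementary antiderivative exists, rather than merely giving up.
no = risch_integrate(exp(x**2), x)
print(isinstance(no, NonElementaryIntegral))    # True
```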
The following is a more complex example that involves both algebraic and transcendental functions: In fact, the antiderivative of this function has a fairly short form that can be found using substitution (SymPy can solve it while FriCAS fails with "implementation incomplete (constant residues)" error in Risch algorithm): Some Davenport "theorems" are still being clarified. For example in 2020 a counterexample to such a "theorem" was found, where it turns out that an elementary antiderivative exists after all. Implementation Transforming Risch's theoretical algorithm into an algorithm that can be effectively executed by a computer was a complex task which took a long time. The case of the purely transcendental functions (which do not involve roots of polynomials) is relatively easy and was implemented early in most computer algebra systems. The first implementation was done by Joel Moses in Macsyma soon after the publication of Risch's paper. The case of purely algebraic functions was partially solved and implemented in Reduce by James H. Davenport - for simplicity it could only deal with square roots and repeated square roots and not general Radicals or other non-quadratic algebraic relations between variables. The general case was solved and almost fully implemented in Scratchpad, a precursor of Axiom, by Manuel Bronstein, and is now being developed in Axiom's fork, FriCAS. However, the implementation did not include some of the branches for special cases completely. Currently, there is no known full implementation of the Risch algorithm. Decidability The Risch algorithm applied to general elementary functions is not an algorithm but a semi-algorithm because it needs to check, as a part of its operation, if certain expressions are equivalent to zero (constant problem), in particular in the constant field. For expressions that involve only functions commonly taken to be elementary it is not known whether an algorithm performing such a check exists or not (current computer algebra systems use heuristics); moreover, if one adds the absolute value function to the list of elementary functions, it is known that no such algorithm exists; see Richardson's theorem. Note that this issue also arises in the polynomial division algorithm; this algorithm will fail if it cannot correctly determine whether coefficients vanish identically. Virtually every non-trivial algorithm relating to polynomials uses the polynomial division algorithm, the Risch algorithm included. If the constant field is computable, i.e., for elements not dependent on , the problem of zero-equivalence is decidable, then the Risch algorithm is a complete algorithm. Examples of computable constant fields are and , i.e., rational numbers and rational functions in with rational number coefficients, respectively, where is an indeterminate that does not depend on . This is also an issue in the Gaussian elimination matrix algorithm (or any algorithm that can compute the nullspace of a matrix), which is also necessary for many parts of the Risch algorithm. Gaussian elimination will produce incorrect results if it cannot correctly determine if a pivot is identically zero. See also Axiom (computer algebra system) Closed-form expression Incomplete gamma function Lists of integrals Liouville's theorem (differential algebra) Nonelementary integral Symbolic integration Notes References External links Computer algebra Integral calculus Differential algebra
Risch algorithm
[ "Mathematics", "Technology" ]
1,383
[ "Differential algebra", "Calculus", "Algebra", "Computer algebra", "Computational mathematics", "Fields of abstract algebra", "Computer science", "Integral calculus" ]
289,592
https://en.wikipedia.org/wiki/Surface-water%20hydrology
Surface-water hydrology is the sub-field of hydrology concerned with above-earth water (surface water), in contrast to groundwater hydrology, which deals with water below the surface of the Earth. Its topics include rainfall and runoff, the routes that surface water takes (for example through rivers or reservoirs), and the occurrence of floods and droughts. Surface-water hydrology is used to predict the effects of hydraulic structures such as dams and canals. It considers the layout of the watershed, geology, soils, vegetation, nutrients, energy and wildlife. Modelled aspects include precipitation, the interception of rainwater by vegetation or artificial structures, evaporation, the runoff function and the soil-surface system itself. When surface water seeps into the ground above bedrock, it is categorized as groundwater; the rate at which this occurs influences the baseflow available for instream flow, as well as subsurface water levels in wells. While groundwater is not part of surface-water hydrology, it must be taken into account for a full understanding of the behaviour of surface water. Glacial hydrology is a part of surface-water hydrology; some of the runoff from glaciers and snow also involves groundwater hydrology concepts. See also Hydrological transport model Moisture recycling References Hydrology Hydraulic engineering
Surface-water hydrology
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
259
[ "Hydrology", "Physical systems", "Hydrology stubs", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
290,053
https://en.wikipedia.org/wiki/Airfoil
An airfoil (American English) or aerofoil (British English) is a streamlined body that is capable of generating significantly more lift than drag. Wings, sails and propeller blades are examples of airfoils. Foils of similar function designed with water as the working fluid are called hydrofoils. When oriented at a suitable angle, a solid body moving through a fluid deflects the oncoming fluid (for fixed-wing aircraft, downward), resulting in a force on the airfoil in the direction opposite to the deflection. This force is known as aerodynamic force and can be resolved into two components: lift (perpendicular to the remote freestream velocity) and drag (parallel to the freestream velocity). The lift on an airfoil is primarily the result of its angle of attack. Most foil shapes require a positive angle of attack to generate lift, but cambered airfoils can generate lift at zero angle of attack. Airfoils can be designed for use at different speeds by modifying their geometry: those for subsonic flight generally have a rounded leading edge, while those designed for supersonic flight tend to be slimmer with a sharp leading edge. All have a sharp trailing edge. The air deflected by an airfoil causes it to generate a lower-pressure "shadow" above and behind itself. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flowfield about the airfoil has a higher average velocity on the upper surface than on the lower surface. In some situations (e.g., inviscid potential flow) the lift force can be related directly to the average top/bottom velocity difference without computing the pressure by using the concept of circulation and the Kutta–Joukowski theorem. Overview The wings and stabilizers of fixed-wing aircraft, as well as helicopter rotor blades, are built with airfoil-shaped cross sections. Airfoils are also found in propellers, fans, compressors and turbines. Sails are also airfoils, and the underwater surfaces of sailboats, such as the centerboard, rudder, and keel, are similar in cross-section and operate on the same principles as airfoils. Swimming and flying creatures and even many plants and sessile organisms employ airfoils/hydrofoils, common examples being bird wings, the bodies of fish, and the shape of sand dollars. An airfoil-shaped wing can create downforce on an automobile or other motor vehicle, improving traction. When the wind is obstructed by an object such as a flat plate, a building, or the deck of a bridge, the object will experience drag and also an aerodynamic force perpendicular to the wind. This does not mean the object qualifies as an airfoil. Airfoils are highly efficient lifting shapes, able to generate more lift than similarly sized flat plates of the same area, and able to generate lift with significantly less drag. Airfoils are used in the design of aircraft, propellers, rotor blades, wind turbines and other applications of aeronautical engineering. A typical lift and drag curve obtained in wind tunnel testing shows the following behaviour. The curve represents an airfoil with a positive camber, so some lift is produced at zero angle of attack. With increased angle of attack, lift increases in a roughly linear relation whose slope is called the slope of the lift curve. At about 18 degrees such an airfoil stalls, and lift falls off quickly beyond that. The drop in lift can be explained by the action of the upper-surface boundary layer, which separates and greatly thickens over the upper surface at and past the stall angle. 
The thickened boundary layer's displacement thickness changes the airfoil's effective shape, in particular it reduces its effective camber, which modifies the overall flow field so as to reduce the circulation and the lift. The thicker boundary layer also causes a large increase in pressure drag, so that the overall drag increases sharply near and past the stall point. Airfoil design is a major facet of aerodynamics. Various airfoils serve different flight regimes. Asymmetric airfoils can generate lift at zero angle of attack, while a symmetric airfoil may better suit frequent inverted flight as in an aerobatic airplane. In the region of the ailerons and near a wingtip a symmetric airfoil can be used to increase the range of angles of attack to avoid spin–stall. Thus a large range of angles can be used without boundary layer separation. Subsonic airfoils have a round leading edge, which is naturally insensitive to the angle of attack. The cross section is not strictly circular, however: the radius of curvature is increased before the wing achieves maximum thickness to minimize the chance of boundary layer separation. This elongates the wing and moves the point of maximum thickness back from the leading edge. Supersonic airfoils are much more angular in shape and can have a very sharp leading edge, which is very sensitive to angle of attack. A supercritical airfoil has its maximum thickness close to the leading edge to have a lot of length to slowly shock the supersonic flow back to subsonic speeds. Generally such transonic airfoils and also the supersonic airfoils have a low camber to reduce drag divergence. Modern aircraft wings may have different airfoil sections along the wing span, each one optimized for the conditions in each section of the wing. Movable high-lift devices, flaps and sometimes slats, are fitted to airfoils on almost every aircraft. A trailing edge flap acts similarly to an aileron; however, it, as opposed to an aileron, can be retracted partially into the wing if not used. A laminar flow wing has a maximum thickness in the middle camber line. Analyzing the Navier–Stokes equations in the linear regime shows that a negative pressure gradient along the flow has the same effect as reducing the speed. So with the maximum camber in the middle, maintaining a laminar flow over a larger percentage of the wing at a higher cruising speed is possible. However, some surface contamination will disrupt the laminar flow, making it turbulent. For example, with rain on the wing, the flow will be turbulent. Under certain conditions, insect debris on the wing will cause the loss of small regions of laminar flow as well. Before NASA's research in the 1970s and 1980s the aircraft design community understood from application attempts in the WW II era that laminar flow wing designs were not practical using common manufacturing tolerances and surface imperfections. That belief changed after new manufacturing methods were developed with composite materials (e.g. laminar-flow airfoils developed by Professor Franz Wortmann for use with wings made of fibre-reinforced plastic). Machined metal methods were also introduced. NASA's research in the 1980s revealed the practicality and usefulness of laminar flow wing designs and opened the way for laminar-flow applications on modern practical aircraft surfaces, from subsonic general aviation aircraft to transonic large transport aircraft, to supersonic designs. Schemes have been devised to define airfoils – an example is the NACA system. 
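As an illustration of such a scheme, the following sketch generates section coordinates for a NACA 4-digit airfoil from the standard published thickness and camber-line equations (the naming convention itself is explained under "Airfoil terminology" below); the sampling density and the open-trailing-edge thickness polynomial are conventional choices, not requirements.

```python
import math

def naca4(code="2415", n=50):
    """Coordinates of a NACA 4-digit airfoil (unit chord), from the
    standard equations: max camber m, camber position p, thickness t."""
    m = int(code[0]) / 100.0          # e.g. 2 -> 2% maximum camber
    p = int(code[1]) / 10.0           # e.g. 4 -> located at 40% chord
    t = int(code[2:]) / 100.0         # e.g. 15 -> 15% thickness
    upper, lower = [], []
    for i in range(n + 1):
        x = 0.5 * (1 - math.cos(math.pi * i / n))   # cosine spacing near the edges
        # Standard half-thickness distribution (open trailing edge).
        yt = 5 * t * (0.2969*math.sqrt(x) - 0.1260*x - 0.3516*x**2
                      + 0.2843*x**3 - 0.1015*x**4)
        if p == 0:                     # symmetric section, e.g. NACA 0012
            yc, dyc = 0.0, 0.0
        elif x < p:                    # mean camber line, forward of max camber
            yc = m/p**2 * (2*p*x - x**2)
            dyc = 2*m/p**2 * (p - x)
        else:                          # mean camber line, aft of max camber
            yc = m/(1-p)**2 * ((1 - 2*p) + 2*p*x - x**2)
            dyc = 2*m/(1-p)**2 * (p - x)
        th = math.atan(dyc)            # thickness is applied normal to the camber line
        upper.append((x - yt*math.sin(th), yc + yt*math.cos(th)))
        lower.append((x + yt*math.sin(th), yc - yt*math.cos(th)))
    return upper, lower

up, low = naca4("2415")
print(up[len(up)//2])   # a point near mid-chord of the upper surface
```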
Various airfoil generation systems are also used. An example of a general purpose airfoil that finds wide application, and pre–dates the NACA system, is the Clark-Y. Today, airfoils can be designed for specific functions by the use of computer programs. Airfoil terminology The various terms related to airfoils are defined below: The suction surface (a.k.a. upper surface) is generally associated with higher velocity and lower static pressure. The pressure surface (a.k.a. lower surface) has a comparatively higher static pressure than the suction surface. The pressure gradient between these two surfaces contributes to the lift force generated for a given airfoil. The geometry of the airfoil is described with a variety of terms : The leading edge is the point at the front of the airfoil that has maximum curvature (minimum radius). The trailing edge is the point on the airfoil most remote from the leading edge. The angle between the upper and lower surfaces at the trailing edge is the trailing edge angle. The chord line is the straight line connecting leading and trailing edges. The chord length, or simply chord, , is the length of the chord line. That is the reference dimension of the airfoil section. The shape of the airfoil is defined using the following geometrical parameters: The mean camber line or mean line is the locus of points midway between the upper and lower surfaces. Its shape depends on the thickness distribution along the chord; The thickness of an airfoil varies along the chord. It may be measured in either of two ways: Thickness measured perpendicular to the camber line. This is sometimes described as the "American convention"; Thickness measured perpendicular to the chord line. This is sometimes described as the "British convention". Some important parameters to describe an airfoil's shape are its camber and its thickness. For example, an airfoil of the NACA 4-digit series such as the NACA 2415 (to be read as 2 – 4 – 15) describes an airfoil with a camber of 0.02 chord located at 0.40 chord, with 0.15 chord of maximum thickness. Finally, important concepts used to describe the airfoil's behaviour when moving through a fluid are: The aerodynamic center, which is the chord-wise location about which the pitching moment is independent of the lift coefficient and the angle of attack. The center of pressure, which is the chord-wise location about which the pitching moment is momentarily zero. On a cambered airfoil, the center of pressure is not a fixed location as it moves in response to changes in angle of attack and lift coefficient. In two-dimensional flow around a uniform wing of infinite span, the slope of the lift curve is determined primarily by the trailing edge angle. The slope is greatest if the angle is zero; and decreases as the angle increases. For a wing of finite span, the aspect ratio of the wing also significantly influences the slope of the curve. As aspect ratio decreases, the slope also decreases. Thin airfoil theory Thin airfoil theory is a simple theory of airfoils that relates angle of attack to lift for incompressible, inviscid flows. It was devised by German mathematician Max Munk and further refined by British aerodynamicist Hermann Glauert and others in the 1920s. The theory idealizes the flow around an airfoil as two-dimensional flow around a thin airfoil. It can be imagined as addressing an airfoil of zero thickness and infinite wingspan. 
Thin airfoil theory was particularly notable in its day because it provided a sound theoretical basis for the following important properties of airfoils in two-dimensional inviscid flow: on a symmetric airfoil, the center of pressure and aerodynamic center are coincident and lie exactly one quarter of the chord behind the leading edge. on a cambered airfoil, the aerodynamic center lies exactly one quarter of the chord behind the leading edge, but the position of the center of pressure moves when the angle of attack changes. the slope of the lift coefficient versus angle of attack line is 2π units per radian. As a consequence of (3), the section lift coefficient of a thin symmetric airfoil of infinite wingspan is: cl = 2πα, where cl is the section lift coefficient and α is the angle of attack in radians, measured relative to the chord line. (The above expression is also applicable to a cambered airfoil where α is the angle of attack measured relative to the zero-lift line instead of the chord line.) Also as a consequence of (3), the section lift coefficient of a cambered airfoil of infinite wingspan is: cl = cl0 + 2πα, where cl0 is the section lift coefficient when the angle of attack is zero. Thin airfoil theory assumes the air is an inviscid fluid so does not account for the stall of the airfoil, which usually occurs at an angle of attack between 10° and 15° for typical airfoils. In the mid-late 2000s, however, a theory predicting the onset of leading-edge stall was proposed by Wallace J. Morris II in his doctoral thesis. Morris's subsequent refinements contain the details on the current state of theoretical knowledge on the leading-edge stall phenomenon. Morris's theory predicts the critical angle of attack for leading-edge stall onset as the condition at which a global separation zone is predicted in the solution for the inner flow. Morris's theory demonstrates that a subsonic flow about a thin airfoil can be described in terms of an outer region, around most of the airfoil chord, and an inner region, around the nose, that asymptotically match each other. As the flow in the outer region is dominated by classical thin airfoil theory, Morris's equations exhibit many components of thin airfoil theory. Derivation In thin airfoil theory, the width of the (2D) airfoil is assumed negligible, and the airfoil itself replaced with a 1D blade along its camber line, oriented at the angle of attack α. Let the position along the blade be x, ranging from 0 at the wing's front to c at the trailing edge; the camber of the airfoil, y(x), is assumed sufficiently small that one need not distinguish between position along the blade and position relative to the fuselage. The flow across the airfoil generates a circulation around the blade, which can be modeled as a vortex sheet of position-varying strength γ(x). The Kutta condition implies that γ(c) = 0, but the strength is singular at the bladefront, with γ(x) ∝ 1/√x for x → 0. If the main flow has density ρ and speed V, then the Kutta–Joukowski theorem gives that the total lift force is proportional to ρV∫γ(x) dx and its moment about the leading edge proportional to ρV∫x·γ(x) dx, both integrals taken from 0 to c. From the Biot–Savart law, the vorticity γ produces a flow field oriented normal to the airfoil at each station x. Since the airfoil is an impermeable surface, this flow must balance an inverse flow from the main stream. By the small-angle approximation, the main stream is inclined at angle α − dy/dx relative to the blade at position x, and its normal component is correspondingly V(α − dy/dx). Thus, γ must satisfy the convolution equation (1/2π)∫γ(x′)/(x − x′) dx′ = V(α − dy/dx), with the integral taken over 0 < x′ < c, which uniquely determines it in terms of known quantities. (A numeric check of the resulting lift-curve slope appears below, followed by the closed-form solution.) 
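As a numeric check of the lift-curve slope just discussed, the sketch below evaluates the classical Fourier-coefficient formulas of thin airfoil theory (stated explicitly in the closed-form solution that follows) for a parabolic camber line y = 4εx(1 − x) on a unit chord; this camber line is an arbitrary test case chosen because its coefficients are known exactly (A0 = α, A1 = 4ε).

```python
import math

def thin_airfoil_cl(dydx, alpha, n_quad=2000):
    """Section lift coefficient from thin airfoil theory:
    cl = pi*(2*A0 + A1), with A0 = alpha - (1/pi) * integral of dy/dx dtheta
    and A1 = (2/pi) * integral of (dy/dx) cos(theta) dtheta,
    where x = (1 - cos(theta))/2 on a unit chord."""
    dth = math.pi / n_quad
    I0 = I1 = 0.0
    for k in range(n_quad):
        th = (k + 0.5) * dth              # midpoint quadrature over (0, pi)
        x = 0.5 * (1 - math.cos(th))
        s = dydx(x)
        I0 += s * dth
        I1 += s * math.cos(th) * dth
    A0 = alpha - I0 / math.pi
    A1 = 2.0 * I1 / math.pi
    return math.pi * (2*A0 + A1)

eps = 0.02                                 # 2% parabolic camber: y = 4*eps*x*(1-x)
camber_slope = lambda x: 4*eps*(1 - 2*x)

for alpha_deg in (0.0, 4.0, 8.0):
    a = math.radians(alpha_deg)
    cl = thin_airfoil_cl(camber_slope, a)
    print(f"alpha={alpha_deg:4.1f} deg  cl={cl:.4f}  "
          f"2*pi*(alpha+2*eps)={2*math.pi*(a + 2*eps):.4f}")
```

The two printed columns agree: for this camber line cl = 2π(α + 2ε), exhibiting the 2π-per-radian slope and a zero-lift angle of −2ε radians.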
An explicit solution can be obtained through first the change of variables

x = (c/2)(1 − cos θ), 0 ≤ θ ≤ π,

and then expanding both dy/dx and γ(x) as a nondimensionalized Fourier series in θ with a modified lead term:

dy/dx = (α − A0) + A1 cos θ + A2 cos 2θ + ⋯
γ(θ) = 2U [A0 (1 + cos θ)/sin θ + A1 sin θ + A2 sin 2θ + ⋯].

The resulting lift and moment depend on only the first few terms of this series. The lift coefficient satisfies

cl = π(2A0 + A1)

and the moment coefficient about the leading edge

cm(LE) = −(π/2)(A0 + A1 − A2/2).

The moment about the 1/4 chord point will thus be

cm(c/4) = −(π/4)(A1 − A2).

From this it follows that the center of pressure is aft of the 'quarter-chord' point x = c/4, by

Δx/c = (π/4)(A1 − A2)/cl.

The aerodynamic center is the position at which the pitching moment does not vary with a change in lift coefficient, i.e., dcm/dcl = 0. Thin-airfoil theory shows that, in two-dimensional inviscid flow, the aerodynamic center is at the quarter-chord position. See also Circulation control wing Hydrofoil Kline–Fogleman airfoil Küssner effect Parafoil Wing configuration References Citations General Sources Further reading Ali Kamranpay, Alireza Mehrabadi. Numerical Analysis of NACA Airfoil 0012 at Different Attack Angles and Obtaining its Aerodynamic Coefficients. Journal of Mechatronics and Automation. 2019; 6(3): 8–16p. External links UIUC Airfoil Coordinates Database Airfoil & Hydrofoil Reference Application FoilSim An airfoil simulator from NASA Airfoil Playground - Interactive WebApp Desktopaero Airflow across a wing (University of Cambridge) DesignFOIL An airfoil generation & analysis tool that no longer requires registration. Aerodynamics Aircraft wing design
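As a worked example of the series solution above, the following sketch (illustrative only; the angle of attack and quadrature resolution are arbitrary choices, and the camber-line slope is the standard NACA 4-digit definition, an assumption not spelled out in the article) computes the thin-airfoil coefficients for the NACA 2415 camber line discussed earlier:

```python
import numpy as np

# Thin-airfoil Fourier coefficients for the NACA 2415 mean camber line
# (max camber m = 0.02 at p = 0.40 chord), using the series solution above.
m, p, alpha = 0.02, 0.40, np.deg2rad(4.0)

n = 200_000
theta = (np.arange(n) + 0.5) * np.pi / n        # midpoint rule on (0, pi)
x = 0.5 * (1.0 - np.cos(theta))                 # x/c via the change of variables
dydx = np.where(x < p,                          # slope of the 4-digit camber line
                2.0 * m / p**2 * (p - x),
                2.0 * m / (1.0 - p)**2 * (p - x))

dt = np.pi / n
A0 = alpha - np.sum(dydx) * dt / np.pi          # from the (alpha - A0) lead term
A1 = 2.0 / np.pi * np.sum(dydx * np.cos(theta)) * dt
A2 = 2.0 / np.pi * np.sum(dydx * np.cos(2.0 * theta)) * dt

print(np.pi * (2.0 * A0 + A1))                  # section lift coefficient cl
print(-np.pi / 4.0 * (A1 - A2))                 # moment about the quarter chord
```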
Airfoil
[ "Chemistry", "Engineering" ]
3,197
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
290,441
https://en.wikipedia.org/wiki/Cram%C3%A9r%27s%20conjecture
In number theory, Cramér's conjecture, formulated by the Swedish mathematician Harald Cramér in 1936, is an estimate for the size of gaps between consecutive prime numbers: intuitively, that gaps between consecutive primes are always small, and the conjecture quantifies asymptotically just how small they must be. It states that

pn+1 − pn = O((log pn)²),

where pn denotes the nth prime number, O is big O notation, and "log" is the natural logarithm. While this is the statement explicitly conjectured by Cramér, his heuristic actually supports the stronger statement

lim sup (pn+1 − pn)/(log pn)² = 1 as n → ∞,

and sometimes this formulation is called Cramér's conjecture. However, this stronger version is not supported by more accurate heuristic models, which nevertheless support the first version of Cramér's conjecture. The strongest form of all, which was never claimed by Cramér but is the one used in experimental verification computations and the plot in this article, is simply

pn+1 − pn < (log pn)².

None of the three forms has yet been proven or disproven. Conditional proven results on prime gaps Cramér gave a conditional proof of the much weaker statement that

pn+1 − pn = O(√pn log pn)

on the assumption of the Riemann hypothesis. The best known unconditional bound, due to Baker, Harman, and Pintz, is

pn+1 − pn = O(pn^0.525).

In the other direction, E. Westzynthius proved in 1931 that prime gaps grow more than logarithmically. That is,

lim sup (pn+1 − pn)/log pn = ∞ as n → ∞.

His result was improved by R. A. Rankin, who proved that

lim sup (pn+1 − pn)(log log log pn)² / (log pn · log log pn · log log log log pn) > 0 as n → ∞.

Paul Erdős conjectured that the left-hand side of the above formula is infinite, and this was proven in 2014 by Kevin Ford, Ben Green, Sergei Konyagin, and Terence Tao, and independently by James Maynard. The two sets of authors eliminated one of the factors of log log log pn later that year, showing that, infinitely often,

pn+1 − pn > c (log pn · log log pn · log log log log pn)/(log log log pn),

where c is some constant. Heuristic justification Cramér's conjecture is based on a probabilistic model—essentially a heuristic—in which the probability that a number of size x is prime is 1/log x. This is known as the Cramér random model or Cramér model of the primes. In the Cramér random model,

lim sup (pn+1 − pn)/(log pn)² = 1 as n → ∞,

with probability one. However, as pointed out by Andrew Granville, Maier's theorem shows that the Cramér random model does not adequately describe the distribution of primes on short intervals, and a refinement of Cramér's model taking into account divisibility by small primes suggests that the limit should not be 1, but a constant 2e^(−γ) ≈ 1.1229, where γ is the Euler–Mascheroni constant. János Pintz has suggested that the limit sup may be infinite, and similarly Leonard Adleman and Kevin McCurley write: "As a result of the work of H. Maier on gaps between consecutive primes, the exact formulation of Cramér's conjecture has been called into question [...] It is still probably true that for every constant c > 2, there is a constant d > 0 such that there is a prime between x and x + d(log x)^c." Similarly, Robin Visser writes: "In fact, due to the work done by Granville, it is now widely believed that Cramér's conjecture is false. Indeed, there [are] some theorems concerning short intervals between primes, such as Maier's theorem, which contradict Cramér's model." (internal references removed). Related conjectures and heuristics Daniel Shanks conjectured the following asymptotic equality, stronger than Cramér's conjecture, for record gaps:

G(x) ~ (log x)²,

where G(x) denotes the largest gap between primes below x. J. H. Cadwell has proposed the formula for the maximal gaps

G(x) ~ (log x)² − (log x)(log log x),

which is formally identical to the Shanks conjecture but suggests a lower-order term. Marek Wolf has proposed a formula for the maximal gaps expressed in terms of the prime-counting function π(x):

G(x) ~ (x/π(x)) (2 log π(x) − log x + c),

where c = log C2 and C2 is the twin primes constant. This is again formally equivalent to the Shanks conjecture but suggests lower-order terms.
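These record ("maximal") gaps are easy to explore empirically. The following minimal Python sketch (an illustration, not part of the article; the bound of 10^6 is an arbitrary choice) prints each record gap below the bound together with the ratio (pn+1 − pn)/(log pn)² from the strongest form above, and the ratio R used by Nicely below:

```python
from math import log, sqrt

# Record (maximal) prime gaps up to an arbitrary bound, compared with the
# (log p)^2 of Cramér's conjecture and Nicely's ratio R = log(p)/sqrt(gap).
N = 10**6
sieve = bytearray([1]) * (N + 1)               # sieve of Eratosthenes
sieve[0:2] = b"\x00\x00"
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
primes = [i for i in range(N + 1) if sieve[i]]

record = 0
for p, q in zip(primes, primes[1:]):
    if q - p > record:
        record = q - p
        print(f"p={p:<8} gap={record:<4} "
              f"gap/(log p)^2={record / log(p)**2:.3f} "
              f"R={log(p) / sqrt(record):.3f}")
```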
Thomas Nicely has calculated many large prime gaps. He measures the quality of fit to Cramér's conjecture by measuring the ratio

R = log pn / √(pn+1 − pn).

He writes, "For the largest known maximal gaps, R has remained near 1.13." See also Prime number theorem Legendre's conjecture and Andrica's conjecture, much weaker but still unproven upper bounds on prime gaps Firoozbakht's conjecture Maier's theorem on the numbers of primes in short intervals for which the model predicts an incorrect answer References External links Analytic number theory Conjectures about prime numbers Unsolved problems in number theory
Cramér's conjecture
[ "Mathematics" ]
897
[ "Analytic number theory", "Unsolved problems in mathematics", "Unsolved problems in number theory", "Mathematical problems", "Number theory" ]
19,889,519
https://en.wikipedia.org/wiki/Magnetic%20immunoassay
Magnetic immunoassay (MIA) is a type of diagnostic immunoassay using magnetic beads as labels in lieu of conventional enzymes (ELISA), radioisotopes (RIA) or fluorescent moieties (fluorescent immunoassays) to detect a specified analyte. MIA involves the specific binding of an antibody to its antigen, where a magnetic label is conjugated to one element of the pair. The presence of magnetic beads is then detected by a magnetic reader (magnetometer) which measures the magnetic field change induced by the beads. The signal measured by the magnetometer is proportional to the analyte (virus, toxin, bacteria, cardiac marker, etc.) concentration in the initial sample. Magnetic labels Magnetic beads are made of nanometric-sized iron oxide particles encapsulated or glued together with polymers. These magnetic beads range from 35 nm up to 4.5 μm. The component magnetic nanoparticles range from 5 to 50 nm and exhibit a unique quality referred to as superparamagnetism in the presence of an externally applied magnetic field. First discovered by Frenchman Louis Néel, winner of the 1970 Nobel Prize in Physics, this superparamagnetic quality has already been used for medical application in magnetic resonance imaging (MRI) and in biological separations, but not yet for labeling in commercial diagnostic applications. Magnetic labels exhibit several features very well adapted for such applications: they are not affected by reagent chemistry or photo-bleaching and are therefore stable over time; the magnetic background in a biomolecular sample is usually insignificant; sample turbidity or staining have no impact on magnetic properties; and magnetic beads can be manipulated remotely by magnetism. Detection Magnetic immunoassay (MIA) is able to detect select molecules or pathogens through the use of a magnetically tagged antibody. Functioning in a way similar to that of an ELISA or Western blot, a two-antibody binding process is used to determine concentrations of analytes. MIA uses antibodies that coat a magnetic bead. These antibodies bind directly to the desired pathogen or molecule, and the magnetic signal given off by the bound beads is read using a magnetometer. The largest benefit this technology provides for immunostaining is that it can be conducted in a liquid medium, whereas methods such as ELISA or Western blotting require a stationary medium for the desired target to bind to before the secondary antibody (such as one conjugated to horseradish peroxidase, HRP) can be applied. Since MIA can be conducted in a liquid medium, a more accurate measurement of desired molecules can be performed in the model system. Because no isolation step is required to achieve quantifiable results, users can monitor activity within a system, getting a better idea of the behavior of their target. Detection can be carried out in many ways. The most basic form of detection is to run a sample through a gravity column that contains a polyethylene matrix with the secondary antibody. The target compound binds to the antibody contained in the matrix, and any residual substances are washed out using a chosen buffer. The magnetic antibodies are then passed through the same column and, after an incubation period, any unbound antibodies are washed out using the same method as before. The reading obtained from the magnetic beads bound to the target which is captured by the antibodies on the membrane is used to quantify the target compound in solution.
Also, because it is so similar in methodology to ELISA or Western blotting, the experiments for MIA can be adapted to use the same detection if the researcher wants to quantify their data in a similar manner. Magnetometers A simple instrument can detect the presence and measure the total magnetic signal of a sample; however, the challenge of developing an effective MIA is to separate the naturally occurring magnetic background (noise) from the weak magnetically labeled target (signal). Various approaches and devices have been employed to achieve a meaningful signal-to-noise ratio (SNR) for bio-sensing applications: giant magneto-resistive sensors and spin valves, piezo-resistive cantilevers, inductive sensors, superconducting quantum interference devices, anisotropic magneto-resistive rings, and miniature Hall sensors. But improving SNR often requires a complex instrument to provide repeated scanning and extrapolation through data processing, or precise alignment of target and sensor of miniature and matching size. Beyond this requirement, an MIA that exploits the non-linear magnetic properties of magnetic labels can effectively use the intrinsic ability of a magnetic field to pass through plastic, water, nitrocellulose, and other materials, thus allowing for true volumetric measurements in various immunoassay formats. Unlike conventional methods that measure the susceptibility of superparamagnetic materials, an MIA based on non-linear magnetization eliminates the impact of linear dia- or paramagnetic materials such as the sample matrix, consumable plastics and/or nitrocellulose. Although the intrinsic magnetism of these materials is very weak, with typical susceptibility values of −10⁻⁵ (dia) or +10⁻³ (para), when one is investigating very small quantities of superparamagnetic materials, such as nanograms per test, the background signal generated by ancillary materials cannot be ignored. In MIA based on the non-linear magnetic properties of magnetic labels, the beads are exposed to an alternating magnetic field at two frequencies, f1 and f2. In the presence of non-linear materials such as superparamagnetic labels, a signal can be recorded at combinatorial frequencies, for example, at f = f1 ± 2×f2. This signal is exactly proportional to the amount of magnetic material inside the reading coil. This technology makes magnetic immunoassay possible in a variety of formats such as: conventional lateral flow tests, by replacing gold labels with magnetic labels; vertical flow tests, allowing for the interrogation of rare analytes (such as bacteria) in large-volume samples; and microfluidic applications and biochips. It has also been described for in vivo applications and for multiparametric testing. Uses MIA is a versatile technique that can be used for a wide variety of practices. Currently it has been used to detect viruses in plants, catching pathogens that would normally devastate crops, such as Grapevine fanleaf virus and Potato virus X. Its adaptations now include portable devices that allow the user to gather sensitive data in the field. MIA can also be used to monitor therapeutic drugs. A case report of a 53-year-old kidney transplant patient details how the doctors were able to alter the quantities of the therapeutic drug. References Immunologic tests Blood tests Medical testing equipment
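As an illustration of the combinatorial-frequency readout described above, the following sketch (illustrative only: the saturating tanh magnetisation curve, drive amplitudes, and frequencies are arbitrary modelling assumptions, not a description of any actual instrument) shows that a nonlinear label driven at two frequencies f1 and f2 responds at f1 ± 2×f2:

```python
import numpy as np

# Toy demonstration: a nonlinear (saturating) magnetic label driven at two
# frequencies f1 and f2 responds at combinatorial frequencies f1 +/- 2*f2.
fs, f1, f2 = 100_000, 10_000, 1_000            # sample rate and drive tones, Hz
t = np.arange(0, 1.0, 1 / fs)
drive = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.8 * np.sin(2 * np.pi * f2 * t)
response = np.tanh(drive)                      # stand-in nonlinear M(H) curve

spec = np.abs(np.fft.rfft(response)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f1, f2, f1 + 2 * f2, f1 - 2 * f2):
    print(f, spec[np.argmin(np.abs(freqs - f))])   # clear lines at f1 +/- 2*f2
```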
Magnetic immunoassay
[ "Chemistry", "Biology" ]
1,379
[ "Blood tests", "Chemical pathology", "Immunologic tests" ]
4,901,720
https://en.wikipedia.org/wiki/Arthropod%20leg
The arthropod leg is a form of jointed appendage of arthropods, usually used for walking. Many of the terms used for arthropod leg segments (called podomeres) are of Latin origin, and may be confused with terms for bones: coxa (meaning hip, plural: coxae), trochanter, femur (plural: femora), tibia (plural: tibiae), tarsus (plural: tarsi), ischium (plural: ischia), metatarsus, carpus, dactylus (meaning finger), patella (plural: patellae). Homologies of leg segments between groups are difficult to prove and are the source of much argument. Some authors posit up to eleven segments per leg for the most recent common ancestor of extant arthropods but modern arthropods have eight or fewer. It has been argued that the ancestral leg need not have been so complex, and that other events, such as successive loss of function of a Hox-gene, could result in parallel gains of leg segments. In arthropods, each of the leg segments articulates with the next segment in a hinge joint and may only bend in one plane. This means that a greater number of segments is required to achieve the same kinds of movements that are possible in vertebrate animals, which have rotational ball-and-socket joints at the base of the fore and hind limbs. Biramous and uniramous The appendages of arthropods may be either biramous or uniramous. A uniramous limb comprises a single series of segments attached end-to-end. A biramous limb, however, branches into two, and each branch consists of a series of segments attached end-to-end. The external branch (ramus) of the appendages of crustaceans is known as the exopod or exopodite, while the internal branch is known as the endopod or endopodite. Other structures aside from the latter two are termed exites (outer structures) and endites (inner structures). Exopodites can be easily distinguished from exites by the possession of internal musculature. The exopodites can sometimes be missing in some crustacean groups (amphipods and isopods), and they are completely absent in insects. The legs of insects and myriapods are uniramous. In crustaceans, the first antennae are uniramous, but the second antennae are biramous, as are the legs in most species. For a time, possession of uniramous limbs was believed to be a shared, derived character, so uniramous arthropods were grouped into a taxon called Uniramia. It is now believed that several groups of arthropods evolved uniramous limbs independently from ancestors with biramous limbs, so this taxon is no longer used. Chelicerata Arachnid legs differ from those of insects by the addition of two segments on either side of the tibia, the patella between the femur and the tibia, and the metatarsus (sometimes called basitarsus) between the tibia and the tarsus (sometimes called telotarsus), making a total of seven segments. The tarsus of spiders has claws at the end as well as a hook that helps with web-spinning. Spider legs can also serve sensory functions, with hairs that serve as touch receptors, as well as an organ on the tarsus that serves as a humidity receptor, known as the tarsal organ. The situation is identical in scorpions, but with the addition of a pre-tarsus beyond the tarsus. The claws of the scorpion are not truly legs, but are pedipalps, a different kind of appendage that is also found in spiders and is specialised for predation and mating. In Limulus, there are no metatarsi or pretarsi, leaving six segments per leg. Crustacea The legs of crustaceans are divided primitively into seven segments, which do not follow the naming system used in the other groups.
They are: coxa, basis, ischium, merus, carpus, propodus, and dactylus. In some groups, some of the limb segments may be fused together. The claw (chela) of a lobster or crab is formed by the articulation of the dactylus against an outgrowth of the propodus. Crustacean limbs also differ in being biramous, whereas all other extant arthropods have uniramous limbs. Myriapoda Myriapods (millipedes, centipedes and their relatives) have seven-segmented walking legs, comprising coxa, trochanter, prefemur, femur, tibia, tarsus, and a tarsal claw. Myriapod legs show a variety of modifications in different groups. In all centipedes, the first pair of legs is modified into a pair of venomous fangs called forcipules. In most millipedes, one or two pairs of walking legs in adult males are modified into sperm-transferring structures called gonopods. In some millipedes, the first leg pair in males may be reduced to tiny hooks or stubs, while in others the first pair may be enlarged. Insects Insects and their relatives are hexapods, having six legs, connected to the thorax, each with five components. In order from the body they are the coxa, trochanter, femur, tibia, and tarsus. Each is a single segment, except the tarsus which can be from three to seven segments, each referred to as a tarsomere. Except in species in which legs have been lost or become vestigial through evolutionary adaptation, adult insects have six legs, one pair attached to each of the three segments of the thorax. They have paired appendages on some other segments, in particular, mouthparts, antennae and cerci, all of which are derived from paired legs on each segment of some common ancestor. Some larval insects do however have extra walking legs on their abdominal segments; these extra legs are called prolegs. They are found most frequently on the larvae of moths and sawflies. Prolegs do not have the same structure as modern adult insect legs, and there has been a great deal of debate as to whether they are homologous with them. Current evidence suggests that they are indeed homologous up to a very primitive stage in their embryological development, but that their emergence in modern insects was not homologous between the Lepidoptera and Symphyta. Such concepts are pervasive in current interpretations of phylogeny. In general, the legs of larval insects, particularly in the Endopterygota, vary more than in the adults. As mentioned, some have prolegs as well as "true" thoracic legs. Some have no externally visible legs at all (though they have internal rudiments that emerge as adult legs at the final ecdysis). Examples include the maggots of flies or grubs of weevils. In contrast, the larvae of other Coleoptera, such as the Scarabaeidae and Dytiscidae have thoracic legs, but no prolegs. Some insects that exhibit hypermetamorphosis begin their metamorphosis as planidia, specialised, active, legged larvae, but they end their larval stage as legless maggots, for example the Acroceridae. Among the Exopterygota, the legs of larvae tend to resemble those of the adults in general, except in adaptations to their respective modes of life. For example, the legs of most immature Ephemeroptera are adapted to scuttling beneath underwater stones and the like, whereas the adults have more gracile legs that are less of a burden during flight. Again, the young of the Coccoidea are called "crawlers" and they crawl around looking for a good place to feed, where they settle down and stay for life. 
Their later instars have no functional legs in most species. Among the Apterygota, the legs of immature specimens are in effect smaller versions of the adult legs. Fundamental morphology of insect legs A representative insect leg, such as that of a housefly or cockroach, has the following parts, in sequence from most proximal to most distal: coxa trochanter femur tibia tarsus pretarsus. Associated with the leg itself there are various sclerites around its base. Their functions are articular and have to do with how the leg attaches to the main exoskeleton of the insect. Such sclerites differ considerably between unrelated insects. Coxa The coxa is the proximal segment and functional base of the leg. It articulates with the pleuron and associated sclerites of its thoracic segment, and in some species it articulates with the edge of the sternite as well. The homologies of the various basal sclerites are open to debate. Some authorities suggest that they derive from an ancestral subcoxa. In many species, the coxa has two lobes where it articulates with the pleuron. The posterior lobe is the meron which is usually the larger part of the coxa. A meron is well developed in Periplaneta, the Isoptera, Neuroptera and Lepidoptera. Trochanter The trochanter articulates with the coxa but usually is attached rigidly to the femur. In some insects, its appearance may be confusing; for example it has two subsegments in the Odonata. In parasitic Hymenoptera, the base of the femur has the appearance of a second trochanter. Femur In most insects, the femur is the largest region of the leg; it is especially conspicuous in many insects with saltatorial legs because the typical leaping mechanism is to straighten the joint between the femur and the tibia, and the femur contains the necessary massive bipennate musculature. Tibia The tibia is the fourth section of the typical insect leg. As a rule, the tibia of an insect is slender in comparison to the femur, but it generally is at least as long and often longer. Near the distal end, there is generally a tibial spur, often two or more. In the Apocrita, the tibia of the foreleg bears a large apical spur that fits over a semicircular gap in the first segment of the tarsus. The gap is lined with comb-like bristles, and the insect cleans its antennae by drawing them through. Tarsus The ancestral tarsus was a single segment and in the extant Protura, Diplura and certain insect larvae the tarsus also is single-segmented. Most modern insects have tarsi divided into subsegments (tarsomeres), usually about five. The actual number varies with the taxon, which may be useful for diagnostic purposes. For example, the Pterogeniidae characteristically have 5-segmented fore- and mid-tarsi, but 4-segmented hind tarsi, whereas the Cerylonidae have four tarsomeres on each tarsus. The distal segment of the typical insect leg is the pretarsus. In the Collembola, Protura and many insect larvae, the pretarsus is a single claw. On the pretarsus most insects have a pair of claws (ungues, singular unguis). Between the ungues, a median unguitractor plate supports the pretarsus. The plate is attached to the apodeme of the flexor muscle of the ungues. In the Neoptera, the parempodia are a symmetrical pair of structures arising from the outside (distal) surface of the unguitractor plate between the claws. It is present in many Hemiptera and almost all Heteroptera. Usually, the parempodia are bristly (setiform), but in a few species they are fleshy. 
Sometimes the parempodia are reduced in size so as to almost disappear. Above the unguitractor plate, the pretarsus expands forward into a median lobe, the arolium. Webspinners (Embioptera) have an enlarged basal tarsomere on each of the front legs, containing the silk-producing glands. Under their pretarsi, members of the Diptera generally have paired lobes or pulvilli, meaning "little cushions". There is a single pulvillus below each unguis. The pulvilli often have an arolium between them or otherwise a median bristle or empodium, meaning the meeting place of the pulvilli. On the underside of the tarsal segments, there frequently are pulvillus-like organs or plantulae. The arolium, plantulae and pulvilli are adhesive organs enabling their possessors to climb smooth or steep surfaces. They all are outgrowths of the exoskeleton and their cavities contain blood. Their structures are covered with tubular tenent hairs, the apices of which are moistened by a glandular secretion. The organs are adapted to apply the hairs closely to a smooth surface so that adhesion occurs through surface molecular forces. Insects control the ungues through muscle tension on a long tendon, the "retractor unguis" or "long tendon". In insect models of locomotion and motor control, such as Drosophila (Diptera), locusts (Acrididae), or stick insects (Phasmatodea), the long tendon courses through the tarsus and tibia before reaching the femur. Tension on the long tendon is controlled by two muscles, one in the femur and one in the tibia, which can operate differently depending on how the leg is bent. Tension on the long tendon controls the claw, but also bends the tarsus and likely affects its stiffness during walking. Variations in functional anatomy of insect legs The typical thoracic leg of an adult insect is adapted for running (cursorial), rather than for digging, leaping, swimming, predation, or other similar activities. The legs of most cockroaches are good examples. However, there are many specialized adaptations, including: The forelegs of mole crickets (Gryllotalpidae) and some scarab beetle (Scarabaeidae) are adapted to burrowing in earth (fossorial). The raptorial forelegs of mantidflies (Mantispidae), mantises (Mantodea), and ambush bugs (Phymatinae) are adapted to seizing and holding prey in one way, while those of whirligig beetles Gyrinidae are long and adapted for grasping food or prey in quite a different way. The forelegs of some butterflies, such as many Nymphalidae, are reduced so greatly that only two pairs of functional walking legs remain. In most grasshoppers and crickets (Orthoptera), the hind legs are saltatorial; they have heavily bipinnately muscled femora and straight, long tibiae adapted to leaping and to some extent to defence by kicking. Flea beetles (Alticini) also have powerful hind femora that enable them to leap spectacularly. Other beetles with spectacularly muscular hind femora may not be saltatorial at all, but very clumsy; for example, particular species of bean weevils (Bruchinae) use their swollen hind legs for forcing their way out of the hard-shelled seeds of plants such as Erythrina in which they grew to adulthood. The legs of the Odonata, the dragonflies and damselflies, are adapted for seizing prey that the insects feed on while flying or while sitting still on a plant; they are nearly incapable of using them for walking. 
The majority of aquatic insects use their legs only for swimming (natatorial), though many species of immature insects swim by other means such as by wriggling, undulating, or expelling water in jets. Evolution and homology of arthropod legs The embryonic body segments (somites) of different arthropods taxa have diverged from a simple body plan with many similar appendages which are serially homologous, into a variety of body plans with fewer segments equipped with specialised appendages. The homologies between these have been discovered by comparing genes in evolutionary developmental biology. See also Limb Tentacle Tube foot References Arthropod morphology Animal locomotion Spider anatomy Trilobite anatomy
Arthropod leg
[ "Physics", "Biology" ]
3,510
[ "Animal locomotion", "Physical phenomena", "Behavior", "Animals", "Motion (physics)", "Ethology" ]
4,902,017
https://en.wikipedia.org/wiki/Parametric%20oscillator
A parametric oscillator is a driven harmonic oscillator in which the oscillations are driven by varying some parameters of the system at some frequencies, typically different from the natural frequency of the oscillator. A simple example of a parametric oscillator is a child pumping a playground swing by periodically standing and squatting to increase the size of the swing's oscillations. The child's motions vary the moment of inertia of the swing as a pendulum. The "pump" motions of the child must be at twice the frequency of the swing's oscillations. Examples of parameters that may be varied are the oscillator's resonance frequency ω and damping β. Parametric oscillators are used in several areas of physics. The classical varactor parametric oscillator consists of a semiconductor varactor diode connected to a resonant circuit or cavity resonator. It is driven by varying the diode's capacitance by applying a varying bias voltage. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG-based parametric oscillators operate in the same fashion. Another important example is the optical parametric oscillator, which converts an input laser light wave into two output waves of lower frequency (the signal and idler waves, ωs and ωi). When operated at pump levels below oscillation, the parametric oscillator can amplify a signal, forming a parametric amplifier (paramp). Varactor parametric amplifiers were developed as low-noise amplifiers in the radio and microwave frequency range. The advantage of a parametric amplifier is that it has much lower noise than an amplifier based on a gain device like a transistor or vacuum tube. This is because in the parametric amplifier a reactance is varied instead of a (noise-producing) resistance. They are used in very low noise radio receivers in radio telescopes and spacecraft communication antennas. Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing since the action appears as a time varying modification on a system parameter. History Parametric oscillations were first noticed in mechanics. Michael Faraday (1831) was the first to notice oscillations of one frequency being excited by forces of double the frequency, in the crispations (ruffled surface waves) observed in a wine glass excited to "sing". Franz Melde (1860) generated parametric oscillations in a string by employing a tuning fork to periodically vary the tension at twice the resonance frequency of the string. Parametric oscillation was first treated as a general phenomenon by Rayleigh (1883, 1887). One of the first to apply the concept to electric circuits was George Francis FitzGerald, who in 1892 tried to excite oscillations in an LC circuit by pumping it with a varying inductance provided by a dynamo. Parametric amplifiers (paramps) were first used in 1913–1915 for radio telephony from Berlin to Vienna and Moscow, and were predicted to have a useful future (Ernst Alexanderson, 1916). These early parametric amplifiers used the nonlinearity of an iron-core inductor, so they could only function at low frequencies. In 1948 Aldert van der Ziel pointed out a major advantage of the parametric amplifier: because it used a variable reactance instead of a resistance for amplification it had inherently low noise. A parametric amplifier used as the front end of a radio receiver could amplify a weak signal while introducing very little noise.
In 1952 Harrison Rowe at Bell Labs extended some 1934 mathematical work on pumped oscillations by Jack Manley and published the modern mathematical theory of parametric oscillations, the Manley–Rowe relations. The varactor diode invented in 1956 had a nonlinear capacitance that was usable into microwave frequencies. The varactor parametric amplifier was developed by Marion Hines in 1956 at Western Electric. At the time it was invented microwaves were just being exploited, and the varactor amplifier was the first semiconductor amplifier at microwave frequencies. It was applied to low noise radio receivers in many areas, and has been widely used in radio telescopes, satellite ground stations, and long-range radar. It is the main type of parametric amplifier used today. Since that time parametric amplifiers have been built with other nonlinear active devices such as Josephson junctions. The technique has been extended to optical frequencies in optical parametric oscillators and amplifiers which use nonlinear crystals as the active element. Mathematical analysis A parametric oscillator is a harmonic oscillator whose physical properties vary with time. The equation of such an oscillator is

d²x/dt² + β(t) dx/dt + ω²(t) x = 0.

This equation is linear in x. By assumption, the parameters ω²(t) and β(t) depend only on time and do not depend on the state of the oscillator. In general, β(t) and/or ω²(t) are assumed to vary periodically, with the same period T. If the parameters vary at roughly twice the natural frequency of the oscillator (defined below), the oscillator phase-locks to the parametric variation and absorbs energy at a rate proportional to the energy it already has. Without a compensating energy-loss mechanism provided by β(t), the oscillation amplitude grows exponentially. (This phenomenon is called parametric excitation, parametric resonance or parametric pumping.) However, if the initial amplitude is zero, it will remain so; this distinguishes it from the non-parametric resonance of driven simple harmonic oscillators, in which the amplitude grows linearly in time regardless of the initial state. A familiar experience of both parametric and driven oscillation is playing on a swing. Rocking back and forth pumps the swing as a driven harmonic oscillator, but once moving, the swing can also be parametrically driven by alternately standing and squatting at key points in the swing arc. This changes moment of inertia of the swing and hence the resonance frequency, and children can quickly reach large amplitudes provided that they have some amplitude to start with (e.g., get a push). Standing and squatting at rest, however, leads nowhere. Transformation of the equation We begin by making a change of variable

q(t) = e^{D(t)} x(t), where D(t) = (1/2) ∫0^t β(τ) dτ

is half the time integral of the damping coefficient β(t). This change of variable eliminates the damping term in the differential equation, reducing it to

d²q/dt² + Ω²(t) q = 0,

where the transformed frequency is defined as

Ω²(t) = ω²(t) − (1/2)(dβ/dt) − (1/4)β²(t).

In general, the variations in damping and frequency are relatively small perturbations

β(t) = ω0 [b + g(t)], ω²(t) = ω0² [1 + h(t)],

where ω0 and ω0 b are constants, namely, the time-averaged oscillator frequency and damping, respectively. The transformed frequency can then be written in a similar way as

Ω²(t) = ωn² [1 + f(t)],

where ωn = ω0 √(1 − b²/4) is the natural frequency of the damped harmonic oscillator and f(t) is a small, zero-mean pumping function that collects the perturbations g(t) and h(t). Thus, our transformed equation can be written as

d²q/dt² + ωn² [1 + f(t)] q = 0.

The independent variations g(t) and h(t) in the oscillator damping and resonance frequency, respectively, can be combined into a single pumping function f(t).
The converse conclusion is that any form of parametric excitation can be accomplished by varying either the resonance frequency or the damping, or both. Solution of the transformed equation Let us assume that f(t) is sinusoidal with a frequency approximately twice the natural frequency of the oscillator:

f(t) = f0 sin 2ωp t,

where the pumping frequency 2ωp ≈ 2ωn but need not equal it exactly. Using the method of variation of parameters, the solution to our transformed equation may be written as

q(t) = A(t) cos ωp t + B(t) sin ωp t,

where the rapidly varying components, cos ωp t and sin ωp t, have been factored out to isolate the slowly varying amplitudes A(t) and B(t). We proceed by substituting this solution into the differential equation and considering that both the coefficients in front of cos ωp t and sin ωp t must be zero to satisfy the differential equation identically. We also omit the second derivatives of A(t) and B(t) on the grounds that A(t) and B(t) are slowly varying, as well as omit sinusoidal terms not near the natural frequency, as they do not contribute significantly to resonance. The result is the following pair of coupled differential equations:

dA/dt = (f0 ωn²/(4ωp)) A + ((ωn² − ωp²)/(2ωp)) B,
dB/dt = −((ωn² − ωp²)/(2ωp)) A − (f0 ωn²/(4ωp)) B.

This system of linear differential equations with constant coefficients can be decoupled and solved by eigenvalue/eigenvector methods. This yields the solution

[A(t), B(t)] = c1 e^{λ1 t} V1 + c2 e^{λ2 t} V2,

where λ1 and λ2 are the eigenvalues of the 2×2 coefficient matrix above, V1 and V2 are corresponding eigenvectors, and c1 and c2 are arbitrary constants. The eigenvalues are given by

λ1,2 = ±√[(f0 ωn²/(4ωp))² − ((ωn² − ωp²)/(2ωp))²].

If we write the difference between ωp and ωn as Δω = ωp − ωn and replace ωp with ωn everywhere where the difference is not important, we get

λ1,2 = ±√[(f0 ωn/4)² − (Δω)²].

If |Δω| < f0 ωn/4, then the eigenvalues are real and exactly one is positive, which leads to exponential growth for A(t) and B(t). This is the condition for parametric resonance, with the growth rate for q given by the positive eigenvalue λ = √[(f0 ωn/4)² − (Δω)²]. Note, however, that this growth rate corresponds to the amplitude of the transformed variable q(t), whereas the amplitude of the original, untransformed variable x(t) = e^{−D(t)} q(t) can either grow or decay depending on whether e^{λt − D(t)} is an increasing or decreasing function of time. Intuitive derivation of parametric excitation The above derivation may seem like a mathematical sleight-of-hand, so it may be helpful to give an intuitive derivation. The equation may be written in the form

d²q/dt² + ωn² q = −ωn² f(t) q,

which represents a simple harmonic oscillator (or, alternatively, a bandpass filter) being driven by a signal −ωn² f(t) q(t) that is proportional to its response q(t). Assume that q(t) already has an oscillation at frequency ωn and that the pumping f(t) has double the frequency and a small amplitude. Applying a trigonometric identity for products of sinusoids, their product q(t) f(t) produces two driving signals, one at frequency ωn and the other at frequency 3ωn. Being off-resonance, the 3ωn signal is attenuated and can be neglected initially. By contrast, the ωn signal is on resonance, serves to amplify q(t), and is proportional to the amplitude q(t). Hence, the amplitude of q(t) grows exponentially unless it is initially zero. Expressed in Fourier space, the multiplication is a convolution of their Fourier transforms Q̃(ω) and F̃(ω). The positive feedback arises because the +2ωn component of f(t) converts the −ωn component of q(t) into a driving signal at +ωn, and vice versa (reverse the signs). This explains why the pumping frequency must be near 2ωn, twice the natural frequency of the oscillator. Pumping at a grossly different frequency would not couple (i.e., provide mutual positive feedback) between the +ωn and −ωn components of q(t). Parametric resonance Parametric resonance is the resonance phenomenon produced by mechanical perturbation and oscillation at certain frequencies (and the associated harmonics). This effect is different from regular resonance because it exhibits the instability phenomenon.
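The exponential growth predicted by this eigenvalue analysis can also be checked numerically. The sketch below (illustrative; ωn, f0, the time span and the initial conditions are arbitrary choices) integrates the pumped equation with the pump tuned exactly to 2ωn and compares the fitted envelope growth rate with the predicted λ = f0 ωn/4:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate q'' + wn^2 * (1 + f0*sin(2*wn*t)) * q = 0 (pump exactly at 2*wn)
# and compare the envelope's exponential rate with the predicted f0*wn/4.
wn, f0 = 1.0, 0.1

def rhs(t, y):
    q, qdot = y
    return [qdot, -wn**2 * (1.0 + f0 * np.sin(2.0 * wn * t)) * q]

t = np.linspace(0.0, 200.0, 4001)
sol = solve_ivp(rhs, (t[0], t[-1]), [1e-3, 0.0], t_eval=t,
                rtol=1e-9, atol=1e-12)

env = np.sqrt(sol.y[0]**2 + (sol.y[1] / wn)**2)    # slowly varying amplitude
slope = np.polyfit(t[2000:], np.log(env[2000:]), 1)[0]
print(slope, f0 * wn / 4.0)                         # ~0.025 in both cases
```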
Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing since the action appears as a time varying modification on a system parameter. The classical example of parametric resonance is that of the vertically forced pendulum. Parametric resonance takes place when the external excitation frequency equals twice the natural frequency of the system divided by a positive integer n, i.e., ω = 2ωn/n. For a parametric excitation with small amplitude f0 and in the absence of friction, the bandwidth of the principal (n = 1) resonance is, to leading order, proportional to f0; as found above, excitation occurs for |ωp − ωn| < f0 ωn/4. The effect of friction is to introduce a finite threshold for the amplitude of parametric excitation to result in an instability. For small amplitudes and by linearising, the stability of the periodic solution is given by Mathieu's equation:

d²u/dt² + (a + ε cos t) u = 0,

where u is some perturbation from the periodic solution. Here the term ε cos t acts as an 'energy' source and is said to parametrically excite the system. The Mathieu equation describes many other physical systems subject to a sinusoidal parametric excitation, such as an LC circuit where the capacitor plates move sinusoidally. Autoparametric resonance happens in a system with two coupled oscillators, such that the vibrations of one act as parametric resonance on the second. The zero point of the second oscillator becomes unstable, and thus it starts oscillating. Parametric amplifiers Introduction A parametric amplifier is implemented as a mixer. The mixer's gain shows up in the output as amplifier gain. The input weak signal is mixed with a strong local oscillator signal, and the resultant strong output is used in the ensuing receiver stages. Parametric amplifiers also operate by changing a parameter of the amplifier. Intuitively, this can be understood as follows, for a variable capacitor-based amplifier. Charge in a capacitor obeys Q = C·V, therefore the voltage across it is V = Q/C. Knowing the above, if a capacitor is charged until its voltage equals the sampled voltage of an incoming weak signal, and if the capacitor's capacitance is then reduced (say, by manually moving the plates further apart), then the voltage across the capacitor will increase. In this way, the voltage of the weak signal is amplified. If the capacitor is a varicap diode, then "moving the plates" can be done simply by applying a time-varying DC voltage to the varicap diode. This driving voltage usually comes from another oscillator—sometimes called a "pump". The resulting output signal contains frequencies that are the sum and difference of the input signal (f1) and the pump signal (f2): (f1 + f2) and (f1 − f2). A practical parametric oscillator needs the following connections: one for the "common" or "ground", one to feed the pump, one to retrieve the output, and maybe a fourth one for biasing. A parametric amplifier needs a fifth port to input the signal being amplified. Since a varactor diode has only two connections, it can only be a part of an LC network with four eigenvectors with nodes at the connections. This can be implemented as a transimpedance amplifier, a traveling-wave amplifier or by means of a circulator. Mathematical equation The parametric oscillator equation can be extended by adding an external driving force E(t):

d²x/dt² + β(t) dx/dt + ω²(t) x = E(t).

We assume that the damping is sufficiently strong that, in the absence of the driving force E(t), the amplitude of the parametric oscillations does not diverge, i.e., that the parametric growth rate λ found above does not exceed the damping rate, λ < β/2. In this situation, the parametric pumping acts to lower the effective damping in the system.
For illustration, let the damping be constant, β(t) = β, and assume that the external driving force is at the mean resonance frequency ωn, i.e., E(t) = E0 sin ωn t. The equation becomes

d²x/dt² + β dx/dt + ωn² [1 + f0 sin 2ωn t] x = E0 sin ωn t,

whose solution is approximately

x(t) ≈ (2E0 / (ωn (2β − f0 ωn))) cos ωn t.

As f0 approaches the threshold 2β/ωn, the amplitude diverges. When f0 > 2β/ωn, the system enters parametric resonance and the amplitude begins to grow exponentially, even in the absence of a driving force E(t). Advantages It is a highly sensitive, low-noise amplifier for ultra-high-frequency and microwave radio signals. Other relevant mathematical results If the parameters of any second-order linear differential equation are varied periodically, Floquet analysis shows that the solutions must vary either sinusoidally or exponentially. The equation above with periodically varying f(t) is an example of a Hill equation. If f(t) is a simple sinusoid, the equation is called a Mathieu equation. See also Harmonic oscillator Mathieu equation Optical parametric amplifier Optical parametric oscillator References Further reading Kühn L. (1914) Elektrotech. Z., 35, 816-819. Pungs L. DRGM Nr. 588 822 (24 October 1913); DRP Nr. 281440 (1913); Elektrotech. Z., 44, 78-81 (1923?); Proc. IRE, 49, 378 (1961). Elmer, Franz-Josef, "Parametric Resonance Pendulum Lab University of Basel". unibas.ch, July 20, 1998. Cooper, Jeffery, "Parametric Resonance in Wave Equations with a Time-Periodic Potential". SIAM Journal on Mathematical Analysis, Volume 31, Number 4, pp. 821–835. Society for Industrial and Applied Mathematics, 2000. "Driven Pendulum: Parametric Resonance". phys.cmu.edu (Demonstration of physical mechanics or classical mechanics. Resonance oscillations set up in a simple pendulum via periodically varying pendulum length.) Mumford, W. W., "Some notes on the history of parametric transducers". Proceedings of the IRE, Volume 98, Number 5, pp. 848–853. Institute of Electrical and Electronics Engineers, May 1960. 2009, Ferdinand Verhulst, Perturbation analysis of parametric resonance, Encyclopedia of Complexity and Systems Science, Springer. External links Tim's Autoparametric Resonance — a video by Tim Rowett showing how autoparametric resonance appears in a pendulum made with a spring. Electronic amplifiers Dynamical systems Electronic oscillators Ordinary differential equations
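To put toy numbers on the variable-capacitor argument in the amplifier section above (all values here are illustrative assumptions, not from the article): halving the capacitance at constant charge doubles the voltage, and the increase in stored energy Q²/2C is the work supplied by the pump:

```python
# Toy numbers for the variable-capacitor argument: halving C at constant
# charge Q doubles V = Q/C and doubles the stored energy Q^2/(2C);
# the energy difference is the work done by the pump.
Q = 1e-9                 # charge sampled from the weak signal, coulombs
C1, C2 = 100e-12, 50e-12 # capacitance before and after the pump acts, farads
V1, V2 = Q / C1, Q / C2
E1, E2 = Q**2 / (2 * C1), Q**2 / (2 * C2)
print(V1, V2)            # 10 V -> 20 V: a voltage gain of 2
print(E2 - E1)           # energy added by the pump, joules
```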
Parametric oscillator
[ "Physics", "Mathematics", "Technology" ]
3,436
[ "Electronic amplifiers", "Mechanics", "Amplifiers", "Dynamical systems" ]
4,902,725
https://en.wikipedia.org/wiki/Flora%20of%20North%20America
The Flora of North America North of Mexico (usually referred to as FNA) is a multivolume work describing the native plants and naturalized plants of North America, including the United States, Canada, St. Pierre and Miquelon, and Greenland. It includes bryophytes and vascular plants. All taxa are described and included in dichotomous keys, distributions of all species and infraspecific taxa are mapped, and about 20% of species are illustrated with line drawings prepared specifically for FNA. It is expected to fill 30 volumes when completed and will be the first work to treat all of the known flora north of Mexico; in 2015 it was expected that the series would conclude in 2017. Twenty-nine of the volumes have been published as of 2022. Soon after publication, the contents are made available online. FNA is a collaboration of about 1,000 authors, artists, reviewers, and editors from throughout the world. Reception The series has been praised for "the comprehensive treatments [that] allow botanists to examine taxonomic and geographical traits of genera across the North American continent, rather than being limited by keys developed for one's own state or region". Reviewing volume 3, Paula Wolfe found the series worth recommending, and praised it for high standards. References External links Florae (publication) Botany in North America Missouri Botanical Garden
Flora of North America
[ "Biology" ]
278
[ "Flora", "Florae (publication)" ]
4,903,351
https://en.wikipedia.org/wiki/Oil%20filter
An oil filter is a filter designed to remove contaminants from engine oil, transmission oil, lubricating oil, or hydraulic oil. Its chief use is in internal-combustion engines for motor vehicles (both on- and off-road), powered aircraft, railway locomotives, ships and boats, and static engines such as generators and pumps. Other vehicle hydraulic systems, such as those in automatic transmissions and power steering, are often equipped with an oil filter. Gas turbine engines, such as those on jet aircraft, also require the use of oil filters. Oil filters are used in many different types of hydraulic machinery. The oil industry itself employs filters for oil production, oil pumping, and oil recycling. Modern engine oil filters tend to be "full-flow" (inline) or "bypass". History Early automobile engines did not have oil filters, having only a rudimentary mesh sieve placed at the oil pump intake. Consequently, along with the generally low quality of oil available, very frequent oil changes were required. The Purolator oil filter was the first oil filter for the automobile; it revolutionized the filtration industry, and is still in production today. The Purolator was a bypass filter, whereby most of the oil was pumped from the oil sump directly to the engine's working parts, while a smaller proportion of the oil was sent through the filter via a second flow path, filtering the oil over time. Bypass and full-flow Full-flow A full-flow system will have a pump which sends pressurised oil through a filter to the engine bearings, after which the oil returns by gravity to the sump. In the case of a dry sump engine, the oil that reaches the sump is evacuated by a second pump to a remote oil tank. The function of the full-flow filter is to protect the engine from wear through abrasion. Bypass Modern bypass oil filter systems are secondary systems whereby a bleed from the main oil pump supplies oil to the bypass filter, the oil then passing not to the engine but returning to the sump or oil tank. The purpose of the bypass is to have a secondary filtration system to keep the oil in good condition, free of dirt, soot and water, providing much smaller particle retention than is practical for full flow filtration; the full-flow filter is still used to prevent any excessively large particles from causing substantial abrasion or acute blockage in the engine. Originally used on commercial and industrial diesel engines with large oil capacities, where the cost of oil analysis testing and extra filtration to extend oil change intervals makes economic sense, bypass oil filters are becoming more common in private consumer applications. (It is essential that the bypass does not compromise the pressurised oil feed within the full-flow system; one way to avoid such compromise is to have the bypass system be completely independent.) Pressure relief valves Most pressurized lubrication systems incorporate an overpressure relief valve to allow oil to bypass the filter if its flow restriction is excessive, to protect the engine from oil starvation. Filter bypass may occur if the filter is clogged or the oil is thickened by cold weather. The overpressure relief valve is frequently incorporated into the oil filter. Filters mounted such that oil tends to drain from them usually incorporate an anti-drainback valve to hold oil in the filter after the engine (or other lubrication system) is shut down.
This is done to avoid a delay in oil pressure buildup once the system is restarted; without an anti-drainback valve, pressurized oil would have to fill the filter before travelling onward to the engine's working parts. This situation can cause premature wear of moving parts due to an initial lack of oil. Types of oil filter Mechanical Mechanical designs employ an element made of bulk material (such as cotton waste) or pleated filter paper to entrap and sequester suspended contaminants. As material builds up on (or in) the filtration medium, oil flow is progressively restricted. This requires periodic replacement of the filter element (or the entire filter, if the element is not separately replaceable). Cartridge and spin-on Early engine oil filters were of cartridge (or replaceable element) construction, in which a permanent housing contains a replaceable filter element or cartridge. The housing is mounted either directly on the engine or remotely with supply and return pipes connecting it to the engine. In the mid-1950s, the spin-on oil filter design was introduced: a self-contained housing and element assembly which was to be unscrewed from its mount, discarded, and replaced with a new one. This made filter changes more convenient and potentially less messy, and quickly came to be the dominant type of oil filter installed by the world's automakers. Conversion kits were offered for vehicles originally equipped with cartridge-type filters. In the 1990s, European and Asian automakers in particular began to shift back in favor of replaceable-element filter construction, because it generates less waste with each filter change. American automakers have likewise begun to shift to replaceable-cartridge filters, and retrofit kits to convert from spin-on to cartridge-type filters are offered for popular applications. Commercially available automotive oil filters vary in their design, materials, and construction details. Filters made entirely from synthetic material (apart from the metal drain cylinders contained within) are far more durable than the traditional cardboard/cellulose/paper types that still predominate. These variables affect the efficacy, durability, and cost of the filter. Magnetic Magnetic filters use a permanent magnet or an electromagnet to capture ferromagnetic particles. An advantage of magnetic filtration is that maintaining the filter simply requires cleaning the particles from the surface of the magnet. Automatic transmissions in vehicles frequently have a magnet in the fluid pan to sequester magnetic particles and prolong the life of the media-type fluid filter. Some companies are manufacturing magnets that attach to the outside of an oil filter or magnetic drain plugs—first invented and offered for cars and motorcycles in the mid-1930s—to aid in capturing these metallic particles, though there is ongoing debate as to the effectiveness of such devices. Sedimentation A sedimentation or gravity bed filter allows contaminants heavier than oil to settle to the bottom of a container under the influence of gravity. Centrifugal A centrifuge oil cleaner is a rotary sedimentation device using centrifugal force rather than gravity to separate contaminants from the oil, in the same manner as any other centrifuge. Pressurized oil enters the center of the housing and passes into a drum rotor free to spin on a bearing and seal. The rotor has two jet nozzles arranged to direct a stream of oil at the inner housing to rotate the drum.
The oil then slides to the bottom of the housing wall, leaving particulate oil contaminants stuck to the housing walls. The housing must periodically be cleaned, or the particles will accumulate to such a thickness as to stop the drum rotating. In this condition, unfiltered oil will be recirculated. Advantages of the centrifuge are: (i) that the cleaned oil may separate from any water which, being heavier than oil, settles at the bottom and can be drained off (provided any water has not emulsified with the oil); and (ii) they are much less likely to become blocked than a conventional filter. If the oil pressure is insufficient to spin the centrifuge, it may instead be driven mechanically or electrically. Note: some spin-on filters are described as centrifugal but they are not true centrifuges; rather, the oil is directed in such a way that there is a centrifugal swirl that helps contaminants stick to the outside of the filter. High efficiency (HE) High efficiency oil filters are a type of bypass filter that are claimed to allow extended oil drain intervals. HE oil filters typically have pore sizes of 3 micrometres, which studies have shown reduce engine wear. Some fleets have been able to increase their drain intervals up to 5–10 times. Filter placement in an oil system Deciding how clean the oil needs to be is important, as cost increases rapidly with cleanliness. Having determined the optimum target cleanliness level for a contamination control programme, many engineers are then challenged by the process of optimizing the location of the filter. To ensure effective solid particle ingression balance, the engineer must consider various elements such as whether the filter will be for protection or for contamination control, ease of access for maintenance, and the performance of the unit being considered to meet the challenges of the target set. See also Air filter Fuel filter Impingement filter List of auto parts Oil-filter wrench References External links Oil filter cross reference Vehicle parts Oil filter
Oil filter
[ "Chemistry", "Technology", "Engineering" ]
1,813
[ "Chemical equipment", "Filters", "Filtration", "Vehicle parts", "Components" ]
4,904,656
https://en.wikipedia.org/wiki/Permeation
In physics and engineering, permeation (also called imbuing) is the penetration of a permeate (a fluid such as a liquid, gas, or vapor) through a solid. It is directly related to the concentration gradient of the permeate, a material's intrinsic permeability, and the material's mass diffusivity. Permeation is modeled by equations such as Fick's laws of diffusion, and can be measured using tools such as a minipermeameter. Description The process of permeation involves the diffusion of molecules, called the permeant, through a membrane or interface. Permeation works through diffusion; the permeant will move from high concentration to low concentration across the interface. A material can be semipermeable, with the presence of a semipermeable membrane. Only molecules or ions with certain properties will be able to diffuse across such a membrane. This is a very important mechanism in biology where fluids inside a blood vessel need to be regulated and controlled. Permeation can occur through most materials including metals, ceramics and polymers. However, the permeability of metals is much lower than that of ceramics and polymers due to their crystal structure and porosity. Permeation is something that must be considered carefully in many polymer applications, due to their high permeability. Permeability depends on the temperature of the interaction as well as the characteristics of both the polymer and the permeant component. Through the process of sorption, molecules of the permeant can be either absorbed or desorbed at the interface. The permeation of a material can be measured through numerous methods that quantify the permeability of a substance through a specific material. Permeability due to diffusion is measured in SI units of mol/(m·s·Pa), although Barrers are also commonly used. Permeability due to diffusion is not to be confused with Permeability (earth sciences) due to fluid flow in porous solids, measured in darcy. Related terms Permeant: The substance (species, ion, or molecule) permeating through the solid. Semipermeability: Property of a material to be permeable only for some substances and not for others. Permeation measurement: Method for the quantification of the permeability of a material for a specific substance. History Abbé Jean-Antoine Nollet (physicist, 1700–1770) Nollet tried to seal wine containers with a pig's bladder and stored them under water. After a while the bladder bulged outwards. He noticed the high pressure that discharged after he pierced the bladder. Curious, he did the experiment the other way round: he filled the container with water and stored it in wine. The result was a bulging inwards of the bladder. His notes about this experiment are the first scientific mention of permeation (later it would be called semipermeability). Thomas Graham (chemist, 1805–1869) Graham experimentally proved the dependency of gas diffusion on molecular weight, which is now known as Graham's law. Richard Barrer (1910–1996) Barrer developed the modern Barrer measurement technique, and first used scientific methods for measuring permeation rates. Applications Packaging: The permeability of the package (materials, seals, closures, etc.) needs to be matched with the sensitivity of the package contents and the specified shelf life. Some packages must have nearly hermetic seals while others can (and sometimes must) be selectively permeable. Knowledge about the exact permeation rates is therefore essential. Tires: Air pressure in tires should decrease as slowly as possible.
A good tire is one that allows the least amount of gas to escape. Because some permeation will occur over time, knowing the permeability of the tire material to the fill gas helps in designing the most efficient tires. Insulating material: Water vapour permeation is important for insulating materials, as well as for submarine cables, where it must be limited to protect the conductor from corrosion. Fuel cells: Automobiles are equipped with Polymer Electrolyte Membrane (PEM) fuel cells to convert hydrogen fuel and oxygen from the atmosphere into electricity. However, each of these cells produces only around 1.16 volts. In order to power a vehicle, multiple cells are arranged into a stack. A stack's power output depends on both the number and the size of the individual fuel cells. Thermoplastic and Thermosetting Piping: Pipes intended to transport water under high pressure can be considered to have failed when there is a detectable permeation of water through the pipe wall to the outer surface of the pipe. Medical uses: Permeation can also be seen in the medical field in drug delivery. Drug patches made of polymer material contain a chemical reservoir that is loaded beyond its solubility; the drug is then transferred to the body through contact. In order for the chemical to release itself into the body, it must permeate and diffuse through the polymer membrane, according to the concentration gradient. Because the reservoir is loaded beyond saturation, transport of the drug follows a burst-and-lag mechanism: there is a high transfer rate of the drug when the patch makes contact with the skin, but as time increases a concentration gradient is established and delivery of the drug settles to a constant rate. This is crucial in drug delivery and is used in cases such as the Ocusert system. The opposite case can also be found in the medical field. Because ampoules may contain highly sensitive pharmaceuticals for injection, it is crucial that the material used prevents any substance from entering the pharmaceutical product or evaporating from it. For this, ampoules are often made from glass and less frequently from synthetic materials. Technical uses: In the production of halogen lamps, the halogen gases have to be encapsulated very tightly. Aluminosilicate glass can be a suitable barrier for the gas encapsulation. The glass-to-metal transition at the electrode is critical, but it works because the thermal expansions of the glass body and the metal are matched. Permeation measurement The permeation of films and membranes can be measured with any gas or liquid. One method uses a central module which is separated by the test film: the testing gas is fed on one side of the cell and the permeated gas is carried to the detector by a sweep gas. The diagram on the right shows a testing cell for films, normally made from metals like stainless steel. The photo shows a testing cell for pipes made from glass, similar to a Liebig condenser. The testing medium (liquid or gas) is situated in the inner white pipe and the permeate is collected in the space between the pipe and the glass wall. It is transported by a sweep gas (connected to the upper and lower joints) to an analysing device. Permeation can also be measured through intermittent contact. This method involves taking a sample of the test chemical and placing it on the surface of the material whose permeability is being observed, while adding or removing specific amounts of the test chemical.
After a known amount of time, the material is analyzed to find the concentration of the test chemical present throughout its structure. Together with the amount of time the chemical was in contact with the material, this analysis allows one to determine the cumulative permeation of the test chemical. The following table gives examples of the calculated permeability coefficient of certain gases through a silicone membrane. *1 Barrer = 10⁻¹⁰ cm³(STP)·cm / (cm²·s·cmHg). Unless otherwise noted, permeabilities are measured and reported at 25 °C (RTP), not at STP. From W. L. Robb, "Thin Silicone Membranes – Their Permeation Properties and Some Applications", Annals of the New York Academy of Sciences, vol. 146, issue 1 (January 1968), pp. 119–137. Approximation using Fick's first law The flux or flow of mass of the permeate through the solid can be modeled by Fick's first law, J = −D (dc/dx). This equation can be modified to a very simple formula that can be used in basic problems to approximate permeation through a membrane: J = D·Δc/ℓ, where J is the diffusion flux, D is the diffusion coefficient or mass diffusivity, c is the concentration of the permeate, and ℓ is the thickness of the membrane. We can introduce into this equation the sorption equilibrium parameter S, the constant of proportionality between pressure p and concentration c; this relationship can be represented as c = S·p. The diffusion coefficient can be combined with the sorption equilibrium parameter to get the final form of the equation, J = P·Δp/ℓ, where P is the permeability of the membrane, the relationship being P = D·S. Solubility of a gas in a metal In practical applications when looking at gases permeating metals, there is a way to relate gas pressure to concentration. Many gases exist as diatomic molecules when in the gaseous phase, but when permeating metals they exist in their singular ionic form. Sieverts' law states that the solubility of a gas, in the form of a diatomic molecule, in metal is proportional to the square root of the partial pressure of the gas. The flux can be approximated in this case by J = D·Δc/ℓ, where we can introduce the reaction equilibrium constant K from the relationship c = K·√p. The diffusion coefficient can be combined with the reaction equilibrium constant to get the final form of the equation, J = P·(√p₁ − √p₂)/ℓ, where P is the permeability of the membrane, the relationship being P = D·K.
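The membrane approximation above lends itself to a quick numerical check. The sketch below is illustrative only: the permeability value, film thickness, and pressures are assumed numbers, not data from the table cited above.

```python
# Steady-state permeation flux through a membrane using J = P * dp / l,
# the simplified form of Fick's first law derived above.

BARRER = 1e-10  # cm^3(STP)·cm / (cm^2·s·cmHg)

def permeation_flux(permeability_barrer, p_feed_cmhg, p_permeate_cmhg, thickness_cm):
    """Return flux in cm^3(STP) per cm^2 per second."""
    p_membrane = permeability_barrer * BARRER
    return p_membrane * (p_feed_cmhg - p_permeate_cmhg) / thickness_cm

# Hypothetical case: a gas with an assumed permeability of 600 Barrer
# through a 0.01 cm silicone film, 76 cmHg (1 atm) feed, vacuum downstream.
print(f"{permeation_flux(600, 76.0, 0.0, 0.01):.2e} cm^3(STP)/(cm^2*s)")
```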
See also Moisture vapor transmission rate – measure of the passage of water vapor through a substance Oxygen transmission rate – measure of packaging permeability Hermetic seal – airtight seal Permeability (earth sciences) – measure of the ability of a porous material to allow fluids to pass through it References Further reading Yam, K. L., Encyclopedia of Packaging Technology, John Wiley & Sons, 2009. Massey, L. K., Permeability Properties of Plastics and Elastomers, Andrew Publishing, 2003. ASTM F1249, Standard Test Method for Water Vapor Transmission Rate Through Plastic Film and Sheeting Using a Modulated Infrared Sensor. ASTM E398, Standard Test Method for Water Vapor Transmission Rate of Sheet Materials Using Dynamic Relative Humidity Measurement. ASTM F2298, Standard Test Methods for Water Vapor Diffusion Resistance and Air Flow Resistance of Clothing Materials Using the Dynamic Moisture Permeation Cell. ASTM F2622, Standard Test Method for Oxygen Gas Transmission Rate Through Plastic Film and Sheeting Using Various Sensors. ASTM G1383, Standard Test Method for Permeation of Liquids and Gases through Protective Clothing Materials under Conditions of Intermittent Contact. Robb, W. L., "Thin silicone membranes – Their permeation properties and some applications", Annals of the New York Academy of Sciences, vol. 146, issue 1, pp. 119–137. Jones, David, Pharmaceutical Systems for Drug Delivery; Chien, Y. W., Novel Drug Delivery Systems, 2nd ed., New York: Marcel Dekker, Inc., 1993. Malykh, O. V., Golub, A. Yu., Teplyakov, V. V., "Polymeric membrane materials: New aspects of empirical approaches to prediction of gas permeability parameters in relation to permanent gases, linear lower hydrocarbons and some toxic gases", Advances in Colloid and Interface Science, Volume 165, Issues 1–2, 11 May 2011, Pages 89–99. CheFEM 3, equation-of-state-based FEM software for prediction of permeation through polymers and their composites. Physical quantities Packaging
Permeation
[ "Physics", "Mathematics" ]
2,398
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
4,905,213
https://en.wikipedia.org/wiki/Thomson%20%28unit%29
The thomson (symbol: Th) is a unit that has appeared infrequently in scientific literature relating to the field of mass spectrometry as a unit of mass-to-charge ratio. The unit was proposed by R. Graham Cooks and Alan L. Rockwood, who named it in honour of J. J. Thomson, who measured the mass-to-charge ratio of electrons and ions. Definition The thomson is defined as 1 Th = 1 Da/e ≈ 1.036 × 10⁻⁸ kg/C, where Da is the symbol for the unit dalton (also called the unified atomic mass unit, symbol u), and e is the elementary charge, which is the unit of electric charge in the system of atomic units. For example, the ion C7H72+ has a mass of 91 Da. Its charge number is +2, and hence its charge is 2e. The ion will be observed at 45.5 Th in a mass spectrum. The thomson allows for negative values for negatively charged ions. For example, the benzoate anion would be observed at −121 Th, since the charge is −e. Use The thomson has been used by some mass spectrometrists, for example Alexander Makarov—the inventor of the Orbitrap—in a scientific poster and a 2015 presentation. Other uses of the thomson include papers and (notably) one book. The journal Rapid Communications in Mass Spectrometry (in which the original article appeared) states that "the thomson (Th) may be used for such purposes as a unit of mass-to-charge ratio although it is not currently approved by IUPAP or IUPAC." Even so, the term has been called "controversial" by RCM's former Editor-in-Chief (in a review of the Hoffmann text cited above). The book Mass Spectrometry Desk Reference argues against the use of the thomson. However, the editor-in-chief of the Journal of the Mass Spectrometry Society of Japan has written an editorial in support of the thomson unit. The thomson is not an SI unit, nor has it been defined by IUPAC. Since 2013, the thomson has been deprecated by IUPAC (Definitions of Terms Relating to Mass Spectrometry). Since 2014, Rapid Communications in Mass Spectrometry has regarded the thomson as a "term that should be avoided in mass spectrometry publications". References Units of measurement Mass spectrometry
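The arithmetic in the worked examples above is easy to verify directly. The helper below is hypothetical, using the integer nominal masses quoted in the article:

```python
# m/z in thomsons is mass in daltons divided by the signed charge number,
# per the definition above.

def thomson(mass_da: float, charge_number: int) -> float:
    """Return m/z in Th for an ion of the given mass (Da) and signed charge."""
    return mass_da / charge_number

print(thomson(91.0, +2))   # C7H7 2+  -> 45.5 Th
print(thomson(121.0, -1))  # benzoate -> -121.0 Th
```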
Thomson (unit)
[ "Physics", "Chemistry", "Mathematics" ]
495
[ "Units of measurement", "Spectrum (physical sciences)", "Instrumental analysis", "Quantity", "Mass", "Mass spectrometry", "Matter" ]
4,905,420
https://en.wikipedia.org/wiki/Supergranulation
In solar physics and observation, supergranulation is a pattern of convection cells in the Sun's photosphere. The individual convection cells are typically referred to as supergranules. The pattern was discovered in the 1950s by A. B. Hart using Doppler velocity measurements showing horizontal flows on the photosphere (flow speed about 300 to 500 m/s, a tenth of that in the smaller granules). Later work in the 1960s by Leighton, Noyes and Simon established a typical size of about 30000 km for supergranules, with a lifetime of about 24 hours. Origin Supergranulation has long been interpreted as a specific convection scale, but its origin is not precisely known. Although the presence of granules in the solar photosphere is a well-documented phenomenon, there is still much debate on the true nature, or even the existence, of higher-order granulation patterns. Some authors suggest the existence of three distinct scales of organization: granulation (with typical diameters of 150–2500 km), mesogranulation (5000–10000 km) and supergranulation (over 20000 km). Granules are typically considered to be signs of convective cells forming a hierarchic structure: supergranules would thus be fragmented in their uppermost layers into smaller mesogranules, which in turn would split into even smaller granules at their surface. The solar material would flow downward in dark "lanes" separating granules, with the divisions between supergranules being the biggest concentrations of cold gas, analogous to rivers connecting smaller tributaries. It should, however, be stressed that this picture is highly speculative and might turn out to be false in the light of future discoveries. Recent studies show some evidence that mesogranulation was a ghost feature caused by averaging procedures. See also Dopplergram References External links a SOHO/MDI Dopplergram showing supergranular speed pattern NASA: The Sun Does The Wave Information at Nature.com Michel Rieutord and François Rincon, "The Sun's Supergranulation", Living Rev. Solar Phys. 7, (2010), 2. online article (cited on June 15, 2010). Solar phenomena
Supergranulation
[ "Physics" ]
462
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
4,906,966
https://en.wikipedia.org/wiki/British%20Standard%20Pipe
British Standard Pipe (BSP) is a set of technical standards for screw threads that has been adopted internationally for interconnecting and sealing pipes and fittings by mating an external (male) thread with an internal (female) thread. It has been adopted as standard in plumbing and pipe fitting, except in North America, where NPT and related threads are used. Types Two types of threads are distinguished: Parallel (straight) threads, British Standard Pipe Parallel thread (BSPP; originally also known as British Standard Pipe Fitting thread/BSPF and British Standard Pipe Mechanical thread/BSPM), which have a constant diameter; denoted by the letter G. Taper threads, British Standard Pipe Taper thread (BSPT), whose diameter increases or decreases along the length of the thread; denoted by the letter R. These can be combined into two types of joints: Jointing threads These are pipe threads where pressure-tightness is made through the mating of two threads together. They always use a taper male thread, but can have either parallel or taper female threads. (In Europe, taper female pipe threads are not commonly used.) Longscrew threads These are parallel pipe threads used where a pressure-tight joint is achieved by the compression of a soft material (such as an o-ring seal or a washer) between the end face of the male thread and a socket or nipple face, with the tightening of a backnut. Thread form The thread form follows the British Standard Whitworth standard: a symmetrical V-thread in which the angle between the flanks is 55° (measured in an axial plane); one-sixth of this sharp V is truncated at the top and the bottom; the threads are rounded equally at crests and roots by circular arcs ending tangentially with the flanks, where r ≈ 0.1373P; the theoretical depth of the thread is therefore 0.6403 times the nominal pitch, h ≈ 0.6403P. Pipe thread sizes At least 41 thread sizes have been defined, ranging from 1⁄16 to 18, although of these only 15 are included in ISO 7 and 24 in ISO 228. The size number was originally based on the inner diameter (measured in inches) of a steel tube for which the thread was intended, but contemporary pipes tend to use thinner walls to save material, and thus have an inner diameter larger than this nominal size. In the modern standard metric version, it is simply a size number, where the listed diameter is the major outer diameter of the external thread. For a taper thread, it is the diameter at the "gauge length" (plus/minus one thread pitch) from the small end of the thread. The taper is 1:16, meaning that for each 16 units of measurement increase in the distance from the end, the diameter increases by 1 unit of measurement. These standard pipe threads are formally referred to by the following sequence of blocks: the words, Pipe thread, the document number of the standard (e.g., ISO 7 or EN 10226) the symbol for the pipe thread type: G, external and internal parallel (ISO 228) R, external taper (ISO 7) Rp, internal parallel (ISO 7/1) Rc, internal taper (ISO 7) Rs, external parallel the thread size Threads are normally right-hand. For left-hand threads, the letters, LH, are appended. Example: Pipe thread EN 10226 Rp The terminology for the use of G and R originated from Germany (G for gas, as it was originally designed for use on gas pipes; R for Rohr, the German word for pipe.)
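The thread-form proportions and the 1:16 taper above translate directly into a few one-line formulas. The sketch below is illustrative; the pitch and gauge diameter in the example are assumed values, not figures from the standards' tables.

```python
# BSP thread-form relations from the Whitworth 55° form described above,
# plus the 1:16 taper rule for R threads. All values derive from the pitch P.

def thread_depth(pitch_mm: float) -> float:
    """Theoretical depth of the truncated, rounded thread: h ≈ 0.6403·P."""
    return 0.6403 * pitch_mm

def crest_radius(pitch_mm: float) -> float:
    """Crest/root rounding radius: r ≈ 0.1373·P."""
    return 0.1373 * pitch_mm

def taper_diameter(gauge_diameter_mm: float, distance_mm: float) -> float:
    """Diameter of a taper (R) thread at a signed axial distance from the
    gauge plane; the 1:16 taper adds 1 unit of diameter per 16 of length."""
    return gauge_diameter_mm + distance_mm / 16.0

# Example with assumed numbers: a 14 TPI pitch and a 30 mm gauge diameter.
P = 25.4 / 14  # mm
print(round(thread_depth(P), 3), round(crest_radius(P), 3))  # 1.162 0.249
print(taper_diameter(30.0, 8.0))  # 30.5 mm, taken 8 mm toward the large end
```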
Pipe and fastener dimensions ISO 7 (Pressure Tight threads) The standard ISO 7 - Pipe threads where pressure-tight joints are made on the threads consists of the following parts: ISO 7-1:1994 Dimensions, tolerances and designation ISO 7-2:2000 Verification by means of limit gauges ISO 228 (Non Pressure Tight Threads) The standard ISO 228 - Pipe threads where pressure-tight joints are not made on the threads consists of the following parts: ISO 228-1:2000 Dimensions, tolerances and designation ISO 228-2:1987 Verification by means of limit gauges See also AN thread British standard brass thread British Standard Whitworth Garden hose thread National pipe thread Panzergewinde Thread angle Threaded pipe References External links British Standard Pipe Parallel Thread Dimensions British Standard Pipe Taper Thread Dimensions BSP Thread Charts and Diagrams, showing dimensions of tubing and fittings ISO 7-1:1994 ISO 7-2:2000 ISO 228-1:2000 ISO 228-2:1987 Parallel pipe threads G Parallel pipe threads PF Thread standards Piping Plumbing British Standards
British Standard Pipe
[ "Chemistry", "Engineering" ]
953
[ "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Mechanical engineering", "Piping" ]
4,907,086
https://en.wikipedia.org/wiki/Hyper-encryption
Hyper-encryption is a form of encryption invented by Michael O. Rabin which uses a high-bandwidth source of public random bits, together with a secret key that is shared by only the sender and recipient(s) of the message. It uses the assumptions of Ueli Maurer's bounded-storage model as the basis of its secrecy. Although everyone can see the data, decryption by adversaries without the secret key is still not feasible, because of the space limitations of storing enough data to mount an attack against the system. Unlike almost all other cryptosystems except the one-time pad, hyper-encryption can be proved to be information-theoretically secure, provided the storage bound cannot be surpassed. Moreover, if the necessary public information cannot be stored at the time of transmission, the plaintext can be shown to be impossible to recover, regardless of the computational capacity available to an adversary in the future, even if they have access to the secret key at that future time. A highly energy-efficient implementation of a hyper-encryption chip was demonstrated by Krishna Palem et al. using the Probabilistic CMOS or PCMOS technology and was shown to be ~205 times more efficient in terms of Energy-Performance-Product. See also Perfect forward secrecy Randomness extractor References Further reading Y. Z. Ding and M. O. Rabin. Hyper-encryption and everlasting security. In 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS), volume 2285 of Lecture Notes in Computer Science, pp. 1–26. Springer-Verlag, 2002. Jason K. Juang, Practical Implementation and Analysis of Hyper-Encryption. Masters dissertation, MIT Department of Electrical Engineering and Computer Science, 2009-05-22. External links , video of a lecture by Professor Michael O. Rabin. Cryptography Information theory
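A toy sketch of the bounded-storage idea described above. This is emphatically not Rabin's actual protocol, and every name in it is hypothetical: a short shared key selects positions in a huge public random stream, and the selected bytes serve as a pad. An adversary who could not store the stream at transmission time has nothing to consult later, even given the key.

```python
# Toy bounded-storage illustration (NOT the published hyper-encryption
# scheme): both parties derive pad bytes by using a shared key to pick
# positions in a large public random stream, then XOR with the message.

import hashlib
import os

def pad_from_stream(key: bytes, stream: bytes, n: int) -> bytes:
    """Derive n pad bytes by using the key to pick positions in the stream."""
    pad = bytearray()
    for i in range(n):
        digest = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        index = int.from_bytes(digest[:8], "big") % len(stream)
        pad.append(stream[index])
    return bytes(pad)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

public_stream = os.urandom(1 << 20)  # stand-in for the broadcast random source
key = os.urandom(16)                 # secret shared by sender and receiver
message = b"attack at dawn"
ciphertext = xor(message, pad_from_stream(key, public_stream, len(message)))
assert xor(ciphertext, pad_from_stream(key, public_stream, len(message))) == message
```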
Hyper-encryption
[ "Mathematics", "Technology", "Engineering" ]
381
[ "Cybersecurity engineering", "Telecommunications engineering", "Cryptography", "Applied mathematics", "Computer science", "Information theory" ]
4,907,619
https://en.wikipedia.org/wiki/Truncated%20regression%20model
Truncated regression models are a class of models in which the sample has been truncated for certain ranges of the dependent variable. That means observations with values in the dependent variable below or above certain thresholds are systematically excluded from the sample. Therefore, whole observations are missing, so that neither the dependent nor the independent variable is known. This is in contrast to censored regression models, where only the value of the dependent variable is clustered at a lower threshold, an upper threshold, or both, while the values of the independent variables are available. Sample truncation is a pervasive issue in the quantitative social sciences when using observational data, and consequently the development of suitable estimation techniques has long been of interest in econometrics and related disciplines. In the 1970s, James Heckman noted the similarity between truncated and otherwise non-randomly selected samples, and developed the Heckman correction. Estimation of truncated regression models is usually done via the parametric maximum likelihood method. More recently, various semi-parametric and non-parametric generalisations have been proposed in the literature, e.g., based on the local least squares approach or the local maximum likelihood approach, which are kernel-based methods. See also Censored regression model Sampling bias Truncated distribution References Further reading Actuarial science Single-equation methods (econometrics) Regression models Mathematical and quantitative methods (economics)
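A minimal sketch of the parametric maximum-likelihood approach mentioned above, on simulated data with left truncation; the data-generating values and the truncation point are assumptions chosen for illustration, not taken from any study cited here.

```python
# ML estimation of a left-truncated linear model: y = x'b + e, e ~ N(0, s^2),
# with only observations y > a entering the sample. The conditional density
# of an observed y is phi((y-mu)/s)/s divided by P(Y > a).

import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
a = 0.0                                   # truncation point (assumed)
n, beta_true, sigma_true = 5000, np.array([1.0, 2.0]), 1.0
x = np.column_stack([np.ones(n), rng.normal(size=n)])
y = x @ beta_true + sigma_true * rng.normal(size=n)
keep = y > a                              # truncation drops whole observations
x, y = x[keep], y[keep]

def neg_loglik(theta):
    beta, sigma = theta[:-1], np.exp(theta[-1])
    mu = x @ beta
    ll = stats.norm.logpdf(y, mu, sigma) - stats.norm.logsf(a, mu, sigma)
    return -ll.sum()

res = optimize.minimize(neg_loglik, np.zeros(3), method="BFGS")
print(res.x[:2], np.exp(res.x[2]))        # close to [1, 2] and 1
```

Plain least squares on the truncated sample would be biased toward the retained (high) values; the truncation-corrected likelihood recovers the underlying parameters.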
Truncated regression model
[ "Mathematics" ]
274
[ "Applied mathematics", "Actuarial science" ]
4,909,026
https://en.wikipedia.org/wiki/Ohio%20River%20Bridges%20Project
The Ohio River Bridges Project (ORBP) was a 2002–2016 transportation project in the Louisville metropolitan area primarily involving the construction of two Interstate highway bridges across the Ohio River and the reconstruction of the Kennedy Interchange (locally known as "Spaghetti Junction") near downtown Louisville. The Abraham Lincoln Bridge, an urban span carrying northbound traffic on I-65 from downtown Louisville to Jeffersonville, Indiana, opened December 2015; it is slightly upstream from the John F. Kennedy Memorial Bridge that had been completed in 1963 and was redecked for this project to handle southbound traffic. A suburban span, the Lewis and Clark Bridge (called the East End Bridge during planning), opened in December 2016 and connects the Indiana and Kentucky segments of I-265 between Prospect, Kentucky (far eastern Jefferson County), and Utica, Indiana. Additionally completed were reconstructed ramps on I-65 between Muhammad Ali Boulevard and downtown Louisville, as well as a new underground tunnel and freeway extensions to complete connections to the Lewis and Clark Bridge. On July 26, 2002, the two governors of Kentucky and Indiana announced that the East End Bridge would be constructed, along with a new I-65 downtown span and a reconstructed Kennedy Interchange, where three interstates connect. The cost of the three projects was to total approximately $2.5 billion, and would be the largest transportation project ever constructed between the two states. An estimated 132 residents and 80 businesses were to be displaced. The Louisville–Southern Indiana Bridge Authority (LSIBA), a 14-member commission (seven members from Kentucky and seven from Indiana) charged with developing a financial plan and establishing funding mechanisms for construction, was established in October 2009. The LSIBA oversaw construction of the project, and continues to operate and maintain the bridges and collect tolls. Construction began in 2014, with the entire project being completed in late December 2016. Tolling on the bridges is expected to continue through at least 2053. Lewis and Clark Bridge The result of many community discussions for over 30 years, the Lewis and Clark Bridge (known as the East End Bridge from its conception until completion of construction) is part of a new 6.5 mile (10.5 km) highway that connects the formerly disjoint sections of I-265 in Indiana and Kentucky. With the new section complete, I-265 forms a 3/4 beltway around the Louisville metropolitan area. Design A-15 was chosen over six alternatives for the I-265 connection, which includes the Lewis and Clark Bridge Bridge. A tunnel for the new highway was constructed under the historic Drumanard Estate in Kentucky because the property is listed on the National Register of Historic Places. The interstate reappears from the tunnel near the Shadow Wood subdivision before crossing Transylvania Beach and the Ohio River. The highway passes north of Utica, Indiana, near the old Indiana Army Ammunition Plant. Construction of an exploratory tunnel under the historic east end property was to begin in summer 2007, but bids were 39% more than the state had expected. Construction of the exploratory tunnel finally started in April 2011. The design is the result of the $22.1 million, four-year Ohio River Bridges Study, which found that solving the region's traffic congestion would require the construction of two new bridges across the Ohio River and reconstruction of the Kennedy Interchange in downtown Louisville. 
Limited land acquisition began in 2004, with the number of homes taken by eminent domain expected to be higher than earlier estimates because of development occurring in the route path. 109 residences, most in Clark County, Indiana, were displaced, the majority of which had been constructed in the year before the route for the I-265 extension was finalized. Half of the Shadow Wood subdivision and two condominium buildings at Harbor of Harrods Creek in Jefferson County, Kentucky, were razed. The only new interchange along the 6.5-mile (10.5 km) eastern route is in Indiana at Salem Road. That full interchange provides access to the Clark Maritime Center and the old Indiana Army Ammunition Plant, a site that has been undergoing redevelopment as the River Ridge Commerce Center. The bridge includes accommodations for pedestrians and bicyclists. Former Indiana Governor Frank O'Bannon said he could not wait for construction to begin, adding, to applause: "We'll finally be able to take down that sign at the end of Interstate 265 near the Clark Maritime Center that says 'No Bridge to Kentucky.'" In September 2005, the Kentucky Transportation Cabinet released plans to reconstruct the U.S. Highway 42 interchange and rebuild the "super-two" roadway from I-71 north to the interchange. The super-two roadway already had a right of way wide enough for a six-lane freeway, although at the time only two lanes' worth of space was being used. The incomplete US 42 interchange had been constructed in the early 1960s with the original construction of Interstate 265. The reconstruction of the northern two miles (3 km) included the widening of the super-two alignment to six lanes, the rebuilding and widening of the ramps at US 42, the installation of two traffic signals at the base of the ramps, and stub roadways that would eventually lead into the tunnel under the Drumanard Estate to the immediate north of the interchange. On July 19, 2006, the final design alternatives for the East End Bridge were announced. The three designs chosen included a cable-stayed bridge with two diamond-shaped towers and the cables reaching to the outside; a cable-stayed two-tower bridge with the towers in the center of the bridge deck and cables reaching to the outside; and a cable-stayed bridge with two center towers and the cables extending to the center of the deck. It was also announced that the new bridge would cost $221 million and feature three northbound and three southbound lanes. In 2011, this was scaled back in order to save money by narrowing the bridge deck configuration to have only two lanes in each direction (but with the future ability to re-stripe for three by narrowing shoulders) and by slightly narrowing the pedestrian/bicycle lane, resulting in a total reduction in overall deck width. The bridge opened to the public on December 18, 2016. Tolling began on December 30, 2016. Abraham Lincoln Bridge The Abraham Lincoln Bridge, completed in 2015, runs parallel to, and slightly upstream of, the John F. Kennedy Memorial Bridge and now carries six lanes of northbound I-65 traffic. Pedestrian and bicycle lanes were in the original plans, but were removed. The existing I-65 Kennedy Memorial Bridge, completed in 1963, was renovated for six lanes of southbound traffic. The Lincoln Bridge opened for northbound traffic only on December 6, 2015, with southbound traffic being rerouted onto it later that month as reconstruction of the Kennedy Bridge began.
The Lincoln began carrying northbound traffic only on October 10, 2016, when the Kennedy reopened for southbound I-65 through traffic; the Kennedy itself fully reopened on November 14, 2016. A Structured Public Involvement protocol developed by K. Bailey and T. Grossardt was used to elicit public preferences for the design of the structure. From spring 2005 to summer 2006, several hundred citizens attended a series of public meetings in Louisville, Kentucky, and Jeffersonville, Indiana, and evaluated a range of bridge design options using 3D visualizations. This public involvement process focused on designs that the public felt were more suitable, as shown by their polling scores. The SPI public involvement process itself was evaluated by anonymous, real-time citizen polling at the open public meetings. On July 19, 2006, the final design alternatives for the bridge were announced. The three designs included a three-span arch, a cable-stayed design with three towers, and a cable-stayed type with a single A-shaped support tower. It was also announced that the projected cost for the bridge would be $203 million. The new structure is the fourth bridge in downtown Louisville, joining the John F. Kennedy Memorial Bridge, erected between spring 1961 and late 1963 at a cost of $10 million; the four-lane George Rogers Clark Memorial Bridge, constructed from June 1928 to October 31, 1929; and the Big Four Bridge, which operated as a railroad bridge from 1895 to 1969 and reopened as a pedestrian bridge in May 2014. 2008 report In February 2008, the Kentucky Transportation Cabinet released a study saying that tolls would be a possible part of the new bridges, because there were insufficient federal funds for the $4.1 billion project. The tolling would likely be electronic, without traditional tollbooths, similar to SunPass in Florida. The possibility of tolls was not met with a warm reception; Jeffersonville's city council quickly passed a resolution urging state and federal officials to find other ways to fund the bridges project. 2010 financial plan The LSIBA issued the updated financial plan for the Ohio River Bridges project on December 16, 2010. The plan envisioned roughly half of the project's costs being financed through $1.00 tolls on the proposed I-65 (northbound) and I-265 crossings and the existing I-65 (southbound) and I-64 Ohio River crossings in the Louisville area. While the financial plan envisioned construction beginning in the summer of 2012, the plan still required approval from the Federal Highway Administration and Congress before work could begin, because the existing I-65 and I-64 bridges were built with federal interstate highway funds. The Kentucky Public Transportation Infrastructure Authority officially approved the Commonwealth's joining of the E-ZPass consortium across 15 states and the Canadian province of Ontario on July 29, 2015. Users have the choice of purchasing a traditional E-ZPass transponder for use throughout E-ZPass states, or a decal applied to the windshield for use mainly by local commuters, while occasional travelers can choose to pay their toll by mail via license plate recognition notices. 2011 new issues On September 9, 2011, Kentucky and Indiana officials announced the closure of the Sherman Minton Bridge. Cracks in bridge support beams found during an inspection on that day led to the bridge closure, which transportation officials indicated would last for an undetermined length of time.
The bridge is a major connection between Louisville and Southern Indiana and is Interstate 64's pathway between the states. The bridge reopened shortly before midnight on February 17, 2012, almost two weeks ahead of a deadline imposed by both states for completion of repairs. Criticism and alternatives Like other public works projects, this one drew criticism and alternative proposals. Criticism largely centered on land acquisition and routing issues, as well as concerns that the Butchertown neighborhood would lose a significant portion of its historical infrastructure with its absorption into the reconfigured Kennedy Interchange. A notable alternative to a portion of the project plan, 8664, called for I-64 to be rerouted around downtown using I-265 and the new East End Bridge so that I-64 in downtown could be deconstructed, making way for downtown park and business expansion in its place. One notable critic was the non-profit group River Fields. The safety and cost-effectiveness of the East End Tunnel under the Drumanard Estate, a 1920s-era property on the National Register of Historic Places, were questioned. It would be the second-longest automobile tunnel in Kentucky, after the Cumberland Gap Tunnel, and the longest allowing Hazmat-containing vehicles to pass through unannounced and without escort. The chief of the Harrods Creek, Kentucky, fire department, which would be the first responder to any accident, expressed concern that the proposed tunnel would be considerably more dangerous to travel through and would have fewer safety precautions. See also List of crossings of the Ohio River List of parkways and named highways in Kentucky; nine parkways were formerly tolled under the Turnpike Authority of Kentucky Contemporary Louisville area projects City of Parks KFC Yum! Center References Further reading Downtown Interstate 65 Bridge at Bridges & Tunnels East End Interstate 265 Bridge at Bridges & Tunnels External links The Ohio River Bridges Project (archived) "Road to Ruin: Interstate 265 Ohio River Bridge", taxpayer.net (archived) The East End Tunnel River Ridge Commerce Center 2000s in Louisville, Kentucky 2002 establishments in Indiana 2002 establishments in Kentucky 2010s in Louisville, Kentucky 2016 disestablishments in Indiana 2016 disestablishments in Kentucky Bridges completed in the 2010s Bridges over the Ohio River Interstate 64 Interstate 65 Interstate 71 Projects disestablished in 2016 Projects established in 2002 Road interchanges in the United States Road tunnels in the United States Transport controversies Transport infrastructure completed in 2016 Transportation in Clark County, Indiana Transportation in Louisville, Kentucky Tunnels completed in 2016 Tunnels in Kentucky
Ohio River Bridges Project
[ "Physics" ]
2,528
[ "Physical systems", "Transport", "Transport controversies" ]
6,413,844
https://en.wikipedia.org/wiki/Underwater%20explosion
An underwater explosion (also known as an UNDEX) is a chemical or nuclear explosion that occurs under the surface of a body of water. While useful in anti-ship and submarine warfare, underwater bombs are not as effective against coastal facilities. Properties of water Underwater explosions differ from in-air explosions due to the properties of water: Mass and incompressibility (all explosions) – water has a much higher density than air, which makes water harder to move (higher inertia). It is also relatively hard to compress (increase density) when under pressure in a low range (up to about 100 atmospheres). These two together make water an excellent conductor of shock waves from an explosion. Effect of neutron exposure on salt water (nuclear explosions only) – most underwater blast scenarios happen in seawater, not fresh or pure water. The water itself is not much affected by neutrons but salt is strongly affected. When exposed to neutron radiation during the microsecond of active detonation of a nuclear pit, water itself does not typically "activate", or become radioactive. The two elements in water, hydrogen and oxygen, can absorb an extra neutron, becoming deuterium and oxygen-17 respectively, both of which are stable isotopes. Even oxygen-18 is stable. Radioactive atoms can result if a hydrogen atom absorbs two neutrons, an oxygen atom absorbs three neutrons, or oxygen-16 undergoes a high-energy neutron (n-p) reaction to produce a short-lived nitrogen-16. In any typical scenario, the probability of such multiple captures in significant numbers in the short time of active nuclear reactions around a bomb is very low. Salt in seawater readily absorbs neutrons into both the sodium-23 and chlorine-35 atoms, which change to radioactive isotopes. Sodium-24 has a half-life of about 15 hours, while that of chlorine-36 (which has a lower activation cross-section) is 300,000 years. The sodium is the most dangerous contaminant after the explosion because its short half-life makes it intensely radioactive. These are generally the main radioactive contaminants in an underwater blast; others are the usual blend of irradiated minerals, coral, unused nuclear fuel, and bomb case components present in surface-blast nuclear fallout, carried in suspension or dissolved in the water. Distillation or evaporation of water (clouds, humidity, and precipitation) removes the radioactive contamination, leaving the radioactive salts behind. Effects Effects of an underwater explosion depend on several things, including distance from the explosion, the energy of the explosion, the depth of the explosion, and the depth of the water. Underwater explosions are categorized by the depth of the explosion. Shallow underwater explosions are those where a crater formed at the water's surface is large in comparison with the depth of the explosion. Deep underwater explosions are those where the crater is small in comparison with the depth of the explosion, or nonexistent. The overall effect of an underwater explosion depends on depth, the size and nature of the explosive charge, and the presence, composition and distance of reflecting surfaces such as the seabed, surface, thermoclines, etc. This phenomenon has been extensively used in antiship warhead design, since an underwater explosion (particularly one underneath a hull) can produce greater damage than an above-surface one of the same explosive size.
Initial damage to a target will be caused by the first shockwave; this damage will be amplified by the subsequent physical movement of water and by the repeated secondary shockwaves or bubble pulse. Additionally, charge detonation away from the target can result in damage over a larger hull area. Underwater nuclear tests close to the surface can disperse radioactive water and steam over a large area, with severe effects on marine life, nearby infrastructures and humans. The detonation of nuclear weapons underwater was banned by the 1963 Partial Nuclear Test Ban Treaty and it is also prohibited under the Comprehensive Nuclear-Test-Ban Treaty of 1996. Shallow underwater explosion The Baker nuclear test at Bikini Atoll in July 1946 was a shallow underwater explosion, part of Operation Crossroads. A 20 kiloton warhead was detonated in a lagoon which was approximately deep. The first effect was illumination of the sea from the underwater fireball. A rapidly expanding gas bubble created a shock wave that caused an expanding ring of apparently dark water at the surface, called the slick, followed by an expanding ring of apparently white water, called the crack. A mound of water and spray, called the spray dome, formed at the water's surface which became more columnar as it rose. When the rising gas bubble broke the surface, it created a shock wave in the air as well. Water vapor in the air condensed as a result of Prandtl–Meyer expansion fans decreasing the air pressure, density, and temperature below the dew point; making a spherical cloud that marked the location of the shock wave. Water filling the cavity formed by the bubble caused a hollow column of water, called the chimney or plume, to rise in the air and break through the top of the cloud. A series of ocean surface waves moved outward from the center. The first wave was about high at from the center. Other waves followed, and at further distances some of these were higher than the first wave. For example, at from the center, the ninth wave was the highest at . Gravity caused the column to fall to the surface and caused a cloud of mist to move outward rapidly from the base of the column, called the base surge. The ultimate size of the base surge was in diameter and high. The base surge rose from the surface and merged with other products of the explosion, to form clouds which produced moderate to heavy rainfall for nearly one hour. Deep underwater explosion An example of a deep underwater explosion is the Wahoo test, which was carried out in 1958 as part of Operation Hardtack I. A 9 kt Mk-7 was detonated at a depth of in deep water. There was little evidence of a fireball. The spray dome rose to a height of . Gas from the bubble broke through the spray dome to form jets which shot out in all directions and reached heights of up to . The base surge at its maximum size was in diameter and high. The heights of surface waves generated by deep underwater explosions are greater because more energy is delivered to the water. During the Cold War, underwater explosions were thought to operate under the same principles as tsunamis, potentially increasing dramatically in height as they move over shallow water, and flooding the land beyond the shoreline. Later research and analysis suggested that water waves generated by explosions were different from those generated by tsunamis and landslides. Méhauté et al. 
conclude in their 1996 overview Water Waves Generated by Underwater Explosion that the surface waves from even a very large offshore undersea explosion would expend most of their energy on the continental shelf, resulting in coastal flooding no worse than that from a bad storm. The Operation Wigwam test in 1955 occurred at a depth of , the deepest detonation of any nuclear device. Deep nuclear explosion Unless it breaks the water surface while still a hot gas bubble, an underwater nuclear explosion leaves no trace at the surface but hot, radioactive water rising from below. This is always the case with explosions deeper than about . About one second after such an explosion, the hot gas bubble collapses because: The water pressure is enormous below . The expansion reduces gas pressure, which decreases temperature. Rayleigh–Taylor instability at the gas/water boundary causes "fingers" of water to extend into the bubble, increasing the boundary surface area. Water is nearly incompressible. Vast amounts of energy are absorbed by phase change (water becomes steam at the fireball boundary). Expansion quickly becomes unsustainable because the amount of water pushed outward increases with the cube of the blast-bubble radius. Since water is not readily compressible, moving this much of it out of the way so quickly absorbs a massive amount of energy—all of which comes from the pressure inside the expanding bubble. Water pressure outside the bubble soon causes it to collapse back into a small sphere and rebound, expanding again. This is repeated several times, but each rebound contains only about 40% of the energy of the previous cycle. At the maximum diameter of the first oscillation, a very large nuclear bomb exploded in very deep water creates a bubble about a half-mile (800 m) wide in about one second and then contracts, which also takes about a second. Blast bubbles from deep nuclear explosions have slightly longer oscillations than shallow ones. They stop oscillating and become mere hot water in about six seconds. This happens sooner in nuclear blasts than bubbles from conventional explosives. The water pressure of a deep explosion prevents any bubbles from surviving to float up to the surface. The drastic 60% loss of energy between oscillation cycles is caused in part by the extreme force of a nuclear explosion pushing the bubble wall outward supersonically (faster than the speed of sound in saltwater). This causes Rayleigh–Taylor instability. That is, the smooth water wall touching the blast face becomes turbulent and fractal, with fingers and branches of cold ocean water extending into the bubble. That cold water cools the hot gas inside and causes it to condense. The bubble becomes less of a sphere and looks more like the Crab Nebula—the deviation of which from a smooth surface is also due to Rayleigh–Taylor instability as ejected stellar material pushes through the interstellar medium. As might be expected, large, shallow explosions expand faster than deep, small ones. Despite being in direct contact with a nuclear explosion fireball, the water in the expanding bubble wall does not boil; the pressure inside the bubble exceeds (by far) the vapor pressure of water. The water touching the blast can only boil during bubble contraction. This boiling is like evaporation, cooling the bubble wall, and is another reason that an oscillating blast bubble loses most of the energy it had in the previous cycle. 
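The roughly 40% energy retention per rebound quoted above implies a very rapid die-off, easy to see numerically:

```python
# Each rebound of the blast bubble keeps only ~40% of the previous
# cycle's energy, per the passage above. How fast the oscillation dies:

e = 1.0                          # first-cycle energy, normalized
for rebound in range(1, 6):
    e *= 0.40                    # ~60% lost per oscillation
    print(f"after rebound {rebound}: {e:.1%} of the initial energy remains")

# By the third rebound less than 7% remains, consistent with the bubble
# settling into "mere hot water" after only a few oscillations.
```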
During these hot gas oscillations, the bubble continually rises for the same reason a mushroom cloud does: it is less dense than the surrounding water. This causes the blast bubble never to be perfectly spherical. Instead, the bottom of the bubble is flatter, and during contraction it even tends to "reach up" toward the blast center. In the last expansion cycle, the bottom of the bubble touches the top before the sides have fully collapsed, and the bubble becomes a torus in its last second of life. About six seconds after detonation, all that remains of a large, deep nuclear explosion is a column of hot water rising and cooling in the near-freezing ocean. List of underwater nuclear tests Relatively few underwater nuclear tests were performed before they were banned by the Partial Test Ban Treaty. They are: Note: it is often believed that the French did extensive underwater tests in French Polynesia on the Moruroa and Fangataufa Atolls. This is incorrect; the bombs were placed in shafts drilled into the underlying coral and volcanic rock, and they did not intentionally leak fallout. Nuclear Test Gallery Nuclear detonation detection via hydroacoustics There are several methods of detecting nuclear detonations. Hydroacoustics is the primary means of determining whether a nuclear detonation has occurred underwater. Hydrophones are used to monitor the change in water pressure as sound waves propagate through the world's oceans. Sound travels through 20 °C water at approximately 1482 meters per second, compared with roughly 343 m/s for sound through air at the same temperature. In the world's oceans, sound travels most efficiently at a depth of approximately 1000 meters. Sound waves at this depth travel at minimum speed and are trapped in a layer known as the Sound Fixing and Ranging (SOFAR) channel. Sounds can be detected in the SOFAR channel at great distances, so only a limited number of monitoring stations are required to detect oceanic activity. Hydroacoustics was originally developed in the early 20th century as a means of detecting objects like icebergs and shoals to prevent accidents at sea. Three hydroacoustic stations were built before the adoption of the Comprehensive Nuclear-Test-Ban Treaty. Two hydrophone stations were built in the North Pacific Ocean and Mid-Atlantic Ocean, and a T-phase station was built off the west coast of Canada. When the CTBT was adopted, 8 more hydroacoustic stations were constructed to create a comprehensive network capable of identifying underwater nuclear detonations anywhere in the world. These 11 hydroacoustic stations, in addition to 326 monitoring stations and laboratories, comprise the International Monitoring System (IMS), which is monitored by the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). There are two types of hydroacoustic stations currently used in the IMS network: 6 hydrophone monitoring stations and 5 T-phase stations. These 11 stations are primarily located in the southern hemisphere, which is primarily ocean. Hydrophone monitoring stations consist of an array of three hydrophones suspended from cables tethered to the ocean floor. They are positioned at a depth located within the SOFAR channel in order to gather readings effectively. Each hydrophone records 250 samples per second, while the tethering cable supplies power and carries information to the shore. This information is converted to a usable form and transmitted via secure satellite link to other facilities for analysis.
T-phase monitoring stations record seismic signals generated by sound waves that have coupled with the ocean floor or shoreline. T-phase stations are generally located on steep-sloped islands in order to gather the cleanest possible seismic readings. Like hydrophone stations, this information is sent to the shore and transmitted via satellite link for further analysis. Hydrophone stations have the benefit of gathering readings directly from the SOFAR channel, but are generally more expensive to implement than T-phase stations. Hydroacoustic stations monitor frequencies from 1 to 100 hertz to determine whether an underwater detonation has occurred. If a potential detonation has been identified by one or more stations, the gathered signals will show high bandwidth, with a frequency spectrum indicating an underwater cavity at the source. See also Nuclear weapons testing Marine engineering Shock factor Nuclear depth bomb Nuclear torpedo Operation Chastise Sources Further reading Explosions Nuclear technology
Underwater explosion
[ "Physics", "Chemistry" ]
2,903
[ "Nuclear technology", "Explosions", "Nuclear physics" ]
6,414,242
https://en.wikipedia.org/wiki/Ridge%20vent
A ridge vent is a type of vent installed at the peak of a sloped roof which allows warm, humid air to escape a building's attic. Ridge vents are most common on shingled residential buildings. Ridge vents are also used in industrial warehouses to help release hot air and circulate comfortable air inside the building. For ridge venting to be effective, soffit vents must also be present, especially in residential applications. Most shingle manufacturers offer ventilation calculators to help determine the right amount of ventilation to add to a home. References External links Minimizing Water Intrusion Through Roof Vents in High-Wind Regions Roofs
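As a rough illustration of what such a ventilation calculator does, the sketch below applies the common "1/300" attic-ventilation guideline, a widely used rule of thumb that is an assumption here, not a figure from this article:

```python
# Common "1/300" attic-ventilation rule of thumb (assumed, not from the
# article): 1 ft^2 of net free vent area per 300 ft^2 of attic floor,
# split roughly half at the ridge and half at the soffits.

def vent_area_sqft(attic_floor_sqft: float, ratio: float = 1 / 300) -> dict:
    total = attic_floor_sqft * ratio
    return {"ridge": total / 2, "soffit": total / 2}

print(vent_area_sqft(1500))  # {'ridge': 2.5, 'soffit': 2.5}
```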
Ridge vent
[ "Technology", "Engineering" ]
131
[ "Structural system", "Structural engineering", "Roofs" ]
6,419,203
https://en.wikipedia.org/wiki/Ruan%20Yuan
Ruan Yuan (; 1764–1849), courtesy name Yuntai (雲臺), was a Chinese historian, politician, and writer of the Qing Dynasty who was the most prominent Chinese scholar during the first half of the 19th century. He won the jinshi degree in the imperial examinations in 1789 and was subsequently appointed to the Hanlin Academy. He was known for his work Biographies of Astronomers and Mathematicians and for his editing the Shisan Jing Zhushu (Commentaries and Notes on the Thirteen Classics) for the Qing emperor. Ruan Yuan was a successful official as well as a scholar. He was the Viceroy of Liangguang, the most important imperial official in Canton (Guangzhou), during the critical years 1817–1826, just before the First Opium War with Britain. It was a crucial time when Chinese trade with the outside world was allowed only through the Canton System, with all foreigners confined to Canton, the capital of Guangdong Province. During his tenure in Canton, Ruan is estimated to have earned more than 195,000 taels of silver. He was widely recognized as an official, scholar, and patron of learning both by his contemporaries and by modern scholars. He was also praised as an honest official and an exemplary man of the ‘Confucian persuasion’. His name is mentioned in almost all works on Qing history or Chinese classics because of the wide range of his research and publications. A number of these publications are still reprinted. Ruan Yuan was a follower of the Han Learning tradition and as such, with the encouragement of Liu Fenglu, he edited and organized publication of the compendium of the imperial achievements in kaozheng scholarship, the Huang Qing Jingjie (皇清经解) published in 1829. Kong Luhua (relative of the Duke Yansheng) was the second wife of Ruan Yuan. References Bibliography External links Ruan Yuan biography from St. Andrews University 1764 births 1849 deaths 19th-century Chinese historians Assistant grand secretaries Chinese Confucianists Grand secretaries of the Qing dynasty Historians from Jiangsu Historians of astronomy Historians of mathematics Politicians from Yangzhou Viceroys of Huguang Viceroys of Liangguang Viceroys of Yun-Gui Writers from Yangzhou Qing dynasty classicists
Ruan Yuan
[ "Astronomy" ]
447
[ "People associated with astronomy", "Historians of astronomy", "History of astronomy" ]
6,419,517
https://en.wikipedia.org/wiki/Trospium%20chloride
Trospium chloride is a muscarinic antagonist used to treat overactive bladder. It has side effects typical of this class of drugs, namely dry mouth, stomach upset, and constipation; these side effects make it difficult for people to take the medicine as directed. However, it does not cause the central nervous system side effects seen with some other muscarinic antagonists. Chemically it is a quaternary ammonium cation, which causes it to stay in the periphery rather than crossing the blood–brain barrier. It works by causing the smooth muscle in the bladder to relax. It was patented in 1966 and approved for medical use in 1974. It was first approved in the US in 2004, and an extended-release version was brought to market in 2007. It became generic in the EU in 2009, and the first extended-release generic was approved in the US in 2012. Medical uses Trospium chloride is used for the treatment of overactive bladder with symptoms of urge incontinence and frequent urination. It should not be used in people who retain urine, or who have severe digestive conditions, myasthenia gravis, narrow-angle glaucoma, or tachyarrhythmia. It should be used with caution in people who have problems with their autonomic nervous system (dysautonomia) or who have gastroesophageal reflux disease, or in whom fast heart rates are undesirable, such as people with hyperthyroidism, coronary artery disease and congestive heart failure. There are no adequate and well-controlled studies of trospium chloride in pregnant women, and there are signs of harm to the fetus in animal studies. The drug was excreted to some extent in the milk of nursing mothers. The drug was studied in children. Side effects Side effects are typical of the gastrointestinal effects of anticholinergic drugs, and include dry mouth, indigestion, and constipation. These side effects lead to problems with adherence, especially for older people. The only CNS side effect is headache, which was very rare. Tachycardia is a rare side effect. Pharmacology Mechanism of action Trospium chloride is a muscarinic antagonist. Trospium chloride blocks the effect of acetylcholine on muscarinic receptors in organs that are responsive to the compound, including the bladder. Its parasympatholytic action relaxes the smooth muscle in the bladder. Receptor assays showed that trospium chloride has negligible affinity for nicotinic receptors as compared to muscarinic receptors at concentrations obtained from therapeutic doses. The drug has high and similar affinity for all five of the muscarinic acetylcholine receptor subtypes, namely the M1, M2, M3, M4, and M5 receptors. Pharmacokinetics After oral administration, less than 10% of the dose is absorbed. Mean absolute bioavailability of a 20 mg dose is 9.6% (range: 4.0 to 16.1%). Peak plasma concentrations (Cmax) occur between 5 and 6 hours post-dose. Mean Cmax increases greater than dose-proportionally; a 3-fold and 4-fold increase in Cmax was observed for dose increases from 20 mg to 40 mg and from 20 mg to 60 mg, respectively. AUC exhibits dose linearity for single doses up to 60 mg. Trospium chloride exhibits diurnal variability in exposure, with a decrease in Cmax and AUC of up to 59% and 33%, respectively, for evening relative to morning doses. Administration with a high-fat meal resulted in reduced absorption, with AUC and Cmax values 70 to 80% lower than those obtained when trospium chloride was administered while fasting. Therefore, it is recommended that trospium chloride be taken at least one hour prior to meals or on an empty stomach.
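A rough one-compartment sketch of the oral kinetics described above. The bioavailability figure comes from this paragraph, while the ~20 h half-life and ~395 L volume of distribution appear later in the article; the absorption rate constant is an assumption picked so that the peak lands near the reported 5–6 h. This is illustrative only, not a validated model for trospium.

```python
# One-compartment oral PK sketch (Bateman equation) with F ≈ 9.6%,
# Vd ≈ 395 L, t1/2 ≈ 20 h from the article; ka is assumed.

import math

F, dose_mg, vd_l = 0.096, 20.0, 395.0
ke = math.log(2) / 20.0              # elimination rate constant (1/h)
ka = 0.5                             # absorption rate constant (1/h), assumed

def conc_ng_per_ml(t_h: float) -> float:
    absorbed_ng = F * dose_mg * 1e6  # mg -> ng
    return (absorbed_ng * ka / (vd_l * 1000 * (ka - ke))
            * (math.exp(-ke * t_h) - math.exp(-ka * t_h)))

tmax_h = math.log(ka / ke) / (ka - ke)
print(f"tmax ≈ {tmax_h:.1f} h, Cmax ≈ {conc_ng_per_ml(tmax_h):.1f} ng/mL")
# Prints a peak near 5.7 h, consistent with the reported 5-6 h window.
```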
Protein binding ranged from 50 to 85% when concentration levels of trospium chloride (0.5 to 50 ng/mL) were incubated with human serum in vitro. The 3H-trospium chloride ratio of plasma to whole blood was 1.6:1. This ratio indicates that the majority of 3H-trospium chloride is distributed in plasma. The apparent volume of distribution for a 20 mg oral dose is 395 (± 140) liters. The metabolic pathway of trospium in humans has not been fully defined. Of the 10% of the dose absorbed, metabolites account for approximately 40% of the excreted dose following oral administration. The major metabolic pathway is hypothesized to be ester hydrolysis with subsequent conjugation of benzilic acid with glucuronic acid to form azoniaspironortropanol. Cytochrome P450 is not expected to contribute significantly to the elimination of trospium. Data from in vitro human liver microsomes investigating the inhibitory effect of trospium on seven cytochrome P450 isoenzyme substrates (CYP1A2, 2A6, 2C9, 2C19, 2D6, 2E1, and 3A4) suggest a lack of inhibition at clinically relevant concentrations. The plasma half-life for trospium chloride following oral administration is approximately 20 hours. After oral administration of an immediate-release formulation of 14C-trospium chloride, the majority of the dose (85.2%) was recovered in feces and a smaller amount (5.8% of the dose) was recovered in urine; 60% of the radioactivity excreted in urine was unchanged trospium. The mean renal clearance for trospium (29 L/hour) is 4-fold higher than the average glomerular filtration rate, indicating that active tubular secretion is a major route of elimination for trospium. There may be competition for elimination with other compounds that are also renally eliminated. Chemistry Anticholinergic drugs used to treat overactive bladder were all amines as of 2003. Quaternary ammonium cations in general are more hydrophilic than other amines and do not cross membranes well, so they tend to be poorly absorbed from the digestive system and to not cross the blood–brain barrier. Oxybutynin, tolterodine, darifenacin, and solifenacin are tertiary amines, while trospium chloride and propantheline are quaternary amines. History The synthesis of trospium was described by scientists from Dr. Robert Pfleger Chemische Fabrik GmbH (Heinz Bertholdt, Robert Pfleger, and Wolfram Schulz) in U.S. Pat. No. 3,480,626 (the US equivalent of DE119442), and its activity was first published in the literature in 1967. The first regulatory approval was granted in Germany in August 1999 to Madaus AG for Regurin 20 mg Tablets. Madaus is considered the originator for regulatory filings worldwide. The German filing was recognized throughout Europe under the Mutual Recognition Procedure. Madaus licensed the US rights to trospium chloride to Interneuron in 1999, and Interneuron ran clinical trials in the US to win FDA approval. Interneuron changed its name to Indevus in 2002. Indevus entered into a partnership with Odyssey Pharmaceuticals, a subsidiary of Pliva, to market the drug in April 2004, and won FDA approval for the drug, which it branded as Sanctura, in May 2004. The approval earned Indevus a milestone payment of $120M from Pliva, which had already paid Indevus $30 million at signing; the market for overactive bladder therapies was estimated to be worth $1.1 billion in 2004.
In 2005 Pliva exited the relationship, selling its rights to Esprit Pharma, and in September 2007 Allergan acquired Esprit and negotiated a new agreement with Indevus under which Allergan would completely take over the US manufacturing, regulatory approvals, and marketing. A month before, Indevus had received FDA approval for an extended-release formulation that allowed once-a-day dosing, Sanctura XR. Indevus had developed intellectual property around the extended-release formulation, which it licensed to Madaus for most of the world. In 2012 the FDA approved the first generic version of the extended-release formulation, granting approval to the ANDA that Watson Pharmaceuticals had filed in 2009. Annual sales in the US at that time were $67 million. European patents had expired in 2009. As of 2016, the drug is available worldwide under many brand names and formulations, including oral, extended release, suppositories, and injections. Society and culture Marketing rights to the drug became subject to parallel-import litigation in Europe in the case of Speciality European Pharma Ltd v Doncaster Pharmaceuticals Group Ltd / Madaus GmbH (Case No. A3/2014/0205), which was resolved in March 2015. Madaus had exclusively licensed the right to use the Regurin trademark to Speciality European Pharma Ltd. In 2009, when European patents on the drug expired, Doncaster Pharmaceuticals Group, a well-known parallel importer that had been selling the drug in the UK under Ceris, a label used in France, began to put stickers bearing the Regurin name on its packaging. Speciality and Madaus sued and initially won, based on the argument that 90% of prescriptions were already generic, but Doncaster appealed and won the appeal based on the argument that it could not charge a premium with a generic label. The case has broad implications for trade in the EU. Research In 2007 Indevus partnered with Alkermes to develop and test an inhaled form of trospium chloride as a treatment for COPD; it was in Phase II trials at that time. References External links Carboxylate esters Chlorides M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Nitrogen heterocycles Peripherally selective drugs Quaternary ammonium compounds Spiro compounds Tertiary alcohols
Trospium chloride
[ "Chemistry" ]
2,126
[ "Chlorides", "Inorganic compounds", "Salts", "Organic compounds", "Spiro compounds" ]
6,419,756
https://en.wikipedia.org/wiki/Petrophysics
Petrophysics (from the Greek πέτρα, petra, "rock" and φύσις, physis, "nature") is the study of physical and chemical rock properties and their interactions with fluids. A major application of petrophysics is in studying reservoirs for the hydrocarbon industry. Petrophysicists work together with reservoir engineers and geoscientists to understand the porous media properties of the reservoir, particularly how the pores are interconnected in the subsurface, which controls the accumulation and migration of hydrocarbons. Some fundamental petrophysical properties determined are lithology, porosity, water saturation, permeability, and capillary pressure. The petrophysicist's workflow measures and evaluates these petrophysical properties through well-log interpretation (i.e. at in-situ reservoir conditions) and core analysis in the laboratory. During well perforation, different well-log tools are used to measure the petrophysical and mineralogical properties through radioactivity and seismic technologies in the borehole. In addition, core plugs are taken from the well as sidewall core or whole core samples. These studies are combined with geological, geophysical, and reservoir engineering studies to model the reservoir and determine its economic feasibility. While most petrophysicists work in the hydrocarbon industry, some also work in the mining, water resources, geothermal energy, and carbon capture and storage industries. Petrophysics is part of the geosciences, and its studies are used by petroleum engineering, geology, geochemistry, exploration geophysics and others. Fundamental petrophysical properties The following are the fundamental petrophysical properties used to characterize a reservoir: Lithology: A description of the rock's physical characteristics, such as grain size, composition and texture. By studying the lithology of local geological outcrops and core samples, geoscientists can use a combination of log measurements, such as natural gamma, neutron, density and resistivity, to determine the lithology down the well. Porosity: The pore space volume as a fraction of the bulk rock volume, symbolized as φ. It is typically calculated using data from an instrument that measures the reaction of the rock to bombardment by neutrons or gamma rays, but can also be derived from sonic and NMR logging. A helium porosimeter is the main technique for measuring grain volume and porosity in the laboratory. Water saturation: The fraction of the pore space occupied by water, known by the symbol Sw. It is typically calculated using data from an instrument that measures the resistivity of the rock and applying empirical or theoretical water saturation models, the most widely used worldwide being Archie's (1942) model. Permeability: The quantity of fluid (water or hydrocarbon) that can flow through a rock as a function of time and pressure, related to how interconnected the pores are; it is known by the symbol k. Formation testing is the only tool that can directly measure a rock formation's permeability down a well. In its absence, which is the common case, an estimate for permeability can be derived from empirical relationships with other measurements such as porosity, NMR and sonic logging. Darcy's law is applied in the laboratory to measure the core plug permeability with an inert gas or liquid (i.e. one that does not react with the rock). 
Formation thickness (h): the thickness of rock with enough permeability to deliver fluids to a well bore; this property is often called “net reservoir rock.” In the oil and gas industry, another quantity, “net pay”, is computed: the thickness of rock that can deliver hydrocarbons to the well bore at a profitable rate. Rock mechanical properties The rock's mechanical or geomechanical properties are also used within petrophysics to determine the reservoir strength, elastic properties, hardness, ultrasonic behaviour, index characteristics and in situ stresses. Petrophysicists use acoustic and density measurements of rocks to compute their mechanical properties and strength. They measure the compressional (P) wave velocity of sound through the rock and the shear (S) wave velocity, and use these with the density of the rock to compute the rock's compressive strength, which is the compressive stress that causes a rock to fail, and the rock's flexibility, which is the relationship between stress and deformation for a rock. Converted-wave analysis also determines the subsurface lithology and porosity. Geomechanics measurements are useful for drillability assessment, wellbore and open-hole stability design, log strength and stress correlations, and formation and strength characterization. These measurements are also used to design dams, roads, foundations for buildings, and many other large construction projects. They can also help interpret seismic signals from the Earth, either manufactured seismic signals or those from earthquakes. Methods of petrophysical analysis Core analysis As core samples are the only direct evidence of the reservoir's rock structure, core analysis provides the "ground truth" data, measured in the laboratory, for understanding the key petrophysical features of the in-situ reservoir. In the petroleum industry, rock samples are retrieved from the subsurface and measured by the core laboratories of oil or service companies. This process is time-consuming and expensive; thus, it can only be applied to some of the wells drilled in a field. Proper design, planning and supervision decrease data redundancy and uncertainty, and client and laboratory teams must work in alignment to optimise the core analysis process. Well-logging Well logging is a relatively inexpensive method to obtain petrophysical properties downhole. Measurement tools are conveyed downhole using either wireline or logging-while-drilling (LWD) methods. An example of wireline logs is shown in Figure 1. The first “track” shows the natural gamma radiation level of the rock. The gamma radiation level “log” shows increasing radiation to the right and decreasing radiation to the left. The rocks emitting less radiation have more yellow shading. The detector is very sensitive, and the amount of radiation is very low. In clastic rock formations, rocks with smaller amounts of radiation are more likely to be coarser-grained and have more pore space, while rocks with higher amounts of radiation are more likely to have finer grains and less pore space. The second track in the plot records the depth below the reference point, usually the kelly bushing or rotary table, in feet, so these rock formations are 11,900 feet below the Earth's surface. In the third track, the electrical resistivity of the rock is presented. The water in this rock is salty, and the electrolytes dissolved in the pore water conduct electricity, resulting in lower resistivity of the rock. This also indicates increased water saturation and decreased hydrocarbon saturation. 
The fourth track shows the computed water saturation, both as “total” water (including the water bound to the rock) in magenta and as the “effective water”, the water that is free to flow, in black. Both quantities are given as a fraction of the total pore space. The fifth track shows the fraction of the total rock that is pore space filled with fluids (i.e. porosity). The display of the pore space is divided into green for oil and blue for movable water. The black line shows the fraction of the pore space containing either water or oil that can move or be “produced” (i.e. effective porosity), while the magenta line indicates the total porosity, which includes the water that is permanently bound to the rock. The last track represents the rock lithology divided into sandstone and shale portions. The yellow pattern represents the fraction of the rock (excluding fluids) composed of coarser-grained sandstone. The gray pattern represents the fraction of rock composed of finer-grained material, i.e. "shale." The sandstone is the part of the rock that contains the producible hydrocarbons and water. Modelling Reservoir models are built by reservoir engineers in specialised software, using the petrophysical dataset prepared by the petrophysicist, to estimate the amount of hydrocarbon present in the reservoir, the rate at which that hydrocarbon can be produced to the Earth's surface through wellbores, and the fluid flow in rocks. Similar models in the water resource industry compute how much water can be produced to the surface over long periods without depleting the aquifer. Rock volumetric model for shaly sand formation Shaly sand is a term for a mixture of shale or clay and sandstone; a significant portion of clay minerals and silt-size particles results in a fine-grained sandstone with higher density and rock complexity. The shale/clay volume is an essential petrophysical parameter to estimate, since it contributes to the rock bulk volume and must be correctly defined for porosity and water saturation to be evaluated correctly. As shown in Figure 2, clastic rock formations are modelled with four components whose definitions are typical for shaly or clayey sands: the rock matrix (grains), the clay portion that surrounds the grains, water, and hydrocarbons; the two fluids are stored only in the pore space of the rock matrix. Due to the complex microstructure, for a water-wet rock, the following terms make up a clastic reservoir formation: Vma = volume of matrix grains. Vdcl = volume of dry clay. Vcbw = volume of clay-bound water. Vcl = volume of wet clay (Vdcl + Vcbw). Vcap = volume of capillary-bound water. Vfw = volume of free water. Vhyd = volume of hydrocarbon. ΦT = total porosity (PHIT), which includes the connected and unconnected pore throats. Φe = effective porosity, which includes only the interconnected pore throats. Vb = bulk volume of the rock. Key equation (with each term expressed as a fraction of the bulk volume Vb): Vma + Vcl + Vfw + Vhyd = 1, that is, rock matrix volume + wet clay volume + free water volume + hydrocarbon volume = bulk rock volume; a short computational sketch of this balance appears below. Scholarly societies The Society of Petrophysicists and Well Log Analysts (SPWLA) is an organisation whose mission is to increase the awareness of petrophysics, formation evaluation, and well logging best practices in the oil and gas industry and the scientific community at large. 
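A minimal computational sketch of the volumetric balance above, combined with Archie's (1942) water-saturation model mentioned under "Fundamental petrophysical properties", is given below. The Archie parameters a (tortuosity factor), m (cementation exponent) and n (saturation exponent) are typical textbook defaults assumed here for illustration, and the input values are hypothetical:

```python
def archie_sw(phi, rw, rt, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's model: Sw = ((a*Rw) / (phi**m * Rt))**(1/n).
    phi: porosity (fraction); rw: formation-water resistivity (ohm-m);
    rt: true formation resistivity (ohm-m)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

def volume_balance_ok(vma, vcl, vfw, vhyd, tol=1e-9):
    """Check the key equation Vma + Vcl + Vfw + Vhyd = 1
    (each term a fraction of the bulk rock volume Vb)."""
    return abs((vma + vcl + vfw + vhyd) - 1.0) < tol

# Hypothetical shaly-sand example: 25% porosity, Rw = 0.05 ohm-m, Rt = 10 ohm-m.
sw = archie_sw(phi=0.25, rw=0.05, rt=10.0)
print(f"Water saturation Sw = {sw:.2f}")          # ~0.28
print(volume_balance_ok(0.65, 0.10, 0.07, 0.18))  # True: fractions sum to 1
```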
See also Archie's law Formation evaluation Gardner's relation Petrology References Further reading External links Petrophysics Forum Crains Petrophysical Handbook RockPhysicists Society of Petrophysicists and Well Log Analysts (SPWLA) Petroleum engineering Applied and interdisciplinary physics
Petrophysics
[ "Physics", "Engineering" ]
2,238
[ "Petroleum engineering", "Applied and interdisciplinary physics", "Energy engineering" ]
27,228,915
https://en.wikipedia.org/wiki/Nodal%20precession
Nodal precession is the precession of the orbital plane of a satellite around the rotational axis of an astronomical body such as Earth. This precession is due to the non-spherical nature of a rotating body, which creates a non-uniform gravitational field. The following discussion relates to low Earth orbit of artificial satellites, which have no measurable effect on the motion of Earth. The nodal precession of more massive, natural satellites like the Moon is more complex. Around a spherical body, an orbital plane would remain fixed in space around the gravitational primary body. However, most bodies rotate, which causes an equatorial bulge. This bulge creates a gravitational effect that causes orbits to precess around the rotational axis of the primary body. The direction of precession is opposite the direction of revolution. For a typical prograde orbit around Earth (that is, in the direction of the primary body's rotation), the longitude of the ascending node decreases, that is, the node precesses westward. If the orbit is retrograde, the longitude of the ascending node increases, that is, the node precesses eastward. This nodal precession enables heliosynchronous orbits to maintain a nearly constant angle relative to the Sun. Description A non-rotating body of planetary scale or larger would be pulled by gravity into a spherical shape. Virtually all bodies rotate, however. The centrifugal force deforms the body so that it has an equatorial bulge. Because of the bulge of the central body, the gravitational force on a satellite is not directed toward the center of the central body, but is offset toward its equator. Whichever hemisphere of the central body the satellite lies over, it is preferentially pulled slightly toward the equator of the central body. This creates a torque on the satellite. This torque does not reduce the inclination; rather, it causes a torque-induced gyroscopic precession, which causes the orbital nodes to drift with time. Equation The rate of precession depends on the inclination of the orbital plane to the equatorial plane, as well as the orbital eccentricity. For a satellite in a prograde orbit around Earth, the precession is westward (nodal regression), that is, the node and satellite move in opposite directions. A good approximation of the precession rate is ωp = −(3/2) × (RE / (a(1 − e²)))² × J2 × ω × cos i, where ωp is the precession rate (in rad/s), RE is the body's equatorial radius (6378137 m for Earth), a is the semi-major axis of the satellite's orbit, e is the eccentricity of the satellite's orbit, ω is the angular velocity of the satellite's motion (2π radians divided by its period in seconds), i is its inclination, and J2 is the body's second dynamic form factor (1.08263×10⁻³ for Earth). The nodal progression of low Earth orbits is typically a few degrees per day to the west (negative). For a satellite in a circular (e = 0) 800 km altitude orbit at 56° inclination about Earth: The orbital period is about 6052 s, so the angular velocity is about 1.038×10⁻³ rad/s. The precession is therefore about −7.44×10⁻⁷ rad/s. This is equivalent to −3.683° per day, so the orbit plane will make one complete turn (in inertial space) in 98 days. The apparent motion of the sun is approximately +1° per day (360° per year / 365.2422 days per tropical year ≈ 0.9856473° per day), so the apparent motion of the sun relative to the orbit plane is about 2.8° per day, resulting in a complete cycle in about 127 days. For retrograde orbits ω is negative, so the precession becomes positive. (Alternatively, ω can be thought of as positive but the inclination is greater than 90°, so the cosine of the inclination is negative.) 
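The worked example above can be reproduced with a short script (a sketch only; the Earth constants are standard published values, assumed here rather than taken from the text):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (standard value)
R_E = 6378137.0       # Earth's equatorial radius, m
J2 = 1.08263e-3       # Earth's second dynamic form factor

def nodal_precession_rate(a, e, i_deg):
    """Approximate nodal precession rate in rad/s for semi-major axis a (m),
    eccentricity e, and inclination i (degrees)."""
    omega = math.sqrt(MU / a ** 3)  # satellite angular velocity, 2*pi / period
    return (-1.5 * (R_E / (a * (1.0 - e ** 2))) ** 2
            * J2 * omega * math.cos(math.radians(i_deg)))

# Circular 800 km altitude orbit at 56 degrees inclination:
a = R_E + 800e3
rate = nodal_precession_rate(a, 0.0, 56.0)
print(rate)                        # about -7.44e-7 rad/s
print(math.degrees(rate) * 86400)  # about -3.68 degrees per day
```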
For such retrograde orbits it is possible to make the precession approximately match the apparent motion of the sun, resulting in a heliosynchronous orbit. The J2 used in this equation is the dimensionless coefficient from the geopotential model or gravity field model for the body. See also Axial precession, or "precession of the equinoxes" for Earth Apsidal precession, another kind of orbital precession (the change in the argument of periapsis) Lunar standstill, in which the Moon's declination on the lunistices depends on the precession of its orbital nodes Lunar node References External links Nodal regression description from USENET Discussion of nodal regression from Analytical Graphics Astrodynamics Precession
Nodal precession
[ "Physics", "Engineering" ]
928
[ "Astrodynamics", "Physical quantities", "Precession", "Aerospace engineering", "Wikipedia categories named after physical quantities" ]
27,228,956
https://en.wikipedia.org/wiki/LessWrong
LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics. Purpose LessWrong describes itself as an online forum and community aimed at improving human reasoning, rationality, and decision-making, with the goal of helping its users hold more accurate beliefs and achieve their personal objectives. The best known posts of LessWrong are "The Sequences", a series of essays which aim to describe how to avoid the typical failure modes of human reasoning with the goal of improving decision-making and the evaluation of evidence. One suggestion is the use of Bayes' theorem as a decision-making tool. There is also a focus on psychological barriers that prevent good decision-making, including fear conditioning and cognitive biases that have been studied by the psychologist Daniel Kahneman. LessWrong is also concerned with artificial intelligence, transhumanism, existential threats and the singularity. History LessWrong developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with artificial intelligence researcher Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. In 2013, a significant portion of the rationalist community shifted focus to Scott Alexander's Slate Star Codex. Artificial intelligence Discussions of AI within LessWrong include AI alignment, AI safety, and machine consciousness. Articles posted on LessWrong about AI have been cited in the news media. LessWrong and its surrounding movement's work on AI are the subjects of the 2019 book The AI Does Not Hate You, written by former BuzzFeed science correspondent Tom Chivers. Effective altruism LessWrong played a significant role in the development of the effective altruism (EA) movement, and the two communities are closely intertwined. In a survey of LessWrong users in 2016, 664 out of 3,060 respondents, or 21.7%, identified as "effective altruists". A separate survey of effective altruists in 2014 revealed that 31% of respondents had first heard of EA through LessWrong, though that number had fallen to 8.2% by 2020. Roko's basilisk In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures people who heard of the AI before it came into existence and failed to work tirelessly to bring it into existence, in order to incentivise said work. This idea came to be known as "Roko's basilisk", based on Roko's idea that merely hearing about the idea would give the hypothetical AI system an incentive to try such blackmail. Neoreaction After LessWrong split from Overcoming Bias, it attracted some individuals affiliated with neoreaction with discussions of eugenics and evolutionary psychology. However, Yudkowsky has strongly rejected neoreaction. Additionally, in a survey among LessWrong users in 2016, 28 out of 3060 respondents (0.92%) identified as "neoreactionary". Notable users LessWrong has been associated with several influential contributors. Founder Eliezer Yudkowsky established the platform to promote rationality and raise awareness about potential risks associated with artificial intelligence. 
Scott Alexander became one of the site's most popular writers before starting his own blog, Slate Star Codex, contributing discussions on AI safety and rationality. Further notable users on LessWrong include Paul Christiano, Wei Dai and Zvi Mowshowitz. A selection of posts by these and other contributors, selected through a community review process, were published as parts of the essay collections "A Map That Reflects the Territory" and "The Engines of Cognition". References Internet forums Transhumanist organizations Internet properties established in 2009 Effective altruism Rationalism
LessWrong
[ "Biology" ]
824
[ "Effective altruism", "Behavior", "Altruism" ]
27,233,123
https://en.wikipedia.org/wiki/2-Mercaptopyridine
2-Mercaptopyridine is an organosulfur compound with the formula HSC5H4N. This yellow crystalline solid is a derivative of pyridine. The compound and its derivatives serve primarily as acylating agents. A few of 2-mercaptopyridine's other uses include serving as a protecting group for amines and imides as well as forming a selective reducing agent. 2-Mercaptopyridine oxidizes to 2,2′-dipyridyl disulfide. Preparation 2-Mercaptopyridine was originally synthesized in 1931 by heating 2-chloropyridine with calcium hydrogen sulfide. ClC5H4N + Ca(SH)2 → HSC5H4N + Ca(SH)Cl A more convenient route to 2-mercaptopyridine is the reaction of 2-chloropyridine and thiourea in ethanol and aqueous ammonia. 2-Mercaptopyridine derivatives can also be generated from precursors lacking preformed pyridine rings. It arises, for example, in the base-catalyzed condensation of α,β-unsaturated ketones, malononitrile, and 4-methylbenzenethiol under microwave irradiation. Structure and properties Similar in nature to 2-hydroxypyridine, 2-mercaptopyridine converts to the thione (or, more accurately, thioamide) tautomer. The preferred form depends on temperature, concentration, and solvent. The thiol is favored at lower temperatures, lower concentrations, and in less polar solvents. The thiol form is also favored in dilute solutions and in solvents capable of hydrogen bonding, since such solvents compete with other 2-mercaptopyridine molecules and so prevent self-association. The association constant for this self-association between 2-mercaptopyridine molecules, describing the ratio of monosulfide to disulfide in chloroform, is: Kassociation = (2.7 ± 0.5) × 10³ Reactions 2-Mercaptopyridine oxidizes to 2,2'-dipyridyl disulfide. As amines are good catalysts for the oxidation of thiols to disulfides, this process is autocatalytic. 2-Mercaptopyridine can also be prepared by hydride reduction of 2,2'-dipyridyl disulfide: C5H4NSSC5H4N + 2H → 2HSC5H4N Main reactions 2-Mercaptopyridine and the disulfide are chelating ligands. 2-Mercaptopyridine forms the indium(III) complex In(PyS)3 in supercritical carbon dioxide. 2-Mercaptopyridine may also be used to coat porous media in order to purify plasmid DNA of impurities such as RNA and proteins on relatively quick timescales compared to similar methods. 2-Mercaptopyridine is also used to acylate phenols, amines, and carboxylic acids. Another application lies in metal-free catalysis: 2-mercaptopyridine can be used as a catalyst for isodesmic C–H borylation of heteroarenes. Its particular pattern of Lewis base and Brønsted acid sites allows it to cleave boron–carbon bonds and then form a new boron–carbon bond by Lewis-pair-mediated C–H activation. References 2-Pyridyl compounds Thiols Thioamides
2-Mercaptopyridine
[ "Chemistry" ]
786
[ "Organic compounds", "Thioamides", "Thiols", "Functional groups" ]
27,237,592
https://en.wikipedia.org/wiki/Lindblad%20resonance
A Lindblad resonance, named for the Swedish galactic astronomer Bertil Lindblad, is an orbital resonance in which an object's epicyclic frequency (the rate at which one periapse follows another) is a simple multiple of some forcing frequency. Resonances of this kind tend to increase the object's orbital eccentricity and to cause its longitude of periapse to line up in phase with the forcing. Lindblad resonances drive spiral density waves both in galaxies (where stars are subject to forcing by the spiral arms themselves) and in Saturn's rings (where ring particles are subject to forcing by Saturn's moons). Lindblad resonances affect stars at such distances from a disc galaxy's centre where the natural frequency of the radial component of a star's orbital velocity is close to the frequency of the gravitational potential maxima encountered during its course through the spiral arms. If a star's orbital speed around the galactic centre is greater than that of the part of the spiral arm through which it is passing, then an inner Lindblad resonance occurs—if smaller, then an outer Lindblad resonance. At an inner resonance, a star's orbital speed is increased, moving the star outwards, and decreased for an outer resonance causing inward movement. References Further reading Murray, C.D., and S.F. Dermott 1999, Solar System Dynamics (Cambridge: Cambridge University Press). External links Three-Dimensional Waves Generated At Lindblad Resonances In Thermally Stratified Disks – Lubow & Ogilvie Concepts in astrophysics Stellar dynamics Orbital perturbations Orbital resonance
Lindblad resonance
[ "Physics", "Chemistry", "Astronomy" ]
336
[ "Concepts in astrophysics", "Scattering stubs", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Scattering", "Stellar dynamics" ]
1,299,404
https://en.wikipedia.org/wiki/Feature%20%28machine%20learning%29
In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a data set. Choosing informative, discriminating, and independent features is crucial to produce effective algorithms for pattern recognition, classification, and regression tasks. Features are usually numeric, but other types such as strings and graphs are used in syntactic pattern recognition, after some pre-processing step such as one-hot encoding. The concept of "features" is related to that of explanatory variables used in statistical techniques such as linear regression. Feature types In feature engineering, two types of features are commonly used: numerical and categorical. Numerical features are continuous values that can be measured on a scale. Examples of numerical features include age, height, weight, and income. Numerical features can be used in machine learning algorithms directly. Categorical features are discrete values that can be grouped into categories. Examples of categorical features include gender, color, and zip code. Categorical features typically need to be converted to numerical features before they can be used in machine learning algorithms. This can be done using a variety of techniques, such as one-hot encoding, label encoding, and ordinal encoding. The type of feature that is used in feature engineering depends on the specific machine learning algorithm that is being used. Some machine learning algorithms, such as decision trees, can handle both numerical and categorical features. Other machine learning algorithms, such as linear regression, can only handle numerical features. Classification A numeric feature can be conveniently described by a feature vector. One way to achieve binary classification is using a linear predictor function (related to the perceptron) with a feature vector as input. The method consists of calculating the scalar product between the feature vector and a vector of weights, qualifying those observations whose result exceeds a threshold. Algorithms for classification from a feature vector include nearest neighbor classification, neural networks, and statistical techniques such as Bayesian approaches. Examples In character recognition, features may include histograms counting the number of black pixels along horizontal and vertical directions, number of internal holes, stroke detection and many others. In speech recognition, features for recognizing phonemes can include noise ratios, length of sounds, relative power, filter matches and many others. In spam detection algorithms, features may include the presence or absence of certain email headers, the email structure, the language, the frequency of specific terms, the grammatical correctness of the text. In computer vision, there are a large number of possible features, such as edges and objects. Feature vectors In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represent some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms. Feature vectors are equivalent to the vectors of explanatory variables used in statistical procedures such as linear regression. 
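A minimal sketch of these ideas follows (the data, categories, weights, and threshold are made up for illustration):

```python
import numpy as np

def one_hot(value, categories):
    """Encode a categorical value as a 0/1 indicator vector."""
    return np.array([1.0 if value == c else 0.0 for c in categories])

# Numerical features (height in m, weight in kg) plus a one-hot encoded
# categorical feature ("color") form a single numeric feature vector.
colors = ["red", "green", "blue"]
x = np.concatenate(([1.70, 65.0], one_hot("green", colors)))

# Linear predictor: scalar product with a weight vector, then a threshold.
w = np.array([0.8, 0.05, -1.0, 0.5, 0.2])
threshold = 4.0
score = np.dot(w, x)
print(score, int(score > threshold))  # 5.11 1 -> classified as the positive class
```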
Feature vectors are often combined with weights using a dot product in order to construct a linear predictor function that is used to determine a score for making a prediction. The vector space associated with these vectors is often called the feature space. In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed. Higher-level features can be obtained from already available features and added to the feature vector; for example, for the study of diseases the feature 'Age' is useful and is defined as Age = 'Year of death' minus 'Year of birth' . This process is referred to as feature construction. Feature construction is the application of a set of constructive operators to a set of existing features resulting in construction of new features. Examples of such constructive operators include checking for the equality conditions {=, ≠}, the arithmetic operators {+,−,×, /}, the array operators {max(S), min(S), average(S)} as well as other more sophisticated operators, for example count(S,C) that counts the number of features in the feature vector S satisfying some condition C or, for example, distances to other recognition classes generalized by some accepting device. Feature construction has long been considered a powerful tool for increasing both accuracy and understanding of structure, particularly in high-dimensional problems. Applications include studies of disease and emotion recognition from speech. Selection and extraction The initial set of raw features can be redundant and large enough that estimation and optimization is made difficult or ineffective. Therefore, a preliminary step in many applications of machine learning and pattern recognition consists of selecting a subset of features, or constructing a new and reduced set of features to facilitate learning, and to improve generalization and interpretability. Extracting or selecting features is a combination of art and science; developing systems to do so is known as feature engineering. It requires the experimentation of multiple possibilities and the combination of automated techniques with the intuition and knowledge of the domain expert. Automating this process is feature learning, where a machine not only uses features for learning, but learns the features itself. See also Covariate Dimensionality reduction Feature engineering Hashing trick Statistical classification Explainable artificial intelligence References Data mining Machine learning Pattern recognition
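A short sketch of the feature-construction operators described in this article (the record fields and helper are hypothetical):

```python
def count(S, C):
    """Constructive operator count(S, C): number of values in S satisfying
    condition C."""
    return sum(1 for v in S if C(v))

record = {"year_of_birth": 1901, "year_of_death": 1978,
          "measurements": [3.2, 7.9, 5.1, 8.4]}

# Arithmetic operator: Age = 'Year of death' minus 'Year of birth'
age = record["year_of_death"] - record["year_of_birth"]

# Array operators max(S), min(S), average(S), plus count(S, C)
S = record["measurements"]
constructed = [age, max(S), min(S), sum(S) / len(S), count(S, lambda v: v > 5.0)]
print(constructed)  # [77, 8.4, 3.2, 6.15, 3]
```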
Feature (machine learning)
[ "Engineering" ]
1,070
[ "Artificial intelligence engineering", "Machine learning" ]
1,299,817
https://en.wikipedia.org/wiki/Gas%20mark
The gas mark is a temperature scale used on gas ovens and cookers in the United Kingdom, Ireland and some Commonwealth of Nations countries. History The draft 2003 edition of the Oxford English Dictionary lists the earliest known usage of the concept as being in L. Chatterton's book Modern Cookery published in 1943: "Afternoon tea scones… Time: 20 minutes. Temperature: Gas, Regulo Mark 7". "Regulo" was a type of gas regulator used by a manufacturer of cookers; however, the scale has now become universal, and the word Regulo is rarely used. The term "gas mark" was a subject of the joint BBC/OED production Balderdash and Piffle, in May 2005. The earliest printed evidence of use of "gas mark" (with no other terms between the two words) appears to date from 1958. However, the manufacturers of the "New World" gas ranges in the mid-1930s gave away recipe books for use with their cooker, and the "Regulo" was the gas regulator. The book has no reference to degrees. All dishes to be cooked are noted to be at "Regulo Mark X". Equivalents in Fahrenheit and Celsius Gas mark 1 is 275 degrees Fahrenheit (135 degrees Celsius). Oven temperatures increase by 25 °F (14 °C) for each gas mark step. Above Gas Mark 1, the scale markings increase by one for each step. Below Gas Mark 1, the scale markings halve at each step, each representing a decrease of 25 °F (14 °C). Formulae In theory, the following formulae can be used to convert between gas mark values and Celsius. For temperatures above 135 °C (gas mark 1), to convert gas mark (G) to degrees Celsius (C), multiply the gas mark number by 14, then add 121: C = 14 × G + 121. For the reverse conversion: G = (C − 121) / 14. These do not work for gas marks less than 1, since the steps are given as halves and quarters (i.e., ½, ¼). For temperatures below 135 °C (gas mark 1), to convert gas mark to degrees Celsius apply the following conversion: C = 135 + 14 × log₂(G). For the reverse: G = 2^((C − 135) / 14). Note that tables of temperature equivalents for kitchen use conventionally round Celsius values to the nearest 10 degrees, with steps of either 10 or 20 degrees between Gas Marks. Conversion table In practice, of course, a conversion table is used instead of the above formulae. The numbers in such a conversion table represent values that would actually be given in a recipe or set on a stove. Other cooking temperature scales France: Thermostat French ovens and recipes use a scale called the "Thermostat" (abbreviated "Th") that is based on the Celsius scale. Thermostat 1 equals 30 °C for conventional ovens, increasing by 30 °C for each whole number along the scale. Germany: Stufe In Germany, "Stufe" (the German word for "step") is used for gas cooking temperatures. Gas ovens are commonly marked in steps from 1 to 8. Other ovens may be marked on a scale of 1–7, where Stufe ½ is about 125 °C in a conventional oven, Stufe 1 is about 150 °C, increasing by 25 °C for each subsequent step, up to Stufe 7 at 300 °C. See also Outline of metrology and measurement References Scales of temperature Ovens
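A minimal sketch of the conversions above (the piecewise branch below gas mark 1 implements the halving rule; the French Thermostat line follows the 30 °C-per-number rule for conventional ovens):

```python
import math

def gas_mark_to_celsius(g):
    """Gas mark to degrees Celsius, using the rules described above."""
    if g >= 1:
        return 14 * g + 121          # 25 F (14 C) per whole mark
    return 135 + 14 * math.log2(g)   # marks halve below gas mark 1

def celsius_to_gas_mark(c):
    if c >= 135:
        return (c - 121) / 14
    return 2 ** ((c - 135) / 14)

def thermostat_to_celsius(th):
    """French 'Th' scale for conventional ovens: 30 C per whole number."""
    return 30 * th

print(gas_mark_to_celsius(1))    # 135
print(gas_mark_to_celsius(4))    # 177 (tables round this to about 180)
print(gas_mark_to_celsius(0.5))  # 121
print(celsius_to_gas_mark(163))  # 3.0
print(thermostat_to_celsius(6))  # 180
```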
Gas mark
[ "Physics", "Mathematics" ]
690
[ "Scales of temperature", "Quantity", "Physical quantities" ]
1,300,043
https://en.wikipedia.org/wiki/Tell-tale%20%28spacecraft%29
In space systems a tell-tale is a single-bit status indicator that is included in telemetry or is used within the spacecraft's on-board software to signal conditions that must be tracked or acted upon, especially when the status changes. A tell-tale may continually change as the status it is tracking changes, or it may change once upon a change of status and then remain at that value until deliberately cleared. The latter type of tell-tale is known as a "sticky-bit" because its value "sticks", that is, remains constant, once it has been set. The Phoenix spacecraft contains another type of tell-tale, developed by the University of Aarhus in Denmark, as part of its Meteorological Station. It is a small tube that is deflected by the Martian wind, similar to a sailing tell-tale. The science payload's stereo camera recorded images of its motion to be used to determine wind direction and speed. References Spacecraft communication
Tell-tale (spacecraft)
[ "Engineering" ]
197
[ "Spacecraft communication", "Aerospace engineering" ]
1,300,341
https://en.wikipedia.org/wiki/Non-homologous%20end%20joining
Non-homologous end joining (NHEJ) is a pathway that repairs double-strand breaks in DNA. It is called "non-homologous" because the break ends are directly ligated without the need for a homologous template, in contrast to homology directed repair (HDR), which requires a homologous sequence to guide repair. NHEJ is active in both non-dividing and proliferating cells, while HDR is not readily accessible in non-dividing cells. The term "non-homologous end joining" was coined in 1996 by Moore and Haber. NHEJ is typically guided by short homologous DNA sequences called microhomologies. These microhomologies are often present in single-stranded overhangs on the ends of double-strand breaks. When the overhangs are perfectly compatible, NHEJ usually repairs the break accurately. Imprecise repair leading to loss of nucleotides can also occur, but is much more common when the overhangs are not compatible. Inappropriate NHEJ can lead to translocations and telomere fusion, hallmarks of tumor cells. NHEJ is understood to exist throughout nearly all biological systems, and it is the predominant double-strand break repair pathway in mammalian cells. In budding yeast (Saccharomyces cerevisiae), however, homologous recombination dominates when the organism is grown under common laboratory conditions. When the NHEJ pathway is inactivated, double-strand breaks can be repaired by a more error-prone pathway called microhomology-mediated end joining (MMEJ). In this pathway, end resection reveals short microhomologies on either side of the break, which are then aligned to guide repair. This contrasts with classical NHEJ, which typically uses microhomologies already exposed in single-stranded overhangs on the DSB ends. Repair by MMEJ therefore leads to deletion of the DNA sequence between the microhomologies. In bacteria and archaea Many species of bacteria, including Escherichia coli, lack an end joining pathway and thus rely completely on homologous recombination to repair double-strand breaks. NHEJ proteins have been identified in a number of bacteria, including Bacillus subtilis, Mycobacterium tuberculosis, and Mycobacterium smegmatis. Bacteria utilize a remarkably compact version of NHEJ in which all of the required activities are contained in only two proteins: a Ku homodimer and the multifunctional ligase/polymerase/nuclease LigD. In mycobacteria, NHEJ is much more error prone than in yeast, with bases often added to and deleted from the ends of double-strand breaks during repair. Many of the bacteria that possess NHEJ proteins spend a significant portion of their life cycle in a stationary haploid phase, in which a template for recombination is not available. NHEJ may have evolved to help these organisms survive DSBs induced during desiccation. The bacterial system preferentially uses rNTPs (RNA nucleotides), which is possibly advantageous in dormant cells. The archaeal NHEJ system in Methanocella paludicola has a homodimeric Ku, but the three functions of LigD are broken up into three single-domain proteins sharing an operon. All three genes retain substantial homology with their LigD counterparts, and the polymerase retains the preference for rNTPs. NHEJ has been lost and acquired multiple times in bacteria and archaea, with a significant amount of horizontal gene transfer shuffling the system around taxa. Corndog and Omega, two related mycobacteriophages of Mycobacterium smegmatis, also encode Ku homologs and exploit the NHEJ pathway to recircularize their genomes during infection. 
Unlike homologous recombination, which has been studied extensively in bacteria, NHEJ was originally discovered in eukaryotes and was only identified in prokaryotes in the past decade. In eukaryotes In contrast to bacteria, NHEJ in eukaryotes utilizes a number of proteins, which participate in the following steps: End binding and tethering In yeast, the Mre11-Rad50-Xrs2 (MRX) complex is recruited to DSBs early and is thought to promote bridging of the DNA ends. The corresponding mammalian complex of Mre11-Rad50-Nbs1 (MRN) is also involved in NHEJ, but it may function at multiple steps in the pathway beyond simply holding the ends in proximity. DNA-PKcs is also thought to participate in end bridging during mammalian NHEJ. Eukaryotic Ku is a heterodimer consisting of Ku70 and Ku80, and forms a complex with DNA-PKcs, which is present in mammals but absent in yeast. Ku is a basket-shaped molecule that slides onto the DNA end and translocates inward. Ku may function as a docking site for other NHEJ proteins, and is known to interact with the DNA ligase IV complex and XLF. End processing End processing involves removal of damaged or mismatched nucleotides by nucleases and resynthesis by DNA polymerases. This step is not necessary if the ends are already compatible and have 3' hydroxyl and 5' phosphate termini. Little is known about the function of nucleases in NHEJ. Artemis is required for opening the hairpins that are formed on DNA ends during V(D)J recombination, a specific type of NHEJ, and may also participate in end trimming during general NHEJ. Mre11 has nuclease activity, but it seems to be involved in homologous recombination, not NHEJ. The X family DNA polymerases Pol λ and Pol μ (Pol4 in yeast) fill gaps during NHEJ. Yeast lacking Pol4 are unable to join 3' overhangs that require gap filling, but remain proficient for gap filling at 5' overhangs. This is because the primer terminus used to initiate DNA synthesis is less stable at 3' overhangs, necessitating a specialized NHEJ polymerase. Ligation The DNA ligase IV complex, consisting of the catalytic subunit DNA ligase IV and its cofactor XRCC4 (Dnl4 and Lif1 in yeast), performs the ligation step of repair. XLF, also known as Cernunnos, is homologous to yeast Nej1 and is also required for NHEJ. While the precise role of XLF is unknown, it interacts with the XRCC4/DNA ligase IV complex and likely participates in the ligation step. Recent evidence suggests that XLF promotes re-adenylation of DNA ligase IV after ligation, recharging the ligase and allowing it to catalyze a second ligation. Other In yeast, Sir2 was originally identified as an NHEJ protein, but is now known to be required for NHEJ only because it is required for the transcription of Nej1. NHEJ and heat-labile sites Induction of heat-labile sites (HLS) is a signature of ionizing radiation. Clustered DNA damage sites consist of different types of DNA lesions. Some of these lesions are not prompt DSBs, but convert to DSBs after heating; HLS do not convert to DSBs at physiological temperature (37 °C). The interaction of HLS with other lesions and their role in living cells remain elusive, and the repair mechanisms of these sites are not fully understood. NHEJ is the dominant DNA repair pathway throughout the cell cycle, and the DNA-PKcs protein is the critical element at the center of NHEJ; yet using DNA-PKcs knockout cell lines or inhibiting DNA-PKcs does not affect the repair capacity of HLS. 
Blocking both the HR and NHEJ repair pathways with the inhibitor dactolisib (NVP-BEZ235) likewise showed that repair of HLS does not depend on HR or NHEJ, indicating that HLS are repaired by a mechanism independent of the NHEJ and HR pathways. Regulation The choice between NHEJ and homologous recombination for repair of a double-strand break is regulated at the initial step in recombination, 5' end resection. In this step, the 5' strand of the break is degraded by nucleases to create long 3' single-stranded tails. DSBs that have not been resected can be rejoined by NHEJ, but resection of even a few nucleotides strongly inhibits NHEJ and effectively commits the break to repair by recombination. NHEJ is active throughout the cell cycle, but is most important during G1 when no homologous template for recombination is available. This regulation is accomplished by the cyclin-dependent kinase Cdk1 (Cdc28 in yeast), which is turned off in G1 and expressed in S and G2. Cdk1 phosphorylates the nuclease Sae2, allowing resection to initiate. V(D)J recombination NHEJ plays a critical role in V(D)J recombination, the process by which B-cell and T-cell receptor diversity is generated in the vertebrate immune system. In V(D)J recombination, hairpin-capped double-strand breaks are created by the RAG1/RAG2 nuclease, which cleaves the DNA at recombination signal sequences. These hairpins are then opened by the Artemis nuclease and joined by NHEJ. A specialized DNA polymerase called terminal deoxynucleotidyl transferase (TdT), which is only expressed in lymphoid tissue, adds nontemplated nucleotides to the ends before the break is joined. This process couples "variable" (V), "diversity" (D), and "joining" (J) regions, which when assembled together create the variable region of a B-cell or T-cell receptor gene. Unlike typical cellular NHEJ, in which accurate repair is the most favorable outcome, error-prone repair in V(D)J recombination is beneficial in that it maximizes diversity in the coding sequence of these genes. Patients with mutations in NHEJ genes are unable to produce functional B cells and T cells and suffer from severe combined immunodeficiency (SCID). At telomeres Telomeres are normally protected by a "cap" that prevents them from being recognized as double-strand breaks. Loss of capping proteins causes telomere shortening and inappropriate joining by NHEJ, producing dicentric chromosomes which are then pulled apart during mitosis. Paradoxically, some NHEJ proteins are involved in telomere capping. For example, Ku localizes to telomeres and its deletion leads to shortened telomeres. Ku is also required for subtelomeric silencing, the process by which genes located near telomeres are turned off. Consequences of dysfunction Several human syndromes are associated with dysfunctional NHEJ. Hypomorphic mutations in LIG4 and XLF cause LIG4 syndrome and XLF-SCID, respectively. These syndromes share many features including cellular radiosensitivity, microcephaly and severe combined immunodeficiency (SCID) due to defective V(D)J recombination. Loss-of-function mutations in Artemis also cause SCID, but these patients do not show the neurological defects associated with LIG4 or XLF mutations. The difference in severity may be explained by the roles of the mutated proteins. Artemis is a nuclease and is thought to be required only for repair of DSBs with damaged ends, whereas DNA Ligase IV and XLF are required for all NHEJ events. 
Mutations in genes that participate in non-homologous end joining lead to ataxia-telangiectasia (ATM gene), Fanconi anemia (multiple genes), as well as hereditary breast and ovarian cancers (BRCA1 gene). Many NHEJ genes have been knocked out in mice. Deletion of XRCC4 or LIG4 causes embryonic lethality in mice, indicating that NHEJ is essential for viability in mammals. In contrast, mice lacking Ku or DNA-PKcs are viable, probably because low levels of end joining can still occur in the absence of these components. All NHEJ mutant mice show a SCID phenotype, sensitivity to ionizing radiation, and neuronal apoptosis. Aging A system was developed for measuring NHEJ efficiency in the mouse. NHEJ efficiency could be compared across tissues of the same mouse and in mice of different age. Efficiency was higher in the skin, lung and kidney fibroblasts, and lower in heart fibroblasts and brain astrocytes. Furthermore, NHEJ efficiency declined with age. The decline was 1.8 to 3.8-fold, depending on the tissue, in the 5-month-old compared to the 24-month-old mice. Reduced capability for NHEJ can lead to an increase in the number of unrepaired or faultily repaired DNA double-strand breaks that may then contribute to aging. An analysis of the level of NHEJ protein Ku80 in human, cow, and mouse indicated that Ku80 levels vary dramatically between species, and that these levels are strongly correlated with species longevity. List of proteins involved in NHEJ in human cells Ku70/80 DNA-PKcs DNA Ligase IV XRCC4 XLF Artemis DNA polymerase mu DNA polymerase lambda PNKP Aprataxin APLF BRCA1 BRCA2 CYREN References DNA repair Telomeres
Non-homologous end joining
[ "Biology" ]
2,879
[ "DNA repair", "Senescence", "Molecular genetics", "Cellular processes", "Telomeres" ]
1,300,358
https://en.wikipedia.org/wiki/Fock%20matrix
In the Hartree–Fock method of quantum mechanics, the Fock matrix is a matrix approximating the single-electron energy operator of a given quantum system in a given set of basis vectors. It is most often formed in computational chemistry when attempting to solve the Roothaan equations for an atomic or molecular system. The Fock matrix is actually an approximation to the true Hamiltonian operator of the quantum system. It includes the effects of electron-electron repulsion only in an average way. Because the Fock operator is a one-electron operator, it does not include the electron correlation energy. The Fock matrix is defined by the Fock operator. In its general form the Fock operator writes: F̂(i) = ĥ(i) + Σj [Ĵj(i) − K̂j(i)], where the sum over j runs over the total of N spin orbitals. In the closed-shell case, it can be simplified by considering only the spatial orbitals. Noting that the Coulomb terms are duplicated and the exchange terms are null between different spins, for the restricted case, which assumes closed-shell orbitals and single-determinantal wavefunctions, the Fock operator for the i-th electron is given by: F̂(i) = ĥ(i) + Σj [2Ĵj(i) − K̂j(i)], with the sum over j running over the n/2 occupied spatial orbitals, where: F̂(i) is the Fock operator for the i-th electron in the system, ĥ(i) is the one-electron Hamiltonian for the i-th electron, n is the number of electrons and n/2 is the number of occupied orbitals in the closed-shell system, Ĵj(i) is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system, K̂j(i) is the exchange operator, defining the quantum effect produced by exchanging two electrons. The Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron. For systems with unpaired electrons there are many choices of Fock matrices. See also Hartree–Fock method Unrestricted Hartree–Fock Restricted open-shell Hartree–Fock References Atomic, molecular, and optical physics Quantum chemistry Matrices
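As an illustration, a closed-shell Fock matrix can be assembled from a core Hamiltonian, two-electron integrals, and occupied orbital coefficients. The sketch below is a generic NumPy formulation, not taken from any particular quantum chemistry package; random arrays stand in for real integrals just so it runs:

```python
import numpy as np

def fock_matrix(hcore, eri, c_occ):
    """Closed-shell Fock matrix F = Hcore + J - K/2, built from the density
    matrix P = 2 * C_occ @ C_occ.T (factor 2 for doubly occupied orbitals).
    eri[m, n, l, s] holds the two-electron integral (mn|ls) in chemists'
    notation."""
    P = 2.0 * c_occ @ c_occ.T
    J = np.einsum("mnls,ls->mn", eri, P)  # Coulomb contribution
    K = np.einsum("mlsn,ls->mn", eri, P)  # exchange contribution
    return hcore + J - 0.5 * K

# Stand-in data: 4 basis functions, 1 doubly occupied orbital. A real
# calculation would take these from an integral library, and eri would
# have the proper 8-fold permutational symmetry.
rng = np.random.default_rng(0)
n = 4
h = rng.normal(size=(n, n))
h = 0.5 * (h + h.T)
eri = rng.normal(size=(n, n, n, n))
c_occ = rng.normal(size=(n, 1))
print(fock_matrix(h, eri, c_occ).shape)  # (4, 4)
```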
Fock matrix
[ "Physics", "Chemistry", "Mathematics" ]
420
[ " and optical physics stubs", "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Mathematical objects", "Quantum mechanics", "Matrices (mathematics)", "Theoretical chemistry", " molecular", "Matrix stubs", "Atomic", "Physical chemistry stubs", " and optical physi...
1,300,485
https://en.wikipedia.org/wiki/Desmin
Desmin is a protein that in humans is encoded by the DES gene. Desmin is a muscle-specific, type III intermediate filament that integrates the sarcolemma, Z disk, and nuclear membrane in sarcomeres and regulates sarcomere architecture. Structure Desmin is a 53.5 kDa protein composed of 470 amino acids, encoded by the human DES gene located on the long arm of chromosome 2. There are three major domains to the desmin protein: a conserved alpha-helical rod, a variable non-alpha-helical head, and a carboxy-terminal tail. Desmin, like all intermediate filaments, shows no polarity when assembled. The rod domain consists of 308 amino acids forming parallel alpha-helical coiled-coil dimers, with three linkers interrupting it. The rod domain connects to the head domain. The head domain, 84 amino acids with many arginine, serine, and aromatic residues, is important in filament assembly and dimer-dimer interactions. The tail domain is responsible for the integration of filaments and interaction with proteins and organelles. Desmin is only expressed in vertebrates; however, homologous proteins are found in many organisms. Desmin is a subunit of intermediate filaments in cardiac muscle, skeletal muscle and smooth muscle tissue. In cardiac muscle, desmin is present in Z-discs and intercalated discs. Desmin has been shown to interact with desmoplakin and αB-crystallin. Function Desmin was first described in 1976, first purified in 1977, the gene was cloned in 1989, and the first knockout mouse was created in 1996. The function of desmin has been deduced through studies in knockout mice. Desmin is one of the earliest protein markers for muscle tissue in embryogenesis, as it is detected in the somites. Although it is present early in the development of muscle cells, it is only expressed at low levels, and increases as the cell nears terminal differentiation. A similar protein, vimentin, is present in higher amounts during embryogenesis, while desmin is present in higher amounts after differentiation. This suggests that there may be some interaction between the two in determining muscle cell differentiation. However, desmin knockout mice develop normally and only experience defects later in life. Since desmin is expressed at a low level during differentiation, another protein may be able to compensate for desmin's function early in development but not later on. In adult desmin-null mice, hearts from 10-week-old animals showed drastic alterations in muscle architecture, including a misalignment of myofibrils and disorganization and swelling of mitochondria, findings that were more severe in cardiac relative to skeletal muscle. Cardiac tissue also exhibited progressive necrosis and calcification of the myocardium. A separate study examined this in more detail in cardiac tissue and found that murine hearts lacking desmin developed hypertrophic cardiomyopathy and chamber dilation combined with systolic dysfunction. In adult muscle, desmin forms a scaffold around the Z-disk of the sarcomere and connects the Z-disk to the subsarcolemmal cytoskeleton. It links the myofibrils laterally by connecting the Z-disks. Through its connection to the sarcomere, desmin connects the contractile apparatus to the cell nucleus, mitochondria, and post-synaptic areas of motor endplates. These connections maintain the structural and mechanical integrity of the cell during contraction while also helping in force transmission and longitudinal load bearing. 
In human heart failure, desmin expression is upregulated, which has been hypothesized to be a defense mechanism in an attempt to maintain normal sarcomere alignment amidst disease pathogenesis. There is some evidence that desmin may also connect the sarcomere to the extracellular matrix (ECM) through desmosomes, which could be important in signalling between the ECM and the sarcomere and could thereby regulate muscle contraction and movement. Finally, desmin may be important in mitochondrial function. When desmin is not functioning properly there is improper mitochondrial distribution, number, morphology and function. Since desmin links the mitochondria to the sarcomere, it may transmit information about contractions and energy need, and through this regulate the aerobic respiration rate of the muscle cell. Clinical significance Desmin-related myofibrillar myopathy (DRM or desminopathy) is a subgroup of the myofibrillar myopathy diseases and is the result of a mutation in the gene that codes for desmin which, by changing the protein structure, prevents it from forming protein filaments and instead forms aggregates of desmin and other proteins throughout the cell. Desmin (DES) mutations have been associated with restrictive, dilated, idiopathic, arrhythmogenic and non-compaction cardiomyopathy. The N-terminal part of the 1A desmin subdomain is a genetic hot spot region for mutations affecting filament assembly. Some of these DES mutations cause an aggregation of desmin within the cytoplasm. A mutation, p.A120D, was discovered in a family in which several members suffered sudden cardiac death. In addition, DES mutations frequently cause cardiac conduction diseases. Desmin has been evaluated for its role in assessing the depth of invasion of urothelial carcinoma in TURBT specimens. References External links GeneReviews/NIH/NCBI/UW entry on Myofibrillar Myopathy LOVD mutation database: DES Tumor markers
Desmin
[ "Chemistry", "Biology" ]
1,157
[ "Chemical pathology", "Tumor markers", "Biomarkers" ]
1,300,486
https://en.wikipedia.org/wiki/Vimentin
Vimentin is a structural protein that in humans is encoded by the VIM gene. Its name comes from the Latin vimentum, which refers to an array of flexible rods. Vimentin is a type III intermediate filament (IF) protein that is expressed in mesenchymal cells. IF proteins are found in all animal cells as well as bacteria. Intermediate filaments, along with tubulin-based microtubules and actin-based microfilaments, make up the cytoskeleton. All IF proteins are expressed in a highly developmentally regulated fashion; vimentin is the major cytoskeletal component of mesenchymal cells. Because of this, vimentin is often used as a marker of mesenchymally derived cells or cells undergoing an epithelial-to-mesenchymal transition (EMT) during both normal development and metastatic progression.

Structure

The assembly of the fibrous vimentin filament that forms the cytoskeleton follows a gradual sequence. The vimentin monomer has a central α-helical domain, capped on each end by non-helical amino (head) and carboxyl (tail) domains. Two monomers are likely co-translationally expressed in a way that facilitates their interaction, forming a coiled-coil dimer, which is the basic subunit of vimentin assembly. A pair of coiled-coil dimers connects in an antiparallel fashion to form a tetramer. Eight tetramers join to form what is known as the unit-length filament (ULF); ULFs then join end to end and elongate, followed by compaction, to form the fibrous filament. The α-helical sequences contain a pattern of hydrophobic amino acids that contribute to forming a "hydrophobic seal" on the surface of the helix. In addition, there is a periodic distribution of acidic and basic amino acids that seems to play an important role in stabilizing coiled-coil dimers. The spacing of the charged residues is optimal for ionic salt bridges, which allows for the stabilization of the α-helix structure. While this type of stabilization is intuitive for intrachain interactions, scientists have proposed that a switch from intrachain salt bridges formed by acidic and basic residues to interchain ionic associations contributes to the assembly of the filament.

Function

Vimentin plays a significant role in supporting and anchoring the position of the organelles in the cytosol. Vimentin is attached to the nucleus, endoplasmic reticulum, and mitochondria, either laterally or terminally. The dynamic nature of vimentin is important in offering flexibility to the cell. Under mechanical stress in vivo, vimentin provides cells with a resilience that the microtubule and actin filament networks lack; it is therefore generally accepted that vimentin is the cytoskeletal component responsible for maintaining cell integrity. (Cells without vimentin have been found to be extremely delicate when disturbed with a micropuncture.) Transgenic mice that lack vimentin appeared normal and did not show functional differences. It is possible that the microtubule network compensated for the absence of the intermediate filament network. This result supports an intimate interaction between microtubules and vimentin. Moreover, when microtubule depolymerizers were present, vimentin reorganization occurred, once again implying a relationship between the two systems. On the other hand, wounded mice that lack the vimentin gene heal more slowly than their wild-type counterparts.
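The Structure section describes a fixed stoichiometry at each assembly stage: two monomers per coiled-coil dimer, two dimers per antiparallel tetramer, and eight tetramers per unit-length filament. A minimal sketch, assuming only that stoichiometry, tallies the monomer count at each stage; the resulting 32 monomers per ULF is simple arithmetic, not a figure quoted in the article.

```python
# Minimal sketch of the vimentin assembly hierarchy described above.
# Per-stage multipliers come from the text; the derived monomer count
# per ULF is arithmetic, not a quoted figure.

ASSEMBLY_STAGES = [
    ("monomer",  1),  # single vimentin chain
    ("dimer",    2),  # parallel coiled-coil of two monomers
    ("tetramer", 2),  # two dimers, associated antiparallel
    ("ULF",      8),  # unit-length filament: eight tetramers
]

monomers = 1
for stage, multiplier in ASSEMBLY_STAGES:
    monomers *= multiplier
    print(f"{stage:>8}: {monomers} monomer(s)")
# -> a ULF contains 1 * 2 * 2 * 8 = 32 monomers; ULFs then join
#    end to end and compact into the mature filament.
```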
In essence, vimentin is responsible for maintaining cell shape and the integrity of the cytoplasm, and for stabilizing cytoskeletal interactions. Vimentin has been shown to eliminate toxic proteins in JUNQ and IPOD inclusion bodies in asymmetric division of mammalian cell lines. Vimentin has also been found to control the transport of low-density lipoprotein (LDL)-derived cholesterol from lysosomes to the site of esterification. When this transport of LDL-derived cholesterol is blocked inside the cell, cells store a much lower percentage of the lipoprotein than normal cells with vimentin. This dependence appears to be the first biochemical function in any cell shown to depend on a cellular intermediate filament network. This type of dependence has ramifications for adrenal cells, which rely on cholesteryl esters derived from LDL. Vimentin plays a role in aggresome formation, where it forms a cage surrounding a core of aggregated protein. In addition to its conventional intracellular localisation, vimentin can be found extracellularly. Vimentin can be expressed as a cell-surface protein and has suggested roles in immune reactions. It can also be released in phosphorylated forms to the extracellular space by activated macrophages; astrocytes are also known to release vimentin.

Clinical significance

Vimentin has been used as a sarcoma tumor marker to identify mesenchyme, although its specificity as a biomarker has been disputed by Jerad Gardner. Vimentin is present in spindle cell squamous cell carcinoma. Methylation of the vimentin gene has been established as a biomarker of colon cancer, and this is being used in the development of fecal tests for colon cancer. Statistically significant levels of vimentin gene methylation have also been observed in certain upper gastrointestinal pathologies such as Barrett's esophagus, esophageal adenocarcinoma, and intestinal-type gastric cancer. High levels of DNA methylation in the promoter region have also been associated with markedly decreased survival in hormone-positive breast cancers. Downregulation of vimentin was identified in the cystic variant of papillary thyroid carcinoma using a proteomic approach. Vimentin was discovered to be an attachment factor for SARS-CoV-2 by Nader Rahimi and colleagues.

See also

Anti-citrullinated protein antibody, for its use in the diagnosis of rheumatoid arthritis

Interactions

Vimentin has been shown to interact with: DSP, MEN1, MYST2, PKN1, PRKCI, PLEC, SPTAN1, UPP1 and YWHAZ. The 3' UTR of vimentin mRNA has been found to bind a 46 kDa protein.

References

Further reading

External links

Vimentin Images

Cytoskeleton Cell biology Tumor markers
Vimentin
[ "Chemistry", "Biology" ]
1,381
[ "Cell biology", "Chemical pathology", "Tumor markers", "Biomarkers" ]
1,300,489
https://en.wikipedia.org/wiki/Glial%20fibrillary%20acidic%20protein
Glial fibrillary acidic protein (GFAP) is a protein that is encoded by the GFAP gene in humans. It is a type III intermediate filament (IF) protein that is expressed by numerous cell types of the central nervous system (CNS), including astrocytes and ependymal cells during development. GFAP has also been found to be expressed in glomeruli and peritubular fibroblasts taken from rat kidneys, Leydig cells of the testis in both hamsters and humans, human keratinocytes, human osteocytes and chondrocytes, and stellate cells of the pancreas and liver in rats. GFAP is closely related to the other three non-epithelial type III IF family members, vimentin, desmin and peripherin, which are all involved in the structure and function of the cell's cytoskeleton. GFAP is thought to help maintain astrocyte mechanical strength as well as cell shape, but its exact function remains poorly understood, despite the number of studies using it as a cell marker. The protein was named and first isolated and characterized by Lawrence F. Eng in 1969. In humans, the gene is located on the long arm of chromosome 17.

Structure

Type III intermediate filaments contain three domains, named the head, rod and tail domains. The specific DNA sequence for the rod domain may differ between different type III intermediate filaments, but the structure of the protein is highly conserved. This rod domain coils around that of another filament to form a dimer, with the N-terminal and C-terminal of each filament aligned. Type III filaments such as GFAP are capable of forming both homodimers and heterodimers; GFAP can polymerize with other type III proteins. GFAP and other type III IF proteins cannot assemble with keratins, the type I and II intermediate filaments: in cells that express both proteins, two separate intermediate filament networks form, which can allow for specialization and increased variability. To form networks, the initial GFAP dimers combine to make staggered tetramers, which are the basic subunits of an intermediate filament. Since rod domains alone do not form filaments in vitro, the non-helical head and tail domains are necessary for filament formation. The head and tail regions have greater variability of sequence and structure. In spite of this increased variability, the head of GFAP contains two conserved arginines and an aromatic residue that have been shown to be required for proper assembly.

Function in the central nervous system

GFAP is expressed in the central nervous system in astrocyte cells, and the concentration of GFAP differs between regions of the CNS; the highest levels are found in the medulla oblongata, cervical spinal cord and hippocampus. It is involved in many important CNS processes, including cell communication and the functioning of the blood–brain barrier. GFAP has been shown to play a role in mitosis by adjusting the filament network present in the cell. During mitosis, there is an increase in the amount of phosphorylated GFAP and a movement of this modified protein to the cleavage furrow. There are different sets of kinases at work; cdc2 kinase acts only at the G2 phase transition, while other GFAP kinases are active at the cleavage furrow alone. This specificity of location allows for precise regulation of GFAP distribution to the daughter cells. Studies have also shown that GFAP knockout mice undergo multiple degenerative processes, including abnormal myelination, deterioration of white matter structure, and functional/structural impairment of the blood–brain barrier.
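The Structure section describes the rod domain coiling around a partner to form a dimer. In coiled-coil proteins generally, this packing is classically attributed to a heptad repeat (positions labelled a–g) with hydrophobic residues enriched at positions a and d; that framing, the example sequence, and the scoring scheme below are illustrative assumptions, not GFAP-specific data from this article. A minimal sketch scans a sequence window for that periodicity:

```python
# Illustrative sketch: score a sequence window for heptad-repeat
# hydrophobic periodicity, the classical signature of coiled-coil
# dimerization. The hydrophobic set, scoring scheme, and example
# fragment are illustrative assumptions, not GFAP data.

HYDROPHOBIC = set("AILMFVWY")

def heptad_score(seq: str, frame: int) -> float:
    """Fraction of heptad 'a' and 'd' positions (frame offset 0-6)
    occupied by hydrophobic residues."""
    positions = [i for i in range(len(seq)) if (i - frame) % 7 in (0, 3)]
    if not positions:
        return 0.0
    hits = sum(seq[i] in HYDROPHOBIC for i in positions)
    return hits / len(positions)

def best_heptad_frame(seq: str) -> tuple[int, float]:
    """Return the frame with the strongest a/d hydrophobic enrichment."""
    return max(((f, heptad_score(seq, f)) for f in range(7)),
               key=lambda fs: fs[1])

# Hypothetical, idealized rod-like fragment (hydrophobic at a/d):
fragment = "LKELEDALQRAKQDLARLLKE"
frame, score = best_heptad_frame(fragment)
print(f"best frame {frame}: {score:.0%} of a/d positions hydrophobic")
```

A real rod domain would score well below 100% but still above background; the point of the sketch is only to make the "periodic hydrophobic pattern" idea concrete.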
These data suggest that GFAP is necessary for many critical roles in the CNS. GFAP is proposed to play a role in astrocyte–neuron interactions as well as cell-cell communication. In vitro, astrocytes in which GFAP has been depleted using antisense RNA do not form the extensions usually present alongside neurons. Studies have also shown that Purkinje cells in GFAP knockout mice do not exhibit normal structure, and these mice demonstrate deficits in conditioning experiments such as the eye-blink task. Biochemical studies of GFAP have shown MgCl2- and/or calcium/calmodulin-dependent phosphorylation at various serine or threonine residues by PKC and PKA, two kinases that are important for the cytoplasmic transduction of signals. These data highlight the importance of GFAP for cell-cell communication. GFAP has also been shown to be important in repair after CNS injury, and more specifically in the formation of glial scars in a multitude of locations throughout the CNS, including the eye and brain.

Autoimmune GFAP astrocytopathy

In 2016, a CNS inflammatory disorder associated with anti-GFAP antibodies was described. Patients with autoimmune GFAP astrocytopathy developed meningoencephalomyelitis, with inflammation of the meninges, the brain parenchyma, and the spinal cord. About one third of cases were associated with various cancers, and many also expressed other CNS autoantibodies. Meningoencephalitis is the predominant clinical presentation of autoimmune GFAP astrocytopathy in published case series. It can also appear in association with encephalomyelitis and parkinsonism.

Disease states

There are multiple disorders associated with improper GFAP regulation, and injury can cause glial cells to react in detrimental ways. Glial scarring is a consequence of several neurodegenerative conditions, as well as of injury that severs neural material. The scar is formed by astrocytes interacting with fibrous tissue to re-establish the glial margins around the central injury core, and it is partially caused by up-regulation of GFAP. Another condition directly related to GFAP is Alexander disease, a rare genetic disorder. Its symptoms include mental and physical retardation, dementia, enlargement of the brain and head, spasticity (stiffness of arms and/or legs), and seizures. The cellular mechanism of the disease is the presence of cytoplasmic accumulations containing GFAP and heat shock proteins, known as Rosenthal fibers. Mutations in the coding region of GFAP have been shown to contribute to the accumulation of Rosenthal fibers. Some of these mutations have been proposed to be detrimental to cytoskeleton formation and to increase caspase 3 activity, which would lead to increased apoptosis of cells carrying these mutations. GFAP therefore plays an important role in the pathogenesis of Alexander disease. Notably, the expression of some GFAP isoforms has been reported to decrease in response to acute infection or neurodegeneration. A reduction in GFAP expression has also been reported in Wernicke's encephalopathy. The HIV-1 viral envelope glycoprotein gp120 can directly inhibit the phosphorylation of GFAP, and GFAP levels can decrease in response to chronic infection with HIV-1, varicella zoster, and pseudorabies. Decreases in GFAP expression have been reported in Down's syndrome, schizophrenia, bipolar disorder and depression.
The generally high abundance of GFAP in the CNS has led to great interest in GFAP as a blood biomarker of acute injury to the brain and spinal cord in different types of disease mechanisms, such as traumatic brain injury and cerebrovascular disease. Elevated blood levels of GFAP are also found in neuroinflammatory diseases, such as multiple sclerosis and neuromyelitis optica, a disease targeting astrocytes. In a study of 22 child patients undergoing extracorporeal membrane oxygenation (ECMO), children with abnormally high levels of GFAP were 13 times more likely to die and 11 times more likely to suffer brain injury than children with normal GFAP levels.

Interactions

Glial fibrillary acidic protein has been shown to interact with MEN1 and PSEN1.

Isoforms

Although GFAP alpha is the only isoform that is able to assemble homomerically, GFAP has 8 different isoforms that label distinct subpopulations of astrocytes in the human and rodent brain. These isoforms include GFAP kappa, GFAP+1 and the currently best-researched GFAP delta. GFAP delta appears to be linked with neural stem cells (NSCs) and may be involved in migration. GFAP+1 is an antibody that labels two isoforms. Although GFAP+1-positive astrocytes are supposedly not reactive astrocytes, they have a wide variety of morphologies, including processes of up to 0.95 mm (seen in the human brain). The expression of GFAP+1-positive astrocytes is linked with old age and the onset of Alzheimer's disease (AD) pathology.

See also

17q21.31 microdeletion syndrome (Koolen–de Vries syndrome)
GFAP stain

References

Further reading

External links

GeneReviews/NCBI/NIH/UW entry on Alexander disease
OMIM entries on Alexander disease

Proteins Biology of bipolar disorder
Glial fibrillary acidic protein
[ "Chemistry" ]
1,956
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]