Dataset columns: id (int64, 580 to 79M) · url (string, 31 to 175 chars) · text (string, 9 to 245k chars) · source (string, 1 to 109 chars) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
44,943,575
https://en.wikipedia.org/wiki/Magnesium%20glycinate
Magnesium glycinate, also known as magnesium diglycinate or magnesium bisglycinate, is the magnesium salt of glycine (one magnesium and two glycine molecules), and is sold as a dietary supplement. It contains 14.1% elemental magnesium by mass. Magnesium glycinate is often "buffered" with magnesium oxide, but it is also available in its pure, non-buffered form. Uses Magnesium glycinate has been studied for its applicability to patients with a bowel resection and to pregnancy-induced leg cramps. Less scientific research exists on magnesium glycinate in therapeutic applications than on other, more common forms of magnesium salt such as magnesium chloride, oxide or citrate. Magnesium glycinate has been considered in the context of magnesium's potential influence on systems associated with the development of depression. See also Magnesium (pharmaceutical preparation) Magnesium deficiency (medicine) Magnesium in biology References Dietary supplements Glycinates Magnesium compounds Metal-amino acid complexes
Magnesium glycinate
Chemistry
211
3,085,232
https://en.wikipedia.org/wiki/Baxter%20Althane%20disaster
The Baxter Althane disaster in autumn 2001 was a series of 53 sudden deaths of kidney failure patients in Spain, Croatia, Italy, Germany, Taiwan, Colombia and the USA (mainly Nebraska and Texas). All had received hospital treatment with Althane hemodialysis equipment, a product range manufactured by Baxter International, USA. Although official investigations initially found no link between the cases, Baxter eventually published its own findings, admitting that a perfluorohydrocarbon-based cleaning fluid had not been properly removed from the tubing during manufacture. Baxter also announced the discontinuation and permanent recall of all Althane equipment. Families of most non-US victims were compensated by Baxter voluntarily, while US plaintiffs settled via a class action lawsuit. The company continues to manufacture dialysis machines of a newer design. References External links Baxter News Release "Baxter Corporation responds to Croatian Investigators", 7 Jan 2002 Baxter News Release "Baxter announces agreement with families... in Spain", 28 Nov 2001 2001 health disasters 2001 industrial disasters 2001 disasters in the United States 2001 disasters in Europe Baxter International Medical scandals Drug safety
Baxter Althane disaster
Chemistry
223
26,520,106
https://en.wikipedia.org/wiki/Lie%20point%20symmetry
Lie point symmetry is a concept in advanced mathematics. Towards the end of the nineteenth century, Sophus Lie introduced the notion of a Lie group in order to study the solutions of ordinary differential equations (ODEs). He showed the following main property: the order of an ordinary differential equation can be reduced by one if it is invariant under a one-parameter Lie group of point transformations. This observation unified and extended the available integration techniques. Lie devoted the remainder of his mathematical career to developing these continuous groups, which now have an impact on many areas of mathematically based sciences. The applications of Lie groups to differential systems were mainly established by Lie and Emmy Noether, and then advocated by Élie Cartan. Roughly speaking, a Lie point symmetry of a system is a local group of transformations that maps every solution of the system to another solution of the same system. In other words, it maps the solution set of the system to itself. Elementary examples of Lie groups are translations, rotations and scalings. Lie symmetry theory is a well-established subject; it deals with continuous symmetries, as opposed to, for example, discrete symmetries. The literature for this theory can be found, among other places, in these notes. Overview Types of symmetries Lie groups, and hence their infinitesimal generators, can be naturally "extended" to act on the space of independent variables, state variables (dependent variables) and derivatives of the state variables up to any finite order. There are many other kinds of symmetries. For example, contact transformations let the coefficients of the transformation's infinitesimal generator depend also on first derivatives of the coordinates. Lie–Bäcklund transformations let them involve derivatives up to an arbitrary order. The possibility of the existence of such symmetries was recognized by Noether. For Lie point symmetries, the coefficients of the infinitesimal generators depend only on the coordinates. Applications Lie symmetries were introduced by Lie in order to solve ordinary differential equations. Another application of symmetry methods is to reduce systems of differential equations, finding equivalent systems of differential equations of simpler form. This is called reduction. In the literature, one can find the classical reduction process and the moving frame-based reduction process. Symmetry groups can also be used for classifying different symmetry classes of solutions. Geometrical framework Infinitesimal approach Lie's fundamental theorems show that Lie groups can be characterized by elements known as infinitesimal generators. These mathematical objects form a Lie algebra of infinitesimal generators. Deduced "infinitesimal symmetry conditions" (the defining equations of the symmetry group) can be explicitly solved in order to find the closed form of symmetry groups, and thus the associated infinitesimal generators. Let Z = (z1, ..., zn) be the set of coordinates on which a system is defined, where n is the cardinality of Z. An infinitesimal generator δ in the field R(Z) of rational functions is a linear operator that has R in its kernel and that satisfies the Leibniz rule: δ(f1·f2) = f1·δ(f2) + f2·δ(f1). In the canonical basis of elementary derivations ∂/∂z1, ..., ∂/∂zn, it is written as: δ = ξ1(Z)·∂/∂z1 + ... + ξn(Z)·∂/∂zn, where ξi is in R(Z) for all i. Lie groups and Lie algebras of infinitesimal generators Lie algebras can be generated by a generating set of infinitesimal generators as defined above. To every Lie group, one can associate a Lie algebra.
Roughly, a Lie algebra is an algebra constituted by a vector space equipped with the Lie bracket as an additional operation. The base field of a Lie algebra depends on the concept of invariant. Here only finite-dimensional Lie algebras are considered. Continuous dynamical systems A dynamical system (or flow) is a one-parameter group action. Let us denote such a dynamical system by φ : G × M → M, more precisely, a (left-)action of a group G on a manifold M, such that for every point z in M: φ(e, z) = z, where e is the neutral element of G, and φ(g, φ(h, z)) = φ(gh, z) for all g, h in G. A continuous dynamical system is defined on a group G that can be identified with R, i.e. the group elements vary continuously. Invariants An invariant, roughly speaking, is an element that does not change under a transformation. Definition of Lie point symmetries In this paragraph, we consider precisely expanded Lie point symmetries, i.e. we work in an expanded space, meaning that the distinction between the independent variable, the state variables and the parameters is avoided as much as possible. A symmetry group of a system is a continuous dynamical system defined on a local Lie group G acting on a manifold M. For the sake of clarity, we restrict ourselves to n-dimensional real manifolds M = Rn, where n is the number of system coordinates. Lie point symmetries of algebraic systems Let us define algebraic systems used in the forthcoming symmetry definition. Algebraic systems Let F = (f1, ..., fk) be a finite set of rational functions over the field R, where each fi = pi/qi with pi and qi polynomials in R[Z], i.e. in the variables Z = (z1, ..., zn) with coefficients in R. An algebraic system associated to F is defined by the following equalities and inequalities: fi(Z) = 0 and qi(Z) ≠ 0 for i = 1, ..., k. An algebraic system defined by F is regular (a.k.a. smooth) if the system is of maximal rank k, meaning that the Jacobian matrix (∂fi/∂zj) is of rank k at every solution of the associated semi-algebraic variety. Definition of Lie point symmetries The following theorem (see th. 2.8 in ch. 2 of the cited literature) gives necessary and sufficient conditions so that a local Lie group G is a symmetry group of an algebraic system. Theorem. Let G be a connected local Lie group of a continuous dynamical system acting in the n-dimensional space Rn. Let F = (f1, ..., fk), with k ≤ n, define a regular system of algebraic equations fi(Z) = 0 for i = 1, ..., k. Then G is a symmetry group of this algebraic system if, and only if, δ(fi)(Z) = 0 for every Z that satisfies the system and for every infinitesimal generator δ in the Lie algebra of G. Example Consider an algebraic system defined on a space of 6 variables, and an infinitesimal generator δ associated to one of its one-parameter symmetry groups, acting on 4 of the variables. One can verify that δ(fi)(Z) = 0 holds for any Z at which the algebraic system vanishes; thus the symmetry condition of the theorem is satisfied. Lie point symmetries of dynamical systems Let us define systems of first-order ODEs used in the forthcoming symmetry definition. Systems of ODEs and associated infinitesimal generators Let d/dt be a derivation w.r.t. the continuous independent variable t. We consider two sets: the state variables X = (x1, ..., xk) and the parameters Θ = (θ1, ..., θl). The associated coordinate set is defined by Z = (t, x1, ..., xk, θ1, ..., θl) and its cardinal is n = 1 + k + l. With these notations, a system of first-order ODEs is a system dxi/dt = fi(Z), for i = 1, ..., k, where the set of functions (f1, ..., fk) specifies the evolution of the state variables w.r.t. the independent variable. The elements of X are called state variables, those of Θ parameters. One can also associate a continuous dynamical system to a system of ODEs by resolving its equations. An infinitesimal generator is a derivation that is closely related to systems of ODEs (more precisely, to continuous dynamical systems). For the link between a system of ODEs, the associated vector field and the infinitesimal generator, see section 1.3 of the cited literature.
The infinitesimal generator associated to a system of ODEs, described as above, is defined with the same notations as follows: δ = ∂/∂t + f1(Z)·∂/∂x1 + ... + fk(Z)·∂/∂xk. Definition of Lie point symmetries Here is a geometrical definition of such symmetries. Let S be a continuous dynamical system and δ its infinitesimal generator. A continuous dynamical system with infinitesimal generator δ* is a Lie point symmetry of S if, and only if, it sends every orbit of S to an orbit. Hence, the infinitesimal generators satisfy the following relation based on the Lie bracket: [δ*, δ] = λδ, where λ is any constant of δ, i.e. δ(λ) = 0. These generators are linearly independent. One does not need the explicit formulas of the flow of S in order to compute the infinitesimal generators of its symmetries. Example Consider Pierre François Verhulst's logistic growth model with linear predation, where the state variable x represents a population. The parameter a is the difference between the growth and predation rate, and the parameter b corresponds to the receptive capacity of the environment: dx/dt = ax(1 − x/b), with a and b treated as additional coordinates satisfying da/dt = 0 and db/dt = 0. The continuous dynamical system associated to this system of ODEs is its flow, obtained by resolving the equations. The independent variable t varies continuously; thus the associated group can be identified with R. The infinitesimal generator associated to this system of ODEs is: δ = ∂/∂t + ax(1 − x/b)·∂/∂x. The following infinitesimal generators belong to the 2-dimensional symmetry group of the system: the time translation ∂/∂t and the scaling x·∂/∂x + b·∂/∂b. Software There exist many software packages in this area. For example, the liesymm package of Maple provides some Lie symmetry methods for PDEs. It handles the integration of determining systems and also differential forms. Despite its success on small systems, its integration capabilities for solving determining systems automatically are limited by complexity issues. The DETools package uses the prolongation of vector fields to search for Lie symmetries of ODEs. Finding Lie symmetries for ODEs, in the general case, may be as complicated as solving the original system. References Lie groups Symmetry
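To make the logistic example concrete, the following Python (sympy) sketch checks by direct substitution that the scaling generator x·∂/∂x + b·∂/∂b is indeed a symmetry; the explicit form of the model and of the generators is a reconstruction from the surrounding text rather than a quotation of the original article.

import sympy as sp

t, x, a, b, eps = sp.symbols('t x a b epsilon', positive=True)
f = a * x * (1 - x / b)          # right-hand side of dx/dt (assumed model form)

# Under x -> exp(eps)*x, b -> exp(eps)*b (with t unchanged), dx/dt picks up a
# factor exp(eps), so invariance requires f to scale by the same factor:
f_scaled = f.subs({x: sp.exp(eps) * x, b: sp.exp(eps) * b}, simultaneous=True)
print(sp.simplify(f_scaled - sp.exp(eps) * f))   # 0  => scaling is a symmetry
# Time translation d/dt is a symmetry as well, since f does not depend on t.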
Lie point symmetry
Physics,Mathematics
1,755
1,410,760
https://en.wikipedia.org/wiki/Molar%20heat%20capacity
The molar heat capacity of a chemical substance is the amount of energy that must be added, in the form of heat, to one mole of the substance in order to cause an increase of one unit in its temperature. Alternatively, it is the heat capacity of a sample of the substance divided by the amount of substance of the sample; or also the specific heat capacity of the substance times its molar mass. The SI unit of molar heat capacity is joule per kelvin per mole, J⋅K−1⋅mol−1. Like the specific heat, the measured molar heat capacity of a substance, especially a gas, may be significantly higher when the sample is allowed to expand as it is heated (at constant pressure, or isobaric) than when it is heated in a closed vessel that prevents expansion (at constant volume, or isochoric). The ratio between the two, however, is the same heat capacity ratio obtained from the corresponding specific heat capacities. This property is most relevant in chemistry, when amounts of substances are often specified in moles rather than by mass or volume. The molar heat capacity generally increases with the molar mass, often varies with temperature and pressure, and is different for each state of matter. For example, at atmospheric pressure, the (isobaric) molar heat capacity of water just above the melting point is about 76 J⋅K−1⋅mol−1, but that of ice just below that point is about 37.84 J⋅K−1⋅mol−1. While the substance is undergoing a phase transition, such as melting or boiling, its molar heat capacity is technically infinite, because the heat goes into changing its state rather than raising its temperature. The concept is not appropriate for substances whose precise composition is not known, or whose molar mass is not well defined, such as polymers and oligomers of indeterminate molecular size. A closely related property of a substance is the heat capacity per mole of atoms, or atom-molar heat capacity, in which the heat capacity of the sample is divided by the number of moles of atoms instead of moles of molecules. So, for example, the atom-molar heat capacity of water is 1/3 of its molar heat capacity, namely 25.3 J⋅K−1⋅mol−1. In informal chemistry contexts, the molar heat capacity may be called just "heat capacity" or "specific heat". However, international standards now recommend that "specific heat capacity" always refer to capacity per unit of mass, to avoid possible confusion. Therefore, the word "molar", not "specific", should always be used for this quantity. Definition The molar heat capacity of a substance, which may be denoted by cm, is the heat capacity C of a sample of the substance, divided by the amount (moles) n of the substance in the sample: cm = C/n = Q/(n ΔT), where Q is the amount of heat needed to raise the temperature of the sample by ΔT. Obviously, this parameter cannot be computed when n is not known or defined. Like the heat capacity of an object, the molar heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature T of the sample and the pressure P applied to it. Therefore, it should be considered a function cm(P,T) of those two variables. These parameters are usually specified when giving the molar heat capacity of a substance. For example, "H2O: 75.338 J⋅K−1⋅mol−1 (25 °C, 101.325 kPa)". When not specified, published values of the molar heat capacity cm generally are valid for some standard conditions for temperature and pressure. However, the dependency of cm(P,T) on starting temperature and pressure can often be ignored in practical contexts, e.g.
when working in narrow ranges of those variables. In those contexts one can usually omit the qualifier (P,T), and approximate the molar heat capacity by a constant cm suitable for those ranges. Since the molar heat capacity of a substance is the specific heat c times the molar mass M of the substance, its numerical value is generally smaller than that of the specific heat. Paraffin wax, for example, has a specific heat of about 2500 J⋅K−1⋅kg−1 but a molar heat capacity of about 900 J⋅K−1⋅mol−1. The molar heat capacity is an "intensive" property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it.) Variations The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured molar heat capacity, even for the same starting pressure P and starting temperature T. Two particular choices are widely used: If the pressure is kept constant (for instance, at the ambient atmospheric pressure), and the sample is allowed to expand, the expansion generates work as the force from the pressure displaces the enclosure. That work must come from the heat energy provided. The value thus obtained is said to be the molar heat capacity at constant pressure (or isobaric), and is often denoted cP,m or cp,m. On the other hand, if the expansion is prevented — for example by a sufficiently rigid enclosure, or by increasing the external pressure to counteract the internal one — no work is generated, and the heat energy that would have gone into it must instead contribute to the internal energy of the object, including raising its temperature by an extra amount. The value obtained this way is said to be the molar heat capacity at constant volume (or isochoric) and denoted cV,m or cv,m. The value of cV,m is always less than the value of cP,m. This difference is particularly notable in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. All methods for the measurement of specific heat apply to molar heat capacity as well. Units The SI unit of molar heat capacity is joule per kelvin per mole (J/(K⋅mol), J/(K mol), J K−1 mol−1, etc.). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per mole (J/(°C⋅mol)). In chemistry, heat amounts are still often measured in calories. Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat: the "small calorie" (or "gram-calorie", "cal") is 4.184 J, exactly; the "large calorie" (also "kilocalorie", "kilogram-calorie", or "food calorie"; "kcal" or "Cal") is 1000 small calories, that is, 4184 J, exactly. When heat is measured in these units, the unit of molar heat capacity is usually 1 cal/(°C⋅mol) ("small calorie") = 4.184 J⋅K−1⋅mol−1 or 1 kcal/(°C⋅mol) ("large calorie") = 4184 J⋅K−1⋅mol−1. The molar heat capacity of a substance has the same dimension as the heat capacity of an object; namely, L2⋅M⋅T−2⋅Θ−1, or M(L/T)2/Θ. (Indeed, it is the heat capacity of the object that consists of an Avogadro number of molecules of the substance.) Therefore, the SI unit J⋅K−1⋅mol−1 is equivalent to kilogram metre squared per second squared per kelvin (kg⋅m2⋅K−1⋅s−2).
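As a quick numeric illustration (a minimal Python sketch added here, not part of the original article), the defining relation cm = C/n = Q/(n ΔT) and the calorie conversions above can be checked as follows:

def molar_heat_capacity(Q, n, dT):
    """Heat Q (in J) that raises n moles by dT kelvins -> cm in J/(K*mol)."""
    return Q / (n * dT)

# Heating 2 mol of water by 3 K should take Q = n*cm*dT, using the value
# quoted above for 25 degrees C and 101.325 kPa:
cm_water = 75.338                       # J/(K*mol)
Q = 2 * cm_water * 3                    # about 452 J
print(molar_heat_capacity(Q, 2, 3))     # recovers 75.338
# Calorie conversions from the Units section:
print(1 * 4.184)      # 1 cal  = 4.184 J
print(1000 * 4.184)   # 1 kcal = 4184 J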
Physical basis Monatomic gases The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. Quantum mechanics predicts that, at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy. Therefore, when a certain number N of atoms of a monatomic gas receives an input Q of heat energy, in a container of fixed volume, the kinetic energy of each atom will increase by Q/N, independently of the atom's mass. This assumption is the foundation of the theory of ideal gases. In other words, that theory predicts that the molar heat capacity at constant volume cV,m of all monatomic gases will be the same; specifically, cV,m = (3/2)R where R is the ideal gas constant, about 8.31446 J⋅K−1⋅mol−1 (which is the product of the Boltzmann constant kB and the Avogadro constant). And, indeed, the experimental values of cV,m for the noble gases helium, neon, argon, krypton, and xenon (at 1 atm and 25 °C) are all 12.5 J⋅K−1⋅mol−1, which is (3/2)R, even though their atomic weights range from 4 to 131. The same theory predicts that the molar heat capacity of a monatomic gas at constant pressure will be cP,m = cV,m + R = (5/2)R. This prediction matches the experimental values, which, for helium through xenon, are 20.78, 20.79, 20.85, 20.95, and 21.01 J⋅K−1⋅mol−1, respectively; very close to the theoretical (5/2)R = 20.78 J⋅K−1⋅mol−1. Therefore, the specific heat (per unit of mass, not per mole) of a monatomic gas will be inversely proportional to its (adimensional) relative atomic mass A. That is, approximately, cV = (12470 J⋅K−1⋅kg−1)/A and cP = (20786 J⋅K−1⋅kg−1)/A. Polyatomic gases Degrees of freedom A polyatomic molecule (consisting of two or more atoms bound together) can store heat energy in other forms besides its kinetic energy. These forms include rotation of the molecule, and vibration of the atoms relative to its center of mass. These extra degrees of freedom contribute to the molar heat capacity of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go into those other degrees of freedom. Thus, in order to achieve the same increase in temperature, more heat energy will have to be provided to a mol of that substance than to a mol of a monatomic gas. Substances with high atomic count per molecule, like octane, can therefore have a very large heat capacity per mole, and yet a relatively small specific heat (per unit mass). If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy in the amount of (1/2)kT, where k is the Boltzmann constant, and T is the temperature. If the number of degrees of freedom of the molecule is f, then each molecule would be holding, on average, a total energy equal to (f/2)kT. Then the molar heat capacity (at constant volume) would be cV,m = (f/2)R where R is the ideal gas constant. According to Mayer's relation, the molar heat capacity at constant pressure would be cP,m = cV,m + R = (f/2)R + R = ((f + 2)/2)R. Thus, each additional degree of freedom will contribute (1/2)R to the molar heat capacity of the gas (both cV,m and cP,m).
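The equipartition predictions just described are easy to check numerically. The following minimal Python sketch (an addition, not part of the original article) computes cV,m = (f/2)R and cP,m = ((f + 2)/2)R, and the per-mass values for monatomic gases:

R = 8.31446  # ideal gas constant, J/(K*mol)

def molar_heat_capacities(f):
    """Equipartition prediction for f active degrees of freedom."""
    cV = f * R / 2
    cP = (f + 2) * R / 2
    return cV, cP

cV, cP = molar_heat_capacities(3)        # monatomic gas: f = 3
print(round(cV, 2), round(cP, 2))        # 12.47 20.79, matching the text
# Specific heats scale as 1/A (relative atomic mass): helium vs xenon
for A in (4.0026, 131.293):
    print(round(cV * 1000 / A), round(cP * 1000 / A))  # J/(K*kg), i.e. 12470/A and 20786/A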
In particular, each molecule of a monatomic gas has only f = 3 degrees of freedom, namely the components of its velocity vector; therefore cV,m = (3/2)R and cP,m = (5/2)R. Rotational modes of a diatomic molecule For example, the molar heat capacity of nitrogen at constant volume is 20.6 J⋅K−1⋅mol−1 (at 15 °C, 1 atm), which is 2.49 R. From the theoretical equation cV,m = (f/2)R, one concludes that each molecule has f = 5 degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. The degrees of freedom due to translations and rotations are called the rigid degrees of freedom, since they do not involve any deformation of the molecule. Because of those two extra degrees of freedom, the molar heat capacity cV,m of N2 (20.6 J⋅K−1⋅mol−1) is greater than that of a hypothetical monatomic gas (12.5 J⋅K−1⋅mol−1) by a factor of 5/3. Frozen and active degrees of freedom According to classical mechanics, a diatomic molecule like nitrogen should have more degrees of internal freedom, corresponding to vibration of the two atoms that stretch and compress the bond between them. For thermodynamic purposes, each direction in which an atom can independently vibrate relative to the rest of the molecule introduces two degrees of freedom: one associated with the potential energy from distorting the bonds, and one for the kinetic energy of the atom's motion. In a diatomic molecule like N2, there is only one direction for the vibration, and the motions of the two atoms must be opposite but equal; so there are only two degrees of vibrational freedom. That would bring f up to 7, and cV,m to 3.5 R. The reason why these vibrations are not absorbing their expected fraction of heat energy input is provided by quantum mechanics. According to that theory, the energy stored in each degree of freedom must increase or decrease only in certain amounts (quanta). Therefore, if the temperature T of the system is not high enough, the average energy that would be available for each degree of freedom ((1/2)kT) may be less than the corresponding minimum quantum. If the temperature is low enough, that may be the case for practically all molecules. One then says that those degrees of freedom are "frozen". The molar heat capacity of the gas will then be determined only by the "active" degrees of freedom, those that, for most molecules, can receive enough energy to overcome the quantum threshold. For each degree of freedom, there is an approximate critical temperature at which it "thaws" ("unfreezes") and becomes active, thus being able to hold heat energy. For the three translational degrees of freedom of molecules in a gas, this critical temperature is extremely small, so they can be assumed to be always active. For the rotational degrees of freedom, the thawing temperature is usually a few tens of kelvins (although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely "unfreeze" until considerably higher temperatures are reached). Vibration modes of diatomic molecules generally start to activate only well above room temperature. In the case of nitrogen, the rotational degrees of freedom are fully active already at −173 °C (100 K, just 23 K above the boiling point).
On the other hand, the vibration modes only start to become active around 350 °C (about 620 K). Accordingly, the molar heat capacity cP,m is nearly constant at 29.1 J⋅K−1⋅mol−1 from 100 K to about 300 °C. At about that temperature, it starts to increase rapidly, then it slows down again. It is 35.5 J⋅K−1⋅mol−1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. The last value corresponds almost exactly to the predicted value for f = 7. The following is a table of some constant-pressure molar heat capacities cP,m of various diatomic gases at standard temperature (25 °C = 298 K), at 500 °C, and at 5000 °C, and the apparent number of degrees of freedom f* estimated by the formula f* = 2cP,m/R − 2. (The table itself is omitted in this extract; the footnote (*) marks a value measured at 59 °C, the boiling point.) The quantum harmonic oscillator approximation implies that the spacing of energy levels of vibrational modes is inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule. This fact explains why the vibrational modes of heavier molecules such as Br2 are active at lower temperatures. The molar heat capacity of Br2 at room temperature is consistent with f = 7 degrees of freedom, the maximum for a diatomic molecule. At high enough temperatures, all diatomic gases approach this value. Rotational modes of single atoms Quantum mechanics also explains why the specific heat of monatomic gases is well predicted by the ideal gas theory with the assumption that each molecule is a point mass that has only the f = 3 translational degrees of freedom. According to classical mechanics, since atoms have non-zero size, they should also have three rotational degrees of freedom, or f = 6 in total. Likewise, the diatomic nitrogen molecule should have an additional rotation mode, namely about the line of the two atoms; and thus have f = 6 too. In the classical view, each of these modes should store an equal share of the heat energy. However, according to quantum mechanics, the energy difference between the allowed (quantized) rotation states is inversely proportional to the moment of inertia about the corresponding axis of rotation. Because the moment of inertia of a single atom is exceedingly small, the activation temperature for its rotational modes is extremely high. The same applies to the moment of inertia of a diatomic molecule (or a linear polyatomic one) about the internuclear axis, which is why that mode of rotation is not active in general. On the other hand, electrons and nuclei can exist in excited states and, in a few exceptional cases, they may be active even at room temperature, or even at cryogenic temperatures. Polyatomic gases The set of all possible ways to infinitesimally displace the n atoms of a polyatomic gas molecule is a linear space of dimension 3n, because each atom can be independently displaced in each of three orthogonal axis directions. However, some three of these dimensions are just translation of the molecule by an infinitesimal displacement vector, and others are just rigid rotations of it by an infinitesimal angle about some axis. Still others may correspond to relative rotation of two parts of the molecule about a single bond that connects them. The independent deformation modes—linearly independent ways to actually deform the molecule, that strain its bonds—are only the remaining dimensions of this space.
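The "apparent number of degrees of freedom" formula quoted above, f* = 2cP,m/R − 2, simply inverts the equipartition relation. A tiny Python helper (added here as an illustration), applied to the nitrogen values from the text:

R = 8.31446  # J/(K*mol)

def apparent_dof(cP_m):
    """Invert cP,m = (f + 2)*R/2 to estimate the number of active degrees of freedom."""
    return 2 * cP_m / R - 2

print(round(apparent_dof(29.1), 1))   # ~5.0 near room temperature (rigid modes only)
print(round(apparent_dof(37.5), 1))   # ~7.0 at 3500 degrees C (vibration fully active)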
As in the case of diatomic molecules, each of these deformation modes counts as two vibrational degrees of freedom for energy storage purposes: one for the potential energy stored in the strained bonds, and one for the extra kinetic energy of the atoms as they vibrate about the rest configuration of the molecule. In particular, if the molecule is linear (with all atoms on a straight line), it has only two non-trivial rotation modes, since rotation about its own axis does not displace any atom. Therefore, it has 3n − 5 actual deformation modes. The number of energy-storing degrees of freedom is then f = 3 + 2 + 2(3n − 5) = 6n − 5. For example, the linear nitrous oxide molecule (with n = 3) has 3n − 5 = 4 independent infinitesimal deformation modes. Two of them can be described as stretching one of the bonds while the other retains its normal length. The other two are modes in which the molecule bends at the central atom, in the two directions that are orthogonal to its axis. In each mode, one should assume that the atoms get displaced so that the center of mass remains stationary and there is no rotation. The molecule then has f = 6n − 5 = 13 total energy-storing degrees of freedom (3 translational, 2 rotational, 8 vibrational). At high enough temperature, its molar heat capacity then should be cP,m = 7.5 R = 62.4 J⋅K−1⋅mol−1. For cyanogen and acetylene (n = 4) the same analysis yields f = 19 and predicts cP,m = 10.5 R = 87.3 J⋅K−1⋅mol−1. A molecule with n atoms that is rigid and not linear has 3 translation modes and 3 non-trivial rotation modes, hence only 3n − 6 deformation modes. It therefore has f = 3 + 3 + 2(3n − 6) = 6n − 6 energy-absorbing degrees of freedom (one less than a linear molecule with the same atom count). Water (n = 3) is bent in its non-strained state, therefore it is predicted to have f = 12 degrees of freedom. Methane (n = 5) is tridimensional, and the formula predicts f = 24. Ethane (n = 8) has 4 degrees of rotational freedom: two about axes that are perpendicular to the central bond, and two more because each methyl group can rotate independently about that bond, with negligible resistance. Therefore, the number of independent deformation modes is 3n − 7, which gives f = 3 + 4 + 2(3n − 7) = 6n − 7 = 41. The following table shows the experimental molar heat capacities at constant pressure cP,m of the above polyatomic gases at standard temperature (25 °C = 298 K), at 500 °C, and at 5000 °C, and the apparent number of degrees of freedom f* estimated by the formula f* = 2cP,m/R − 2. (The table itself is omitted in this extract; the footnote (*) marks a value measured at 3000 °C.) Specific heat of solids In most solids (but not all), the molecules have a fixed mean position and orientation, and therefore the only degrees of freedom available are the vibrations of the atoms. Thus the specific heat is proportional to the number of atoms (not molecules) per unit of mass, which is the Dulong–Petit law. Other contributions may come from magnetic and electronic degrees of freedom in solids, but these rarely make substantial contributions. Since each atom of the solid contributes three independent vibration modes, each counting as two degrees of freedom, the number of degrees of freedom in n atoms is 6n. Therefore, the heat capacity of a sample of a solid substance is expected to be 3RNa, or (24.94 J/K)Na, where Na is the number of moles of atoms in the sample, not molecules. Said another way, the atom-molar heat capacity of a solid substance is expected to be 3R = 24.94 J⋅K−1⋅mol−1, where "amol" denotes an amount of the solid that contains the Avogadro number of atoms.
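The degree-of-freedom counting just described is mechanical enough to script. A minimal Python sketch (an addition for illustration; the molecule list restates the text's own examples):

R = 8.31446  # J/(K*mol)

def dof(n_atoms, linear):
    """f = 6n - 5 for a linear molecule, 6n - 6 for a rigid non-linear one."""
    return 6 * n_atoms - 5 if linear else 6 * n_atoms - 6

for name, n, linear in [("N2O", 3, True), ("acetylene", 4, True),
                        ("water", 3, False), ("methane", 5, False)]:
    f = dof(n, linear)
    cP = (f + 2) * R / 2                 # high-temperature prediction
    print(name, f, round(cP, 1))         # e.g. N2O: 13, 62.4 J/(K*mol)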
It follows that, in molecular solids, the heat capacity per mole of molecules will usually be close to 3nR, where n is the number of atoms per molecule. Thus n atoms of a solid should in principle store twice as much energy as n atoms of a monatomic gas. One way to look at this result is to observe that the monatomic gas can only store energy as kinetic energy of the atoms, whereas the solid can store it also as potential energy of the bonds strained by the vibrations. The atom-molar heat capacity of a polyatomic gas approaches that of a solid as the number n of atoms per molecule increases. As in the case of gases, some of the vibration modes will be "frozen out" at low temperatures, especially in solids with light and tightly bound atoms, causing the atom-molar heat capacity to be less than this theoretical limit. Indeed, the atom-molar (or specific) heat capacity of a solid substance tends toward zero as the temperature approaches absolute zero. Dulong–Petit law As predicted by the above analysis, the heat capacity per mole of atoms, rather than per mole of molecules, is found to be remarkably constant for all solid substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong–Petit law, after its two discoverers. This discovery was an important argument in support of the atomic theory of matter. Indeed, for solid metallic chemical elements at room temperature, atom-molar heat capacities range from about 2.8 R to 3.4 R. Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium (2.0 R, only 66% of the theoretical value) and diamond (0.735 R, only 24%). Those conditions imply larger quantum vibrational energy spacing, thus many vibrational modes are "frozen out" at room temperature. Water ice close to the melting point, too, has an anomalously low heat capacity per atom (1.5 R, only 50% of the theoretical value). At the higher end of possible heat capacities, heat capacity may exceed 3R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories. Specific heat of solid elements Since the bulk density of a solid chemical element is strongly related to its molar mass, there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. This is due to a very approximate tendency of atoms of most elements to be about the same size, despite much wider variations in density and atomic weight. These two factors (constancy of atomic volume and constancy of mole-specific heat capacity) result in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal that has a density almost 36 times that of the metal lithium, but uranium's specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium's.
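The uranium/lithium comparison above can be reproduced with rough handbook values (the specific heats and densities below are approximate assumptions, not figures from the article):

li_c, li_rho = 3.58, 0.534    # lithium: specific heat J/(g*K), density g/cm^3 (approximate)
u_c, u_rho = 0.116, 19.05     # uranium: specific heat J/(g*K), density g/cm^3 (approximate)

print(round(u_rho / li_rho, 1))   # ~35.7: "almost 36 times" the density
li_vol = li_c * li_rho            # volumetric heat capacity, J/(cm^3*K)
u_vol = u_c * u_rho
# Uranium's volumetric heat capacity comes out roughly 16-18% larger than
# lithium's, consistent with the figure quoted in the text given rounded inputs:
print(round(li_vol, 2), round(u_vol, 2), round(100 * (u_vol / li_vol - 1)))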
However, the average atomic volume in solid elements is not quite constant, so there are deviations from this principle. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratio of the two substances closely follows the ratio of their molar volumes (the ratio of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes, in this case, is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, rather than being of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior. Effect of impurities Sometimes small impurity concentrations can greatly affect the specific heat, for example in semiconducting ferromagnetic alloys. Specific heat of liquids A general theory of the heat capacity of liquids has not yet been achieved, and is still an active area of research. It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse, phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays, confirming an intuition of Yakov Frenkel, have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency. Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids. Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than liquids: for example the heat capacity of liquid water is twice that of ice at nearly the same temperature, and is again close to the 3R per mole of atoms of the Dulong–Petit theoretical maximum. Amorphous materials can be considered a type of liquid at temperatures above the glass transition temperature. Below the glass transition temperature amorphous materials are in the solid (glassy) state. The specific heat has characteristic discontinuities at the glass transition temperature which are caused by the absence in the glassy state of percolating clusters made of broken bonds (configurons) that are present only in the liquid phase. Above the glass transition temperature percolating clusters formed by broken bonds enable a more floppy structure and hence a larger degree of freedom for atomic motion, which results in a higher heat capacity of liquids. Below the glass transition temperature there are no extended clusters of broken bonds and the heat capacity is smaller because the solid-state (glassy) structure of amorphous material is more rigid. The discontinuities in the heat capacity are typically used to detect the glass transition temperature where a supercooled liquid transforms to a glass. Effect of hydrogen bonds Hydrogen-containing polar molecules like ethanol, ammonia, and water have powerful intermolecular hydrogen bonds when in their liquid phase.
These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water). See also Quantum statistical mechanics Heat capacity ratio Statistical mechanics Thermodynamic equations Thermodynamic databases for pure substances Heat equation Heat transfer coefficient Heat of mixing Latent heat Material properties (thermodynamics) Joback method (Estimation of heat capacities) Specific heat of melting (Enthalpy of fusion) Specific heat of vaporization (Enthalpy of vaporization) Volumetric heat capacity Thermal mass R-value (insulation) Storage heater Frenkel line References Physical quantities Thermodynamic properties Molar quantities
Molar heat capacity
Physics,Chemistry,Mathematics
6,416
12,424,559
https://en.wikipedia.org/wiki/Guadeloupe%20parakeet
The Guadeloupe parakeet (Psittacara labati) is a hypothetical species of parrot that would have been endemic to Guadeloupe. Description Jean-Baptiste Labat described a population of small parrots living on Guadeloupe. Taxonomy The birds were later named Conurus labati, and are now called the Guadeloupe parakeet. They have been postulated to be a separate species on the basis of little evidence. There are no specimens or remains of the extinct parrots. Their taxonomy may never be fully elucidated, and so their postulated status as a separate species is hypothetical. The species is presumed to have gone extinct in the late 18th century, if it did indeed exist. References Aratinga Birds described in 1905 Bird extinctions since 1500 Taxa named by Walter Rothschild † Parakeets Controversial parrot taxa Extinct birds of the Caribbean Taxonomy articles created by Polbot Hypothetical species
Guadeloupe parakeet
Biology
186
62,318,997
https://en.wikipedia.org/wiki/H3K79me2
H3K79me2 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the di-methylation at the 79th lysine residue of the histone H3 protein. H3K79me2 is detected in the transcribed regions of active genes. Nomenclature H3K79me2 indicates dimethylation of lysine 79 on histone H3 protein subunit: Lysine methylation Progressive methylation of a lysine residue yields mono-, di- and tri-methylated forms (the accompanying diagram is omitted in this extract); the di-methylated form is the modification present in H3K79me2. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K79me2. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, and an emphasis was placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation. Three forms of H3K79 methylation (H3K79me1, H3K79me2, H3K79me3) are catalyzed by DOT1 in yeast or DOT1L in mammals. H3K79 methylation participates in the DNA damage response and has multiple roles in nucleotide excision repair and sister chromatid recombinational repair. H3K79 dimethylation has been detected in the transcribed regions of active genes. Methods The histone mark H3K79me2 can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated.
It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone code Histone methylation Histone methyltransferase Methyllysine References Epigenetics Post-translational modification
H3K79me2
Chemistry
1,010
33,093,747
https://en.wikipedia.org/wiki/LEGO%20%28proof%20assistant%29
LEGO is a proof assistant developed by Randy Pollack at the University of Edinburgh. It implements several type theories: the Edinburgh Logical Framework (LF), the Calculus of Constructions (CoC), the Generalized Calculus of Constructions (GCC) and the Unified Theory of Dependent Types (UTT). References External links Proof assistants Dependently typed languages
LEGO (proof assistant)
Mathematics,Technology
72
33,509,906
https://en.wikipedia.org/wiki/Superparamagnetic%20iron%E2%80%93platinum%20particles
Superparamagnetic iron platinum particles (SIPPs) are nanoparticles that have been reported as magnetic resonance imaging contrast agents. These are, however, investigational agents which have not yet been tried in humans. References Magnetic resonance imaging MRI contrast agents
Superparamagnetic iron–platinum particles
Chemistry
54
181,718
https://en.wikipedia.org/wiki/Mimer%20SQL
Mimer SQL is a proprietary SQL-based relational database management system produced by the Swedish company Mimer Information Technology AB (Mimer AB), formerly known as Upright Database Technology AB. It was originally developed as a research project at Uppsala University, Uppsala, Sweden in the 1970s before being developed into a commercial product. The database has been deployed in a wide range of application situations, including the National Health Service Pulse blood transfusion service in the UK, the Volvo Cars production line in Sweden and automotive dealers in Australia. It has sometimes been one of the limited options available in real-time critical applications and resource-restricted situations such as mobile devices. History Mimer SQL originated from a project at the ITC service center supporting Uppsala University and some other institutions to leverage the relational database capabilities proposed by Codd and others. The initial release in about 1975 was designated RAPID and was written in IBM assembler language. The name was changed to Mimer in 1977 to avoid a trademark issue. Other universities were interested in running the project on a number of machine architectures, and Mimer was rewritten in Fortran to achieve portability. Further models of Mimer were developed, with Mimer/QL implementing the QUEL query language. The emergence of SQL in the 1980s as the standard query language resulted in Mimer's developers choosing to adopt it, with the product becoming Mimer SQL. In 1984 Mimer was transferred to the newly established company Mimer Information Systems. Versions The Mimer SQL database server is currently supported on the main platforms of Windows, MacOS, Linux, and OpenVMS (Itanium and x86-64). Previous versions of the database engine were supported on other operating systems including Solaris, AIX, HP-UX, Tru64, SCO and DNIX. Versions of Mimer SQL are available for download and free for development. The Enterprise product is a standards-based SQL database server based upon the Mimer SQL Experience database server. This product is highly configurable, and components can be added, removed or replaced in the foundation product to achieve a derived product suitable for embedded, real-time or small-footprint applications. The Mimer SQL Realtime database server is a replacement database engine specifically designed for applications where real-time aspects are paramount. This is sometimes marketed as the Automotive approach. For resource-limited environments the Mimer SQL Mobile database server is a replacement runtime environment without a SQL compiler. This is used for portable and certain custom devices and is termed the Mobile approach. Custom embedded approaches can be applied to multiple hardware and operating system combinations. These options enable Mimer SQL to be deployed to a wide variety of additional target platforms, such as Android, and real-time operating systems including VxWorks. The database is available in real-time, embedded and automotive specialist versions requiring no maintenance, with the intention of making the product suitable for mission-critical automotive, process automation and telecommunication systems. Features Mimer SQL provides support for multiple database application programming interfaces (APIs): ODBC, JDBC, ADO.NET, Embedded SQL (C/C++, Cobol and Fortran), Module SQL (C/C++, Cobol, Fortran and Pascal), and the native APIs Mimer SQL C API, Mimer SQL Real-Time API, and Mimer SQL Micro C API. MimerPy is an adapter for Mimer SQL in Python.
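For instance, a minimal MimerPy sketch (assuming MimerPy exposes the standard Python DB-API 2.0 interface it is documented to implement, with the qmark parameter style; the data source name and credentials below are placeholders, not real values):

import mimerpy

# Connect to a local Mimer SQL data source (placeholder credentials).
con = mimerpy.connect(dsn="exampledb", user="SYSADM", password="secret")
cur = con.cursor()
cur.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name NVARCHAR(64))")
cur.execute("INSERT INTO parts VALUES (?, ?)", (1, "gear"))
con.commit()
cur.execute("SELECT name FROM parts WHERE id = ?", (1,))
print(cur.fetchone())
con.close()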
The Mimer Provider Manager is an ADO.NET provider dispatcher that uses different plugins to access different underlying ADO.NET providers. The Mimer Provider Manager makes it possible to write database-independent ADO.NET applications. Mimer SQL mainly uses optimistic concurrency control (OCC) to manage concurrent transactions. Mimer SQL is assigned port 1360 in the Internet Assigned Numbers Authority (IANA) registry. Etymology The name "Mimer" is taken from Norse mythology, where Mimer was the giant guarding the well of wisdom, also known as "Mímisbrunnr". Metaphorically, this is what a database system does when managing data. See also Werner Schneider, the professor who started the development section for the relational database that became Mimer SQL (Swedish article) References External links Mimer SQL Official developer website Proprietary database management systems Relational database management systems Real-time databases Embedded databases OpenVMS software
Mimer SQL
Technology
880
46,230,182
https://en.wikipedia.org/wiki/USA-260
USA-260, also known as GPS IIF-9, GPS SVN-71 and NAVSTAR 73, is an American navigation satellite which forms part of the Global Positioning System. It was the ninth of twelve Block IIF satellites to be launched. Launch Built by Boeing and launched by United Launch Alliance, USA-260 was launched at 18:36 UTC on 25 March 2015, atop a Delta IV carrier rocket, flight number D370, flying in the Medium+(4,2) configuration. The launch took place from Space Launch Complex 37B at the Cape Canaveral Air Force Station, and placed USA-260 directly into medium Earth orbit. Orbit On 25 March 2015, USA-260 was in an orbit with a perigee of , an apogee of , a period of 729.14 minutes, and 55.00 degrees of inclination to the equator. It is used to broadcast the PRN 26 signal, and operates in slot 5 of plane B of the GPS constellation. The satellite has a design life of 15 years and a mass of . It is currently in service following commissioning on 20 April 2015. References Spacecraft launched in 2015 GPS satellites USA satellites Spacecraft launched by Delta IV rockets
USA-260
Technology
248
6,180,762
https://en.wikipedia.org/wiki/Margent
Margent is a vertical arrangement of flowers, leaves or hanging vines used as a decorative ornament in architecture and furniture design in the 16th, 17th and 18th centuries. This motif was developed as a complement to other decorative ornaments, hanging as "drops" at the ends of a festoon or swag. It was also used to accentuate the vertical lines of window frames, and centered in ornamental panels. The term margent is an archaic word meaning "margin": a border or edge, especially handwriting on the edges of a printed book (marginalia). It is related to the word "marches", the area between two regions. Shakespeare uses the word in Act II, Scene I of A Midsummer Night's Dream: These are the forgeries of jealousy And never, since the middle summer's spring, Met we on hill, in dale, forest or mead, By paved fountain or by rushy brook, Or in the beached margent of the sea, To dance our ringlets to the whistling wind, But with thy brawls thou hast disturb'd our sport. —Titania, the queen of the fairies Beached Margent of the Sea is also the name of a painting by the Canadian artist F.M. Bell-Smith (1846–1923). Gallery See also Marginalia References Volume II Ornaments (architecture) Types of sculpture History of furniture Architectural elements Visual motifs Ornaments
Margent
Mathematics,Technology,Engineering
283
8,472,500
https://en.wikipedia.org/wiki/CCL20
Chemokine (C-C motif) ligand 20 (CCL20), also known as liver activation regulated chemokine (LARC) or Macrophage Inflammatory Protein-3 alpha (MIP3A), is a small cytokine belonging to the CC chemokine family. It is strongly chemotactic for lymphocytes and weakly attracts neutrophils. CCL20 is implicated in the formation and function of mucosal lymphoid tissues via chemoattraction of lymphocytes and dendritic cells towards the epithelial cells surrounding these tissues. CCL20 elicits its effects on its target cells by binding and activating the chemokine receptor CCR6. Gene expression of CCL20 can be induced by microbial factors such as lipopolysaccharide (LPS) and inflammatory cytokines such as tumor necrosis factor and interferon-γ, and down-regulated by IL-10. CCL20 is expressed in several tissues, with highest expression observed in peripheral blood lymphocytes, lymph nodes, liver, appendix, and fetal lung, and lower levels in thymus, testis, prostate and gut. The gene for CCL20 (SCYA20) is located on chromosome 2 in humans. Recent research in an animal model of multiple sclerosis known as experimental autoimmune encephalitis (EAE) demonstrated that regional neural activation can create "gates" for pathogenic CD4+ T cells to enter the CNS by increasing CCL20 expression, especially at the fifth lumbar (L5) level of the spinal cord. Sensory nerve stimulation, elicited by use of the leg muscles or by electrical stimulation as in Arima et al. (2012), activates sympathetic neurons whose axons run through the dorsal root ganglia containing the cell bodies of the stimulated afferent sensory nerve. Sympathetic neuronal activity activates the IL-6 amplifier, resulting in increased regional CCL20 expression and subsequent pathogenic CD4+ T cell accumulation at the same spinal cord level. CCL20 expression was observed to be dependent on IL-6 amplifier activation, which is in turn dependent on NF-κB and STAT3 activation. This research provides evidence for a critical role for CCL20 in autoimmune pathogenesis of the central nervous system. References Further reading External links Cytokines
CCL20
Chemistry
484
9,541
https://en.wikipedia.org/wiki/Design%20of%20experiments
The design of experiments, also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation. In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment. Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity. Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience. History Statistical experiments, following Charles S. Peirce A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics. Randomized experiments Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s. Optimal designs for regression models Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952. Fisher's principles A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research. Comparison In some fields of study it is not possible to have independent measurements to a traceable metrology standard. Comparisons between treatments are then much more valuable and are usually preferable, and treatments are often compared against a scientific control or traditional treatment that acts as a baseline. Randomization Random assignment is the process of assigning individuals at random to groups, or to different conditions within a group, in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which would make effects due to factors other than the treatment appear to result from the treatment. The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things. Statistical replication Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. 
However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible. Blocking Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study. Orthogonality Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information from the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts. Multifactorial experiments Multifactorial experiments are used instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test. Example This example of a designed experiment is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs. Weights of eight objects are measured using a pan balance and a set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, θ2, ..., θ8. We consider two different experiments. In the first, weigh each object in one pan, with the other pan empty; let Xi be the measured weight of the i-th object, for i = 1, ..., 8. In the second, do the eight weighings according to the following schedule, a weighing matrix:
weighing 1: objects 1, 2, 3, 4, 5, 6, 7, 8 in the left pan; right pan empty
weighing 2: objects 1, 2, 3, 8 in the left pan; objects 4, 5, 6, 7 in the right pan
weighing 3: objects 1, 4, 5, 8 in the left pan; objects 2, 3, 6, 7 in the right pan
weighing 4: objects 1, 6, 7, 8 in the left pan; objects 2, 3, 4, 5 in the right pan
weighing 5: objects 2, 4, 6, 8 in the left pan; objects 1, 3, 5, 7 in the right pan
weighing 6: objects 2, 5, 7, 8 in the left pan; objects 1, 3, 4, 6 in the right pan
weighing 7: objects 3, 4, 7, 8 in the left pan; objects 1, 2, 5, 6 in the right pan
weighing 8: objects 3, 5, 6, 8 in the left pan; objects 1, 2, 4, 7 in the right pan
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is θ̂1 = (Y1 + Y2 + Y3 + Y4 − Y5 − Y6 − Y7 − Y8)/8. Similar estimates can be found for the weights of the other items; for example, θ̂2 = (Y1 + Y2 − Y3 − Y4 + Y5 + Y6 − Y7 − Y8)/8. The question of design of experiments is: which experiment is better? The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that each estimate obtained in the second experiment is a function of all eight measurements, whereas in the first experiment each estimate depends on a single measurement. 
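The eight-fold precision gain can be checked numerically. The following Python sketch is not part of the original article; it simulates both experiments, using scipy's Sylvester-type Hadamard matrix as the ±1 left/right-pan schedule (statistically equivalent to the schedule above), and the seed, true weights, and trial count are arbitrary illustrative assumptions.

    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(0)
    theta = rng.uniform(10.0, 20.0, size=8)   # true weights (arbitrary)
    sigma = 1.0                               # standard deviation of each weighing error
    n = 100_000                               # simulated repetitions of each experiment

    # Experiment 1: weigh each object alone; the reading itself is the estimate.
    est1 = theta + sigma * rng.standard_normal((n, 8))

    # Experiment 2: eight difference weighings given by a Hadamard matrix H,
    # where +1 means "left pan" and -1 means "right pan" for each object.
    H = hadamard(8)                           # satisfies H.T @ H = 8 * I
    Y = theta @ H.T + sigma * rng.standard_normal((n, 8))
    est2 = Y @ H / 8                          # least-squares recovery, since (H.T)^(-1) = H / 8

    print(est1.var(axis=0))                   # each entry close to sigma^2 = 1
    print(est2.var(axis=0))                   # each entry close to sigma^2 / 8 = 0.125

The printed variances come out near 1 and 0.125 respectively, matching the σ² versus σ²/8 comparison in the text; the orthogonality of the Hadamard columns is what lets all eight weights be estimated from the same eight measurements without loss of precision.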
Many problems of the design of experiments involve combinatorial designs, as in this example and others. Avoiding false positives False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention. Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance. P-hacking can be prevented by preregistering research, in which researchers must send their data analysis plan to the journal in which they wish to publish before they even start their data collection, so that no data manipulation is possible. Another way to prevent this is taking the double-blind design to the data-analysis phase, making the study triple-blind, where the data are sent to a data analyst unrelated to the research who scrambles the data so that there is no way to know which group participants belong to before any are potentially removed as outliers. Clear and complete documentation of the experimental methodology is also important in order to support replication of results. Discussion topics when setting up an experimental design An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section: How many factors does the design have, and are the levels of these factors fixed or random? Are control conditions needed, and what should they be? Manipulation checks: did the manipulation really work? What are the background variables? What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power? What is the relevance of interactions between factors? What is the influence of delayed effects of substantive factors on outcomes? How do response shifts affect self-report measures? How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests? What about using a proxy pretest? Are there confounding variables? Should the client/patient, researcher or even the analyst of the data be blind to conditions? What is the feasibility of subsequent application of different conditions to the same units? How many of each control and noise factors should be taken into account? The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group, without the interventional element. 
Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used. Causal attributions In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal claims when their design does not allow for them. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design. Statistical control It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned. One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time. Experimental designs after Fisher Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. 
Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics. As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space. Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn. The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, a field also known as system identification. Human participant constraints Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments. In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". 
(p 393) See also Adversarial collaboration Bayesian experimental design Block design Box–Behnken design Central composite design Clinical trial Clinical study design Computer experiment Control variable Controlling for a variable Experimetrics (econometrics-related experiments) Factor analysis Fractional factorial design Glossary of experimental design Grey box model Industrial engineering Instrument effect Law of large numbers Manipulation checks Multifactor design of experiments software One-factor-at-a-time method Optimal design Plackett–Burman design Probabilistic design Protocol (natural sciences) Quasi-experimental design Randomized block design Randomized controlled trial Research design Robust parameter design Sample size determination Supersaturated design Royal Commission on Animal Magnetism Survey sampling System identification Taguchi methods References Sources Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers: (1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint. (1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint. (1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217. Internet Archive Eprint. (1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint. (1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company.) External links A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST Experiments Industrial engineering Metascience Quantitative research Statistical process control Statistical theory Systems engineering Mathematics in medicine
Design of experiments
Mathematics,Engineering
4,271
694,424
https://en.wikipedia.org/wiki/Chiromyiformes
Chiromyiformes is an infraorder of strepsirrhine primates that includes the aye-aye from Madagascar and its extinct relatives. Classification The aye-aye is sometimes classified as a member of Lemuriformes, but others treat Chiromyiformes as a separate infraorder, based on its very reduced dental formula. Gunnell et al. (2018) reclassified the putative bat Propotto as a close relative of the aye-aye, as well as assigning the problematic strepsirrhine primate Plesiopithecus to Chiromyiformes. Evolution The molecular clock puts the divergence of Chiromyiformes and Lemuriformes at 50–49 million years ago. References External links Primate Behavior: Aye-Aye ARKive – images and movies of the aye-aye (Daubentonia madagascariensis) Primate Info Net Daubentonia madagascariensis Factsheet U.S. Fish & Wildlife Service Species Profile EDGE species Lemurs Mammals of Madagascar
Chiromyiformes
Biology
214
162,498
https://en.wikipedia.org/wiki/Nuclear%20meltdown
A nuclear meltdown (core meltdown, core melt accident, meltdown or partial core melt) is a severe nuclear reactor accident that results in core damage from overheating. The term nuclear meltdown is not officially defined by the International Atomic Energy Agency or by the United States Nuclear Regulatory Commission. It has been defined to mean the accidental melting of the core of a nuclear reactor, however, and in common usage it refers to the core's complete or partial collapse. A core meltdown accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate, or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as caesium-137, krypton-85, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel–coolant interactions, hydrogen explosions, or steam hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential for radioactive materials to breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby. Causes Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The cause may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), or an uncontrolled power excursion. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes. In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. 
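The role of decay heat mentioned above can be made concrete with the textbook Way–Wigner approximation for fission-product decay power. The Python sketch below is illustrative only and is not drawn from this article; the one-year operating history and the sampled times are arbitrary assumptions.

    # Way-Wigner approximation: fraction of full thermal power still being
    # produced t seconds after shutdown, for a core run at steady power for T seconds.
    def decay_heat_fraction(t: float, T: float) -> float:
        return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

    T = 365 * 24 * 3600.0  # assumed one year of steady operation before shutdown
    for t in (1.0, 60.0, 3600.0, 86400.0):
        print(f"{t:>8.0f} s after shutdown: {100 * decay_heat_fraction(t, T):.2f}% of full power")
    # Output runs from roughly 6% at one second to about 0.5% after a day;
    # for a core of a few GW (thermal), even 1% is tens of MW of heat to remove.

This is why cooling must continue after shutdown: the chain reaction can be stopped in seconds, but megawatt-scale decay heat persists for days.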
In a loss-of-forced-circulation accident, a gas-cooled reactor's circulators (generally motor- or steam-driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized. In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases, this may reduce the heat transfer efficiency (when using an inert gas as a coolant), and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, because decay heat locally heats the "steam bubble", the pressure required to collapse it may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the emergency core cooling system may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; as long as at least one gas circulator is available, however, the fuel will be kept cool. Light-water reactors (LWRs) Before the core of a light-water nuclear reactor can be damaged, two precursor events must have already occurred: A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up. Failure of the emergency core cooling system (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them. The Three Mile Island accident was a compounded group of emergencies that led to core damage. What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident, the emergency cooling system was likewise manually shut down several minutes after it started. 
If such a limiting fault were to occur, and a complete failure of all ECCS divisions were to occur, both Kuan, et al and Haskin, et al describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"): Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays the core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup." Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)." Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach 1,100 K. At this temperature, the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However, complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression." Rapid oxidation – "The next stage of core damage, beginning at approximately 1,500 K, is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above 1,500 K, the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam." Debris bed formation – "When the temperature in the core reaches about 1,700 K, molten control materials (1,6) will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above 1,870 K, the core temperature may escalate in a few minutes to the melting point of zircaloy [2,150 K] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 (1,7) would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed." (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. 
The release of molten core materials into the water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum." At the point at which the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"), Haskin, et al relate that the possibility exists for an incident called a fuel–coolant interaction (FCI) to substantially stress or breach the primary pressure boundary. This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of 2,200 to 3,200 K, its fall into liquid water at 550 to 600 K may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place. The American Nuclear Society has commented on the TMI-2 accident, that despite melting of about one-third of the fuel, the reactor vessel itself maintained its integrity and contained the damaged fuel. Breach of the primary pressure boundary There are several possibilities as to how the primary pressure boundary could be breached by corium. Steam explosion As previously described, FCI could lead to an overpressure event resulting in failure of the RPV, and thus of the primary pressure boundary. Haskin et al report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed. Pressurized melt ejection (PME) It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. 
This mode of corium ejection may lead to direct containment heating (DCH). Severe accident ex-vessel interactions and challenges to containment Haskin et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents. Overpressure Dynamic pressure (shockwaves) Internal missiles External missiles (not applicable to core melt accidents) Meltthrough Bypass Standard failure modes If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur. In modern Russian plants, there is a "core catching device" in the bottom of the containment building. The melted core is supposed to hit a thick layer of a "sacrificial metal" that would melt, dilute the core and increase the heat conductivity, and finally the diluted core can be cooled down by water circulating in the floor. There has never been any full-scale testing of this device, however. In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions. In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One debated positive effect of the corium falling into water is that it is cooled and returns to a solid state. Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature. These procedures are intended to prevent release of radioactivity. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release. In the Fukushima incident, however, this design failed. Despite the efforts of the operators at the Fukushima Daiichi nuclear power plant to maintain control, the reactor cores in units 1–3 overheated, the nuclear fuel melted and the three containment vessels were breached. Hydrogen was released from the reactor pressure vessels, leading to explosions inside the reactor buildings in units 1, 3 and 4 that damaged structures and equipment and injured personnel. Radionuclides were released from the plant to the atmosphere and were deposited on land and on the ocean. There were also direct releases into the sea. 
As the natural decay heat of the corium eventually reduces to an equilibrium with convection and conduction to the containment walls, it becomes cool enough for water spray systems to be shut down and the reactor to be put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure. After perhaps a decade for fission products to decay, the containment can be reopened for decontamination and demolition. Another scenario sees a buildup of potentially explosive hydrogen, but passive autocatalytic recombiners inside the containment are designed to prevent this. In Fukushima, the containments were filled with inert nitrogen, which prevented hydrogen from burning; the hydrogen leaked from the containment to the reactor building, however, where it mixed with air and exploded. During the 1979 Three Mile Island accident, a hydrogen bubble formed in the pressure vessel dome. There were initial concerns that the hydrogen might ignite and damage the pressure vessel or even the containment building, but it was soon realized that lack of oxygen prevented burning or explosion. Speculative failure modes One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment. Another theory, called an "alpha mode" failure by the 1975 Rasmussen (WASH-1400) study, asserted that steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by better-based newer studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study; see the Disclaimer in NUREG-1150.) By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss-of-coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn through of the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment. The hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. Some fear that a molten reactor core could penetrate the reactor pressure vessel and containment structure and burn downwards to the level of the groundwater. It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the loss-of-fluid-test reactor described in Test Area North's fact sheet). The Three Mile Island accident provided real-life experience with an actual molten core: the corium failed to melt through the reactor pressure vessel after over six hours of exposure due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. 
Other reactor types Other types of reactors have different capabilities and safety profiles from those of the LWR. Advanced varieties of several of these reactors have the potential to be inherently safe. CANDU reactors CANDU reactors, a Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank (or calandria vault). These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). Other failure modes aside from fuel melt will probably occur in a CANDU rather than a meltdown, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well. Gas-cooled reactors One type of Western reactor, known as the advanced gas-cooled reactor (or AGR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring. Lead and lead-bismuth-cooled reactors Recently, heavy liquid metals, such as lead or lead-bismuth, have been proposed as reactor coolants. Because of the similar densities of the fuel and the HLM, an inherent passive safety self-removal feedback mechanism due to buoyancy forces develops, which propels the packed bed away from the wall when a certain temperature threshold is attained and the bed becomes lighter than the surrounding coolant, thus preventing temperatures that could jeopardize the vessel's structural integrity and also reducing the recriticality potential by limiting the allowable bed depth. Experimental or conceptual designs Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety. The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built. Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used. The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times. The liquid fluoride thorium reactor is designed to naturally have its core in a molten state, as a eutectic mix of thorium and fluoride salts. 
As such, a molten core is reflective of the normal and safe state of operation of this reactor type. In the event the core overheats, a metal plug will melt, and the molten salt core will drain into tanks where it will cool in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged. Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe. Soviet Union–designed reactors RBMKs Soviet-designed RBMK reactors (Reaktor Bolshoy Moshchnosti Kanalnyy), found only in Russia and other post-Soviet states and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and have emergency cooling systems (ECCS) considered grossly inadequate by Western safety standards. RBMK emergency core cooling systems only have one division and little redundancy within that division. Though the large core of the RBMK is less energy-dense than the smaller Western LWR core, it is harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen at high temperatures, graphite forms synthesis gas, and via the water-gas shift reaction the resultant hydrogen burns explosively. If oxygen contacts hot graphite, it will burn. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity. The RBMK tends towards dangerous power fluctuations. Control rods can become stuck if the reactor suddenly heats up while they are moving. Xenon-135, a neutron-absorbing fission product, has a tendency to build up in the core and burn off unpredictably in the event of low-power operation. This can lead to inaccurate neutronic and thermal power ratings. The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds; Western reactors take 1 to 2.5 seconds. Western aid has been given to provide certain real-time safety monitoring capacities to the operating staff. Whether this extends to automatic initiation of emergency cooling is not known. Training has been provided in safety assessment from Western sources, and Russian reactors have evolved in response to the weaknesses of the RBMK. Nonetheless, numerous RBMKs still operate. Though it might be possible to stop a loss-of-coolant event prior to core damage occurring, any core damage incidents will probably allow massive release of radioactive materials. Upon entering the EU in 2004, Lithuania was required to phase out its two RBMKs at Ignalina NPP, deemed totally incompatible with European nuclear safety standards. The country planned to replace them with safer reactors at Visaginas Nuclear Power Plant. 
MKER The MKER is a modern Russian-engineered channel-type reactor that is a distant descendant of the RBMK, designed to optimize the benefits and fix the serious flaws of the original. Several unique features of the MKER's design make it a credible and interesting option. The reactor remains online during refueling, requiring outages only occasionally for maintenance, with uptime of up to 97–99%. The moderator design allows the use of less-enriched fuels, with a high burnup rate. Neutronics characteristics have been optimized for civilian use, for superior fuel fertilization and recycling, and graphite moderation achieves better neutronics than is possible with light water moderation. The lower power density of the core greatly enhances thermal regulation. An array of improvements makes the MKER's safety comparable to Western Generation III reactors: improved quality of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast-acting rapid shutdown system. The passive emergency cooling system uses reliable natural phenomena to cool the core, rather than depending on motor-driven pumps. The containment structure is designed to withstand severe stress and pressure. In the event of a pipe break of a cooling-water channel, the channel can be isolated from the water supply, preventing a general failure. The greatly enhanced safety and unique benefits of the MKER design enhance its competitiveness in countries considering full fuel-cycle options for nuclear development. VVER The VVER is a pressurized light-water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well-understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (starting from the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems. Even with these positive developments, however, certain older VVER models raise a high level of concern, especially the VVER-440 V230. The VVER-440 V230 has no containment building, but only a structure capable of confining steam surrounding the RPV. This is a volume of thin steel, grossly insufficient by Western standards. Has no ECCS. Can survive at most one small pipe break (there are many larger pipes within the design). Has six steam generator loops, adding unnecessary complexity. Apparently steam generator loops can be isolated, however, in the event that a break occurs in one of these loops. The plant can remain operating with one isolated loop—a feature found in few Western reactors. The interior of the pressure vessel is plain alloy steel, exposed to water, which can lead to rust. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility—built, no doubt, to deal with the enormous volume of rust within the primary coolant loop—the product of the slow corrosion of the RPV. This model is viewed as having inadequate process control systems. 
Bulgaria had a number of VVER-440 V230 models, but they opted to shut them down upon joining the EU rather than backfit them, and are instead building new VVER-1000 models. Many non-EU states maintain V230 models, including Russia and the CIS. Many of these states, rather than abandon the reactors entirely, have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced. The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models operated by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention—but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety. During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiple redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440s in the world. The VVER-1000 type has a definitely adequate Western-style containment, the ECCS is sufficient by Western standards, and instrumentation and control has been markedly improved to Western 1970s-era levels. Effects The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur. In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radioactivity release or danger to the public. Reactor design Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios. 
Liquid-fueled reactors can be stopped by draining the fuel into tankage, which not only prevents further fission but draws decay heat away statically, and by drawing off the fission products (which are the source of post-shutdown heating) incrementally. The ideal is to have reactors that fail safe through physics rather than through redundant safety systems or human intervention. Certain fast breeder reactor designs may be more susceptible to meltdown than other reactor types, due to their larger quantity of fissile material and the higher neutron flux inside the reactor core. Other reactor designs, such as the Integral Fast Reactor model EBR-II, had been explicitly engineered to be meltdown-immune. It was tested in April 1986, just before the Chernobyl failure, to simulate loss of coolant pumping power, by switching off the power to the primary pumps. As designed, it shut itself down, in about 300 seconds, as soon as the temperature rose to a set point above that required for proper operation. This was well below the boiling point of the unpressurised liquid metal coolant, which had entirely sufficient cooling ability to deal with the heat of fission product radioactivity by simple convection. The second test, deliberate shut-off of the secondary coolant loop that supplies the generators, caused the primary circuit to undergo the same safe shutdown. This test simulated the case of a water-cooled reactor losing its steam turbine circuit, perhaps by a leak. United States The Westinghouse TR-2 suffered partial core damage in 1960 when a likely fuel cladding defect caused one fuel element (out of over 200) to overheat and melt. The reactor at EBR-I suffered a partial meltdown during a coolant flow test on 29 November 1955. The Sodium Reactor Experiment in Santa Susana Field Laboratory was an experimental nuclear reactor that operated from 1957 to 1964 and was the first commercial power plant in the world to experience a core meltdown, in July 1959. The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward. The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969. The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt", led to the total dismantlement and the permanent shutdown of reactor 2. Unit 1 continued to operate until 2019. Soviet Union A number of Soviet Navy nuclear submarines experienced nuclear meltdowns, including K-27, K-140, and K-431. Reactor 4 of Chernobyl experienced a full reactor meltdown after a failed safety test. Japan During the Fukushima Daiichi nuclear disaster following the earthquake and tsunami in March 2011, three of the power plant's six reactors suffered meltdowns. Most of the fuel in reactor No. 1 melted. Switzerland The Lucens reactor, Switzerland, in 1969. 
Canada NRX (military), Ontario, Canada, in 1952 United Kingdom Windscale (military), Sellafield, England, in 1957 (see Windscale fire) Chapelcross nuclear power station (civilian), Scotland, in 1967 France Saint-Laurent Nuclear Power Plant (civilian), France, in 1969 China syndrome The China syndrome (loss-of-coolant accident) is a nuclear reactor operations accident characterized by the severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, then (figuratively) through the crust and body of the Earth until reaching the opposite end, presumed to be in "China". While the antipodes of China include Argentina with its Atucha Nuclear Power Plant the phrasing is metaphorical; there is no way a core could penetrate the several-kilometer thickness of the Earth's crust, and even if it did melt to the center of the Earth, it would not travel back upwards against the pull of gravity. Moreover, any tunnel behind the material would be closed by immense lithostatic pressure. History The system design of the nuclear power plants built in the late 1960s raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency core cooling system to cope with the effects of a loss of coolant accident and the consequent meltdown of the fuel core. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979). The real scare, however, came from a quote in the 1979 film The China Syndrome, which stated, "It melts right down through the bottom of the plant—theoretically to China, but of course, as soon as it hits ground water, it blasts into the atmosphere and sends out clouds of radioactivity. The number of people killed would depend on which way the wind was blowing, rendering an area the size of Pennsylvania permanently uninhabitable." The actual threat of this was coincidentally tested just 12 days after the release of the film when a meltdown at Pennsylvania's Three Mile Island Plant 2 (TMI-2) created a molten core that moved toward "China" before the core froze at the bottom of the reactor pressure vessel. Thus, the TMI-2 reactor fuel and fission products breached the fuel rods, but the melted core itself did not break the containment of the reactor vessel. A similar concern arose during the Chernobyl disaster. After the reactor was destroyed, a liquid corium mass from the melting core began to breach the concrete floor of the reactor vessel, which was situated above the bubbler pool (a large water reservoir for emergency pumps and to contain any steam pipe rupture). There was concern that a steam explosion would have occurred if the hot corium made contact with the water, resulting in more radioactive materials being released into the air. Due to damages from the accident, three station workers manually operated the valves necessary to drain this pool. 
However, this concern proved to be unfounded: unknown to those at the time, the corium had already contacted the reservoir before it could be drained, and instead of creating a steam explosion it cooled rapidly and harmlessly into a light-brown ceramic pumice that floated on the water. See also Behavior of nuclear fuel during a reactor accident Comparison of Chernobyl and other radioactivity releases Effects of the Chernobyl disaster High-level radioactive waste management International Nuclear Event Scale List of civilian nuclear accidents Lists of nuclear disasters and radioactive incidents Nuclear safety and security Nuclear power Nuclear power debate Scram or SCRAM, an emergency shutdown of a nuclear reactor Notes References External links Annotated bibliography on civilian nuclear accidents from the Alsos Digital Library for Nuclear Issues Partial Fuel Meltdown Events Nuclear reactor safety
Nuclear meltdown
Technology
8,568
1,460,107
https://en.wikipedia.org/wiki/Crossbar%20latch
The Crossbar latch is a technology published by Phillip Kuekes of HP Labs in 2001 and granted a US patent in 2003, with the goal of eventually replacing transistors in various applications. This would enable the creation of integrated circuits composed solely of memristors, which, according to the patent, might be easier and less expensive to create. In 2005, Phillip Kuekes stated that the crossbar latch "could someday replace transistors in computers, just as transistors replaced vacuum tubes and vacuum tubes replaced electromagnetic relays before them." Details The crossbar latch was introduced by HP Labs scientists in the Journal of Applied Physics, which provides a basis for constructing logic gates using memristors. The crossbar latch consists of a signal line crossed by two control lines. Depending on the voltages sent down the various lines, it can simulate the action of the three major logic gates: AND, OR and NOT. The abstract of the patent is as follows: Applications in arithmetic processing Greg Snider of Hewlett-Packard created this application, which uses crossbar latches to imitate the functionality of a half adder, which is the foundation of modern computing systems. A crossbar tile is created in this application from a layer of horizontal row wires and a layer of vertical column wires, with memristor or similar materials sandwiched between the horizontal and vertical wire layers. Each crossbar tile intersection or junction can be configured to be in a high-resistance state with little or no current flowing between the horizontal and vertical wires, or in a low-resistance state with current flowing. Fig. 1 illustrates the configuration of a half-adder using a crossbar tile, as taught by Snider, with the nodes identifying junctions of the crossbar tile configured as low-resistance states. By setting different logic inputs A, NOT A, B, and NOT B to different row wires this configuration produces the sum and carry outputs typical for a half-adder. Connections between multiple half-adders may then be used to form full adders in accordance with conventional arithmetic architectures. Applications of crossbar latch in neuromorphics Crossbar latches have been suggested as components of neuromorphic computing systems. One implementation of this is in the form of a neural network formed from nanowires as discussed in a patent by Greg Snider of Hewlett-Packard. See also Memristor References External links Research could send transistors the way of the vacuum tube (HP Press Release) HP claims molecular computing breakthrough (ComputerWorld) Electrical components
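To make the half-adder description above concrete, the following sketch models the crossbar tile purely at the boolean level: the row wires carry A, NOT A, B and NOT B; a column whose configured (low-resistance) junctions all see a high row is treated as a wired-AND product term; and an output that collects several product columns is treated as a wired-OR. This is an illustrative abstraction under those assumptions, not Snider's actual electrical design, and the junction patterns chosen here are hypothetical.

# Boolean abstraction (hypothetical) of a crossbar-tile half-adder.
def wired_and(rows, connected):
    """Product-term column: high only if every connected row wire is high."""
    return all(rows[name] for name in connected)

def half_adder(a: bool, b: bool):
    rows = {"A": a, "notA": not a, "B": b, "notB": not b}
    p1 = wired_and(rows, ["A", "notB"])    # A AND NOT B
    p2 = wired_and(rows, ["notA", "B"])    # NOT A AND B
    carry = wired_and(rows, ["A", "B"])    # A AND B
    sum_bit = p1 or p2                     # wired-OR of the two product columns
    return sum_bit, carry

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"A={int(a)} B={int(b)} -> sum={int(s)} carry={int(c)}")

Enumerating the four input combinations reproduces the half-adder truth table (sum = A XOR B, carry = A AND B), which is the behaviour the crossbar-tile configuration is meant to provide.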
Crossbar latch
Technology,Engineering
526
16,796,898
https://en.wikipedia.org/wiki/HD%2080606%20b
HD 80606 b (also Struve 1341 Bb or HIP 45982 b) is an eccentric hot Jupiter 217 light-years from the Sun in the constellation of Ursa Major. HD 80606 b was discovered orbiting the star HD 80606 in April 2001 by a team led by Michel Mayor and Didier Queloz. With a mass 4 times that of Jupiter, it is a gas giant. Because the planet transits the host star, its radius can be determined using the transit method, and was found to be about the same as Jupiter's. Its density is slightly less than Earth's. It has an extremely eccentric orbit like a comet, with its orbit taking it very close to its star and then back out very far away from it every 111 days. Discovery The variable radial velocity of HD 80606 was first noticed in 1999 from observations with the 10-m Keck 1 telescope at the W. M. Keck Observatory in Hawaii by the G-Dwarf Planet Search, a survey of nearly 1000 nearby G dwarfs to identify extrasolar planet candidates. The star was then followed up by the Geneva Extrasolar Planet Search team using the ELODIE spectrograph mounted on the 1.93-m telescope at the Haute-Provence Observatory. The discovery of HD 80606 b was announced on 4 April 2001. The transit was detected using a Celestron 35-cm Schmidt–Cassegrain telescope at the UCL Observatory. Prior to the large data release of the Kepler Mission in February 2011, HD 80606 b had the longest orbital period of any known transiting planet. It takes 12.1 hours to transit its star. The transit of 14 January 2010 was partially observed by MOST, but there were equipment failures over part of this time, and the 8 January secondary transit was entirely lost. The midpoint of the next transit was 1 February 2013 11:37 UT. Physical properties HD 80606 b has the most eccentric orbit of any known planet after HD 20782 b. Its eccentricity is 0.9336, comparable to Halley's Comet. The eccentricity may be a result of the Kozai mechanism, which would occur if the planet's orbit is significantly inclined to that of the binary stars. This interpretation is supported by measurements of the Rossiter–McLaughlin effect, which indicate that the planet's orbit may be significantly inclined (by 42°) to the rotational axis of the star, a configuration which would be expected if the Kozai mechanism were responsible for the orbit. As a result of this high eccentricity, the planet's distance from its star varies from 0.03 to 0.88 AU. At apastron it would receive an insolation similar to that of Earth, while at periastron the insolation would be around 800 times greater, far more than that experienced by Mercury in the Solar System. In 2009, the eclipse of HD 80606 b by its parent star was detected, allowing measurements of the planet's temperature to be made as the planet passed through periastron. These measurements indicated that the temperature rose from around 800 K to about 1,500 K in just 6 hours. An observer above the cloud tops of the gas giant would see the parent star swell to 30 times the apparent size of the Sun in our own sky. The planet's measured rotation period is longer than the predicted pseudo-synchronous rotation period of 40 hours. Weather The planet has wild variations in its weather as it orbits its parent star. Computer models predict the planet heats up in just a matter of hours, triggering "shock wave storms" that ripple out from the point facing its star, driven by extremely fast winds. 
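The quoted orbital extremes follow directly from the eccentricity. The short sketch below uses only figures stated above, the eccentricity of 0.9336 and the 0.03-0.88 AU range (from which an approximate semi-major axis is inferred), to recover the periastron and apastron distances and the swing in insolation; the semi-major axis is therefore an assumption derived from the text rather than a catalogued parameter.

# Orbital geometry of HD 80606 b from the values quoted in the article.
ECCENTRICITY = 0.9336
R_MIN_AU, R_MAX_AU = 0.03, 0.88          # periastron / apastron quoted above
a_au = (R_MIN_AU + R_MAX_AU) / 2.0       # inferred semi-major axis, about 0.455 AU

r_peri = a_au * (1.0 - ECCENTRICITY)     # closest approach
r_apo = a_au * (1.0 + ECCENTRICITY)      # farthest point

# Stellar flux falls off as 1/r^2, so the insolation contrast over one orbit is:
insolation_ratio = (r_apo / r_peri) ** 2

print(f"periastron ~ {r_peri:.3f} AU, apastron ~ {r_apo:.3f} AU")
print(f"insolation at periastron ~ {insolation_ratio:.0f} times that at apastron")

With these inputs the sketch returns a periastron near 0.03 AU, an apastron near 0.88 AU, and an insolation contrast of roughly 850, consistent with the figure of around 800 quoted above.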
Notes References External links Heating Up on a Distant Planet (ScienceFriday) Ursa Major Giant planets Exoplanets discovered in 2001 Exoplanets detected by radial velocity Transiting exoplanets
HD 80606 b
Astronomy
798
207,036
https://en.wikipedia.org/wiki/Copepod
Copepods (meaning "oar-feet") are a group of small crustaceans found in nearly every freshwater and saltwater habitat. Some species are planktonic (living in the water column), some are benthic (living on the sediments), several species have parasitic phases, and some continental species may live in limnoterrestrial habitats and other wet terrestrial places, such as swamps, under leaf fall in wet forests, bogs, springs, ephemeral ponds, puddles, damp moss, or water-filled recesses of plants (phytotelmata) such as bromeliads and pitcher plants. Many live underground in marine and freshwater caves, sinkholes, or stream beds. Copepods are sometimes used as biodiversity indicators. As with other crustaceans, copepods have a larval form. For copepods, the egg hatches into a nauplius form, with a head and a tail but no true thorax or abdomen. The larva molts several times until it resembles the adult and then, after more molts, achieves adult development. The nauplius form is so different from the adult form that it was once thought to be a separate species. The metamorphosis had, until 1832, led to copepods being misidentified as zoophytes or insects (albeit aquatic ones), or, for parasitic copepods, 'fish lice'. Classification and diversity Copepods are assigned to the class Copepoda within the superclass Multicrustacea in the subphylum Crustacea. An alternative treatment is as a subclass belonging to class Hexanauplia. They are divided into 10 orders. Some 13,000 species of copepods are known, and 2,800 of them live in fresh water. Characteristics Copepods vary considerably, but are typically about 1 to 2 mm long, with a teardrop-shaped body and large antennae. Like other crustaceans, they have an armoured exoskeleton, but they are so small that in most species, this thin armour and the entire body is almost totally transparent. Some polar copepods reach about 1 cm. Most copepods have a single median compound eye, usually bright red and in the centre of the transparent head. Subterranean species may be eyeless, and members of the genera Copilia and Corycaeus possess two eyes, each of which has a large anterior cuticular lens paired with a posterior internal lens to form a telescope. Like other crustaceans, copepods possess two pairs of antennae; the first pair is often long and conspicuous. Free-living copepods of the orders Calanoida, Cyclopoida, and Harpacticoida typically have a short, cylindrical body, with a rounded or beaked head, although considerable variation exists in this pattern. The head is fused with the first one or two thoracic segments, while the remainder of the thorax has three to five segments, each with limbs. The first pair of thoracic appendages is modified to form maxillipeds, which assist in feeding. The abdomen is typically narrower than the thorax, and contains five segments without any appendages, except for some tail-like "rami" at the tip. Parasitic copepods (the other seven orders) vary widely in morphology and no generalizations are possible. Because of their small size, copepods have no need of any heart or circulatory system (the members of the order Calanoida have a heart, but no blood vessels), and most also lack gills. Instead, they absorb oxygen directly into their bodies. Their excretory system consists of maxillary glands. Behavior The second pair of cephalic appendages in free-living copepods is usually the main time-averaged source of propulsion, beating like oars to pull the animal through the water. 
However, different groups have different modes of feeding and locomotion, ranging from almost immotile for several minutes (e.g. some harpacticoid copepods) to intermittent motion (e.g., some cyclopoid copepods) and continuous displacements with some escape reactions (e.g. most calanoid copepods). Some copepods have extremely fast escape responses when a predator is sensed, and can jump with high speed over a few millimetres. Many species have neurons surrounded by myelin (for increased conduction speed), which is very rare among invertebrates (other examples are some annelids and malacostracan crustaceans like palaemonid shrimp and penaeids). Even rarer, the myelin is highly organized, resembling the well-organized wrapping found in vertebrates (Gnathostomata). Despite their fast escape response, copepods are successfully hunted by slow-swimming seahorses, which approach their prey so gradually, it senses no turbulence, then suck the copepod into their snout too suddenly for the copepod to escape. Several species are bioluminescent and able to produce light. It is assumed this is an antipredatory defense mechanism. Finding a mate in the three-dimensional space of open water is challenging. Some copepod females solve the problem by emitting pheromones, which leave a trail in the water that the male can follow. Copepods experience a low Reynolds number and therefore a high relative viscosity. One foraging strategy involves chemical detection of sinking marine snow aggregates and taking advantage of nearby low-pressure gradients to swim quickly towards food sources. Diet Most free-living copepods feed directly on phytoplankton, catching cells individually. A single copepod can consume up to 373,000 phytoplankton per day. They generally have to clear the equivalent to about a million times their own body volume of water every day to cover their nutritional needs. Some of the larger species are predators of their smaller relatives. Many benthic copepods eat organic detritus or the bacteria that grow in it, and their mouth parts are adapted for scraping and biting. Herbivorous copepods, particularly those in rich, cold seas, store up energy from their food as oil droplets while they feed in the spring and summer on plankton blooms. These droplets may take up over half of the volume of their bodies in polar species. Many copepods (e.g., fish lice like the Siphonostomatoida) are parasites, and feed on their host organisms. In fact, three of the 10 known orders of copepods are wholly or largely parasitic, with another three comprising most of the free-living species. Life cycle Most nonparasitic copepods are holoplanktonic, meaning they stay planktonic for all of their lifecycles, although harpacticoids, although free-living, tend to be benthic rather than planktonic. During mating, the male copepod grips the female with his first pair of antennae, which is sometimes modified for this purpose. The male then produces an adhesive package of sperm and transfers it to the female's genital opening with his thoracic limbs. Eggs are sometimes laid directly into the water, but many species enclose them within a sac attached to the female's body until they hatch. In some pond-dwelling species, the eggs have a tough shell and can lie dormant for extended periods if the pond dries up. Eggs hatch into nauplius larvae, which consist of a head with a small tail, but no thorax or true abdomen. The nauplius moults five or six times, before emerging as a "copepodid larva". 
This stage resembles the adult, but has a simple, unsegmented abdomen and only three pairs of thoracic limbs. After a further five moults, the copepod takes on the adult form. The entire process from hatching to adulthood can take a week to a year, depending on the species and environmental conditions such as temperature and nutrition (e.g., egg-to-adult time in the calanoid Parvocalanus crassirostris is ~7 days at warmer rearing temperatures but 19 days at cooler ones). Biophysics Copepods can jump out of the water, a behaviour known as porpoising. The biophysics of this motion has been described by Waggett and Buskey 2007 and Kim et al. 2015. Ecology Planktonic copepods are important to global ecology and the carbon cycle. They are usually the dominant members of the zooplankton, and are major food organisms for small fish such as the dragonet, banded killifish, Alaska pollock, and other crustaceans such as krill in the ocean and in fresh water. Some scientists say they form the largest animal biomass on earth. Copepods compete for this title with Antarctic krill (Euphausia superba). The copepod Calanus glacialis inhabits the edge of the Arctic icepack, especially in polynyas where light (and photosynthesis) is present, in which they alone comprise up to 80% of zooplankton biomass. They bloom as the ice recedes each spring. The ongoing large reduction in the annual ice pack minimum may force them to compete in the open ocean with the much less nourishing C. finmarchicus, which is spreading from the North Sea and the Norwegian Sea into the Barents Sea. Because of their smaller size and relatively faster growth rates, and because they are more evenly distributed throughout more of the world's oceans, copepods almost certainly contribute far more to the secondary productivity of the world's oceans and to the global ocean carbon sink than krill, and perhaps more than all other groups of organisms together. The surface layers of the oceans are believed to be the world's largest carbon sink, absorbing about 2 billion tons of carbon a year, equivalent to perhaps a third of human carbon emissions, thus reducing their impact. Many planktonic copepods feed near the surface at night, then sink (by changing oils into more dense fats) into deeper water during the day to avoid visual predators. Their moulted exoskeletons, faecal pellets, and respiration at depth all bring carbon to the deep sea. About half of the estimated 14,000 described species of copepods are parasitic and many have adapted extremely modified bodies for their parasitic lifestyles. They attach themselves to bony fish, sharks, marine mammals, and many kinds of invertebrates such as corals, other crustaceans, molluscs, sponges, and tunicates. They also live as ectoparasites on some freshwater fish. Copepods as parasitic hosts In addition to being parasites themselves, copepods are subject to parasitic infection. The most common parasites are marine dinoflagellates of the genus Blastodinium, which are gut parasites of many copepod species. Twelve species of Blastodinium are described, the majority of which were discovered in the Mediterranean Sea. Most Blastodinium species infect several different hosts, but species-specific infection of copepods does occur. Generally, adult copepod females and juveniles are infected. During the naupliar stage, the copepod host ingests the unicellular dinospore of the parasite. The dinospore is not digested and continues to grow inside the intestinal lumen of the copepod. Eventually, the parasite divides into a multicellular arrangement called a trophont. 
This trophont is considered parasitic, contains thousands of cells, and can be several hundred micrometers in length. The trophont is greenish to brownish in color as a result of well-defined chloroplasts. At maturity, the trophont ruptures and Blastodinium spp. are released from the copepod anus as free dinospore cells. Not much is known about the dinospore stage of Blastodinium and its ability to persist outside of the copepod host in relatively high abundances. The copepod Calanus finmarchicus, which dominates the northeastern Atlantic coast, has been shown to be greatly infected by this parasite. A 2014 study in this region found up to 58% of collected C. finmarchicus females to be infected. In this study, Blastodinium-infected females had no measurable feeding rate over a 24-hour period. This is compared to uninfected females which, on average, ate 2.93 × 10⁴ cells per day. Blastodinium-infected females of C. finmarchicus exhibited characteristic signs of starvation, including decreased respiration, fecundity, and fecal pellet production. Though photosynthetic, Blastodinium spp. procure most of their energy from organic material in the copepod gut, thus contributing to host starvation. Underdeveloped or disintegrated ovaries and decreased fecal pellet size are a direct result of starvation in female copepods. Parasitic infection by Blastodinium spp. could have serious ramifications on the success of copepod species and the function of entire marine ecosystems. Blastodinium parasitism is not lethal, but has negative impacts on copepod physiology, which in turn may alter marine biogeochemical cycles. Freshwater copepods of the Cyclops genus are the intermediate host of the Guinea worm (Dracunculus medinensis), the nematode that causes dracunculiasis disease in humans. This disease may be close to being eradicated through efforts by the U.S. Centers for Disease Control and Prevention and the World Health Organization. Evolution Despite their modern abundance, due to their small size and fragility, copepods are extremely rare in the fossil record. The oldest known fossils of copepods are from the late Carboniferous (Pennsylvanian) of Oman, around 303 million years old, which were found in a clast of bitumen from a glacial diamictite. The copepods present in the bitumen clast were likely residents of a subglacial lake which the bitumen had seeped upwards through while still liquid, before the clast subsequently solidified and was deposited by glaciers. Though most of the remains were undiagnostic, at least some likely belonged to the extant harpacticoid family Canthocamptidae, suggesting that copepods had already substantially diversified by this time. Possible microfossils of copepods are known from the Cambrian of North America. Transitions to parasitism have occurred within copepods independently at least 14 different times, with the oldest record of this being from damage to fossil echinoids done by cyclopoids from the Middle Jurassic of France, around 168 million years old. Practical aspects In marine aquaria Live copepods are used in the saltwater aquarium hobby as a food source and are generally considered beneficial in most reef tanks. They are scavengers and also may feed on algae, including coralline algae. Live copepods are popular among hobbyists who are attempting to keep particularly difficult species such as the mandarin dragonet or scooter blenny. They are also popular with hobbyists who want to breed marine species in captivity. 
In a saltwater aquarium, copepods are typically stocked in the refugium. Water supplies Copepods are sometimes found in public main water supplies, especially systems where the water is not mechanically filtered, such as New York City, Boston, and San Francisco. This is not usually a problem in treated water supplies. In some tropical countries, such as Peru and Bangladesh, a correlation has been found between copepods' presence and cholera in untreated water, because the cholera bacteria attach to the surfaces of planktonic animals. The larvae of the guinea worm must develop within a copepod's digestive tract before being transmitted to humans. The risk of infection with these diseases can be reduced by filtering out the copepods (and other matter), for example with a cloth filter. Copepods have been used successfully in Vietnam to control disease-bearing mosquitoes such as Aedes aegypti that transmit dengue fever and other human parasitic diseases. The copepods can be added to water-storage containers where the mosquitoes breed. Copepods, primarily of the genera Mesocyclops and Macrocyclops (such as Macrocyclops albidus), can survive for periods of months in the containers, if the containers are not completely drained by their users. They attack, kill, and eat the younger first- and second-instar larvae of the mosquitoes. This biological control method is complemented by community trash removal and recycling to eliminate other possible mosquito-breeding sites. Because the water in these containers is drawn from uncontaminated sources such as rainfall, the risk of contamination by cholera bacteria is small, and in fact no cases of cholera have been linked to copepods introduced into water-storage containers. Trials using copepods to control container-breeding mosquitoes are underway in several other countries, including Thailand and the southern United States. The method, though, would be very ill-advised in areas where the guinea worm is endemic. The presence of copepods in the New York City water supply system has caused problems for some Jewish people who observe kashrut. Copepods, being crustaceans, are not kosher, nor are they quite small enough to be ignored as nonfood microscopic organisms, since some specimens can be seen with the naked eye. Hence, large specimens are certainly non-Kosher. However, some species are visible to the naked eye, but are small enough that they only appear as little white specks. These are problematic, as it is a question as to whether they are considered visible enough to be non-Kosher. When a group of rabbis in Brooklyn, New York, discovered these copepods in the summer of 2004, they triggered such debate in rabbinic circles that some observant Jews felt compelled to buy and install filters for their water. The water was ruled kosher by posek Yisrael Belsky, chief posek of the OU and one of the most scientifically literate poskim of his time. Meanwhile, Rabbi Dovid Feinstein, based on the ruling of Rabbi Yosef Shalom Elyashiv - the two widely considered to be the greatest poskim of their time - ruled it was not kosher until filtered. Several major kashrus organizations (e.g OU Kashrus and Star-K) require tap water to have filters. In popular culture The Nickelodeon television series SpongeBob SquarePants features a copepod named Sheldon J. Plankton as a recurring character. 
See also Particle (ecology) World Association of Copepodologists References External links Copepod fact sheet – Guide to the marine zooplankton of south eastern Australia Diversity and geographical distribution of pelagic copepoda Copepod World Neotropical Copepoda Database Project The World Copepod Culture Database Bioindicators Extant Early Cretaceous first appearances Maxillopoda
Copepod
Chemistry,Environmental_science
3,867
3,385,228
https://en.wikipedia.org/wiki/Sod%20house
The sod house or soddy was a common alternative to the log cabin during frontier settlement of the Great Plains of Canada and the United States in the 1800s and early 1900s. Primarily used at first for animal shelters, corrals, and fences, they came into use also to house humans, for the prairie often lacked standard building materials such as wood or stone, while sod from thickly rooted prairie grass was abundant and free and could be used for house construction. Prairie grass has a much thicker, tougher root structure than a modern lawn. Construction of a sod house involved cutting patches of sod in triangles and piling them into walls. Builders employed a variety of roofing methods. Sod houses accommodated normal doors and windows. The resulting structure featured less expensive materials and was quicker to build than a wood-frame house, but required frequent maintenance and was vulnerable to rain damage, especially if the roof was also primarily of sod. Stucco was sometimes used to protect the outer walls. Canvas or stucco often lined the interior walls. There are a variety of designs, including a type built by Mennonites in Prussia, Russia, and Canada called a semlin and another in Alaska known as a barabara. Notable sod houses Sod houses that are individually notable and historic sites that include one or more sod houses or other sod structures include: Iceland Skagafjörður Folk Museum, turf/sod houses of the burstabær style in Glaumbær. Arbaer Folk Museum. Canada Addison Sod House, a Canadian National Historic Landmark building, in Saskatchewan. L'Anse aux Meadows, the site of the pioneering 10th–11th century CE Norse settlement near the northern tip of Newfoundland, has reconstructions of eight sod houses in their original locations, used for various purposes when built by Norse settlers there a millennium ago. The Mennonite Heritage Village in Steinbach, Manitoba contains a Mennonite-style sod hut known as a semlin United States Cottonwood Ranch, Sheridan County, Kansas. The ranch site, listed in the National Register of Historic Places (NRHP), included a sod stable. Dowse Sod House, near Comstock, Nebraska; NRHP-listed and operated as museum. Heman Gibbs Farmstead, Falcon Heights, Minnesota; the NRHP-listed site includes a replica of the original 1849 sod house. Jackson-Einspahr Sod House, Holstein, Nebraska, NRHP-listed. Leffingwell Camp Site, Flaxman Island, Alaska, listed on the U.S. National Register of Historic Places (NRHP). Minor Sod House, McDonald, Kansas, NRHP-listed. Page Soddy, Harper County, Oklahoma, NRHP-listed. Pioneer Sod House, Wheat Ridge, Colorado, NRHP-listed. Gustav Rohrich Sod House, Bellwood, Nebraska, NRHP-listed. Sod House (Cleo Springs, Oklahoma), also known as Marshall McCully Sod House, NRHP-listed. Sod House Ranch, Burns, Oregon (does not include a sod house). Wallace W. Waterman Sod House, Big Springs, Nebraska, NRHP-listed. The Netherlands Netherlands Open Air Museum near Arnhem. De Spitkeet in Harkema is an open air museum reconstructing how people used to live in the area. Ellert en Brammert, a museum in Drenthe with multiple sod houses. See also Burdei Canadian Prairies Dugout (shelter) Earth structure Earth shelter Icelandic turf houses Rammed earth Sod roof Vernacular architecture References Further reading Two books by Solomon D. 
Butcher (1856–1927), Nebraska photographer of the homestead era, whose works include over 1,000 photos of sod houses: Pioneer History of Custer County and Short Sketches of Early Days in Nebraska (1901), and Sod Houses, or the Development of the Great American Plains (1904) American frontier House types Western (genre) staples and terminology Earth structures is:Torfbær
Sod house
Engineering
827
683,109
https://en.wikipedia.org/wiki/Triality
In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes those special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of the 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry. Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group S3 which acts by permuting the three legs. This gives rise to an S3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the vector representation and two chiral spin representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on R8—consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations. No other connected Dynkin diagram has an automorphism group of order greater than 2; for other Dn (corresponding to other even Spin groups, Spin(2n)), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation. Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality". The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4. General formulation A duality between two vector spaces V and W over a field F is a non-degenerate bilinear form V × W → F, i.e., for each non-zero vector v in one of the two vector spaces, the pairing with v is a non-zero linear functional on the other. Similarly, a triality between three vector spaces V1, V2, V3 over a field F is a non-degenerate trilinear form V1 × V2 × V3 → F, i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two. By choosing vectors ei in each Vi on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by V, the triality may be re-expressed as a bilinear multiplication V × V → V in which each ei corresponds to the identity element in V. The non-degeneracy condition now implies that V is a composition algebra. It follows that V has dimension 1, 2, 4 or 8. If further F = R and the form used to identify V with its dual is positive definite, then V is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O. Conversely, composition algebras immediately give rise to trialities by taking each Vi equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form. An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8). 
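As a concrete low-dimensional check of the construction just described, the sketch below works in the 4-dimensional (quaternionic) case: contracting the algebra multiplication with the Euclidean inner product gives a trilinear form t(x, y, z) = <x*y, z>, and fixing a unit vector in one slot yields a non-degenerate pairing of the other two factors. This is a numerical illustration of the statement, not a proof; quaternions are used rather than octonions purely for brevity.

# Numerical check of the trilinear form t(x, y, z) = <x*y, z> for quaternions.
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def t(x, y, z):
    """Trilinear form: algebra multiplication contracted with the inner product."""
    return float(np.dot(qmul(x, y), z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
x /= np.linalg.norm(x)                    # fix a unit vector in the first slot

# The induced pairing B(y, z) = t(x, y, z) should be non-degenerate, i.e. its
# Gram matrix in the standard basis has full rank.
basis = np.eye(4)
gram = np.array([[t(x, e_i, e_j) for e_j in basis] for e_i in basis])
print("rank of induced pairing:", np.linalg.matrix_rank(gram))   # expect 4

For any non-zero x the printed rank is 4, so the pairing of the remaining two slots is non-degenerate, which is exactly the duality that the general formulation says each non-zero vector must induce.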
See also Triple product, may be related to the 4-dimensional triality (on quaternions) References John Frank Adams (1981), Spin(8), Triality, F4 and all that, in "Superspace and supergravity", edited by Stephen Hawking and Martin Roček, Cambridge University Press, pages 435–445. John Frank Adams (1996), Lectures on Exceptional Lie Groups (Chicago Lectures in Mathematics), edited by Zafer Mahmud and Mamora Mimura, University of Chicago Press, . Further reading External links Spinors and Trialities by John Baez Triality with Zometool by David Richter Lie groups Spinors
Triality
Mathematics
917
41,799,215
https://en.wikipedia.org/wiki/Lakester
A Lakester is a car with a streamlined body but with four exposed wheels. It is most often made out of a modified aircraft drop tank. The main attraction is the drop tank's excellent aerodynamics due to it being streamlined for its original use on aircraft. Building lakesters became popular after World War II when surplus drop tanks were available cheaply. History During the late 1940s Bill Burke of the So-Cal Speed Shop built the first "Lakester" from a surplus aircraft drop tank. The idea of using a tank as an aerodynamic car body came to Burke when he saw some drop tanks on a barge being taken ashore at Guadalcanal. Burke recalls thinking, "My god, what a beautiful piece of streamlining that is!" With a tape measure, Burke went aboard and measured one of the tanks. He knew the dimensions of a Ford rear end and engine block, and he could see that the automotive components would fit. Production After World War II, surplus tanks were sold for $35 or $40 apiece, and hundreds of them were stockpiled in surplus yards. Burke's first Lakester was created from a 168-gallon tank used on the P-51 Mustang. However, with experience it was found that the 315-gallon tank used on the P-38 Lightning was more practical due to its greater size. The tanks consisted of two halves bolted together, however since the top half had fuel openings and all the necessary hardware to fasten it to the aircraft, usually only two bottom halves were used to create a Lakester. Racing The Lakester's first race appearance was at Bonneville Salt Flats. Even today, Lakesters can still be seen racing there. See also Drop tank So-Cal Speed Shop References External links 2003 So-Cal Lakester Vehicles by type Vehicles
Lakester
Physics
366
22,449,980
https://en.wikipedia.org/wiki/Coprinellus%20verrucispermus
Coprinellus verrucispermus is a species of fungus in the family Psathyrellaceae. The fungus has been identified in a study of soils from the northern-central region of New South Wales, Australia. Formerly in the genus Coprinus, it was given its current name in 2001. References verrucispermus Fungi described in 1988 Fungi of Australia Fungus species
Coprinellus verrucispermus
Biology
80
43,630,513
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi%20Prize
The Erdős–Rényi Prize of the Network Science Society is named after Paul Erdős and Alfréd Rényi. This international prize is awarded annually in a special ceremony at the International Conference on Network Science to a selected young scientist (under 40 years old on the day of the nomination deadline) for their research achievements in the area of network science, broadly construed. While the achievements can be both theoretical and experimental, the prize is aimed at emphasizing outstanding contributions relevant to the interdisciplinary progress of network science. Past recipients are: 2012: Roger Guimera, Rovira i Virgili University, for outstanding work as a young researcher in Network Science for the technical depth and the interdisciplinary values of his scientific contributions to the analysis of network cartography and community identification. 2013: Adilson E. Motter, Northwestern University, for his groundbreaking contributions to the study of synchronization phenomena and the control of cascading failures in complex networks. 2014: Mason A. Porter, University of Oxford, for his fundamental research on the mathematics of networks and his outreach efforts to teach network science to students in schools. 2015: Chaoming Song, University of Miami, for the breadth and depth of his influential work, ranging from network applications of self-similarity and renormalization group theory to the in-depth analysis of big data on human mobility. 2016: Aaron Clauset, University of Colorado Boulder, for his contributions to the study of network structure, including Internet mapping, inference of missing links, and community structure, and for his provocative analyses of human conflicts and social stratification. 2017: Vittoria Colizza, Inserm, for contributions to fundamental and data-driven network-based modeling of epidemic processes, including seminal studies on metapopulation systems, the impact of air transportation, and the predictability of epidemic outbreaks. 2018: Danielle Bassett, University of Pennsylvania, for fundamental contributions to our understanding of the network architecture of the human brain, its evolution over learning and development, and its alteration in neurological disease. 2019: Tiago P. Peixoto, Central European University, for groundbreaking contributions to the statistical analysis and visualization of networks, including efficient and principled inference algorithms based on the stochastic block model, and compression and prediction of richly annotated or hierarchical structures. 2020: Sonia Kéfi, CNRS, for foundational and empirically grounded theoretical research that has advanced network science and its applications in ecology, with a focus on multiple types of interactions among species and the implications for global change, opening the path to new ways to study ecosystems. 2021: Dashun Wang, Northwestern University, for foundational and empirically grounded theoretical research that has advanced network science and its applications in the field of computational social sciences, with a focus on the science of science and on developing methods related to quantifying and improving human achievement and standards-of-living. 2022: Linyuan Lü, University of Electronic Science and Technology of China, for groundbreaking contributions to network information filtering, including seminal works on link prediction and detection of influential nodes in networked structures, and their technological applications. 2023: Daniel B. 
Larremore, University of Colorado Boulder, for socially impactful contributions to network models of human-disease dynamics, with applications to malaria and the COVID-19 pandemic, and for foundational research in the theoretical and practical use of algorithms for community detection. 2024: Antoine Allard, Université Laval, for the breadth and depth of his contributions to modeling complex systems as networks, including the geometry of networks and the role of heterogeneity and superspreading in contemporary diseases and complex contagions. See also List of computer science awards References Computer science awards
Erdős–Rényi Prize
Technology
752
15,403,926
https://en.wikipedia.org/wiki/All-number%20calling
All-number calling (ANC) is a telephone numbering plan that was introduced into the North American Numbering Plan by the Bell System in the United States starting in 1958 to replace the previous system of using a telephone exchange name as the first part of a telephone number. The plan prescribed the format of a telephone number assigned to subscriber telephones to consist of ten digits, composed from a three-digit area code, a three-digit central office code, and a four-digit station number. This increased the number of effectively available central office codes in each numbering plan area (NPA) from 540 to 792, thereby staving off the threat of exhausting the number pool, which was forecast to occur by the late 20th century. History Until the 1950s, a typical telephone number in the United States and many other countries consisted of a telephone exchange name and a four- or five-digit subscriber number. The first two or three letters of the exchange name translated into digits given by a mapping typically displayed on the telephone's rotary dial by grouping the letters around the associated digit. The table (right) shows the typical assignment in the Bell System in use at the time. The letter Q was not used, and Z was translated to 0 (zero) on some dials, albeit never used in the name system. For example, a New Yorker's telephone number might have been CHelsea 2-5034, which a calling telephone subscriber would dial as the digit sequence 2425034, translating C to 2, and H to 4. After World War II, the newly conceived nationwide numbering plan of 1947 sought to unify all local numbering plans by using a system of two central office letters and one digit to complete the office prefix, and four digits for line number. This system was referred to as 2L-5N, or simply 2-5. This plan was projected to be usable beyond the year 2000. However, with increasing demand for telephone service in the post-war decades, it became apparent by the late 1950s, that the system would be outgrown by about 1975. The limitations for the usable leading digits of central office codes, imposed by using common names for central office names, and their leading two characters as guides for customer dialing could no longer be maintained when opening new central offices. By 1962 it was forecast that in 1985 the number of telephones in the nation would equal its population of 280 million and increase to 600 million telephones for 340 million people in 2000. As a result, the North American telephone administrations first introduced letter combinations that could not be linked to a familiar pronounceable central office name in some highly populated states, such as New York. The final solution to the growing threat of numbering exhaustion was decided by AT&T engineers and administrators from in-depth studies of all alternative methods. It was decided to eliminate the restrictions of using names and lettered prefixes. This goal had been envisioned internally within AT&T for some time. In 1954, John Karlin directed a research project that investigated the memory capacity and dialing accuracy of employees when using seven-digit telephone numbers comprising only digits. At that time, some directory publishing departments also began removing the entire central office name from telephone directories, preferring to only list the dialed letters of the prefix. 
The practice did not produce any adverse effects, and opened the path for listing telephone numbers in the 2-5 style, where the two letters were unrelated to any pronounceable name. Under all-number calling, the number of permissible central office prefixes increased from 540 to potentially 800, but the first two digits of the central office code were still restricted to the range 2 to 9, and the eight combinations that ended in 11 were reserved as special calling codes. This increased the numbering pool for central office codes to 640, and resulted in the partitioning of the prefix space (000—999) according to the table at the right. All-number calling was first field-tested in Wichita Falls, Texas starting on January 19, 1958, with the installation of a new dial exchange. The results indicated a substantial reduction of dialing errors over new system installations that used the 2-5 numbering system. In small communities the new system was met with little resistance. In Council Bluffs, Iowa with roughly 26,000 telephone subscribers, it also caused no major resistance in March 1960. During larger scale introductions in California in 1962, this change sparked an intense outcry among urban users who considered all-digit dialing to be dehumanizing. Karlin, the inventor of the all-number system, stated that he had been approached by a woman at a cocktail party, "Are you the John Karlin who is responsible for all-number dialing?" He proudly replied, "Yes, I am." She then asked him, "How does it feel to be the most hated man in America?" Opponents created a variety of organizations to oppose all-number calling, including the Anti-Digit Dialing League and the Committee of Ten Million to Oppose All-Number Calling to pressure AT&T to drop the plan. Other countries introduced similar transitions for eliminating exchange names. In the United Kingdom, the new system was known as all-figure dialling. See also Rotary dial E.161 ITU-T recommendation References External links A Capsule of History of the Bell System Telephone numbers Telephone numbers in the United States
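For illustration, the sketch below implements the rotary-dial letter grouping described earlier (Q unused, Z mapped to 0 on some dials) and reproduces the CHelsea 2-5034 example; the convention that capital letters mark the dialed part of the exchange name in a directory listing is an assumption made here for the demonstration.

# Exchange-name translation under the pre-ANC letter mapping (illustrative sketch).
LETTER_GROUPS = {
    "ABC": "2", "DEF": "3", "GHI": "4", "JKL": "5",
    "MNO": "6", "PRS": "7", "TUV": "8", "WXY": "9", "Z": "0",
}
DIGIT_OF = {letter: digit
            for letters, digit in LETTER_GROUPS.items()
            for letter in letters}

def dial_sequence(listed_number: str) -> str:
    """Convert a 2L-5N listing such as 'CHelsea 2-5034' to its dialed digits.
    Capital letters are taken to be the dialed part of the exchange name;
    lower-case letters, spaces and hyphens are skipped."""
    digits = []
    for ch in listed_number:
        if ch.isdigit():
            digits.append(ch)
        elif ch.isupper() and ch in DIGIT_OF:
            digits.append(DIGIT_OF[ch])
    return "".join(digits)

print(dial_sequence("CHelsea 2-5034"))   # prints 2425034, as in the example above

Running this on the listing quoted earlier prints 2425034, the digit sequence a subscriber would actually dial.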
All-number calling
Mathematics
1,100
22,993,469
https://en.wikipedia.org/wiki/USAir%20Flight%20427
USAir Flight 427 was a scheduled flight from Chicago's O'Hare International Airport to Palm Beach International Airport, Florida, with a stopover at Pittsburgh International Airport. On Thursday, September 8, 1994, the Boeing 737 flying this route crashed in Hopewell Township, Pennsylvania while approaching Runway 28R at Pittsburgh, which was USAir's largest hub at the time. The ensuing inquiry became the second-longest air crash investigation in history. The investigation into USAir 427 also helped to solve the crash of United Airlines Flight 585. The National Transportation Safety Board (NTSB) determined that the probable cause was that the aircraft's rudder malfunctioned and went hard over in a direction opposite to that commanded by the pilots, causing the plane to enter an aerodynamic stall from which Captain Peter Germano and First Officer Charles B. Emmett III were unable to recover. All 132 people on board were killed, making the accident the deadliest air disaster in Pennsylvania's history. The investigation concluded that hot hydraulic fluid entering the cold dual servo valve of the rudder power control unit caused the valve to jam, driving the rudder opposite to the direction commanded by the pilots. Background Aircraft The aircraft involved, manufactured in 1987, was a Boeing 737-3B7 registered as N513AU with serial number 23699. It had logged a total of 23,846 hours of flight time in 14,489 take-off and landing cycles. It was powered by two CFM International CFM56-3B-2 engines. Crew The flight crew consisted of Captain Peter Germano, aged 45, who was hired by USAir on February 4, 1981, and First Officer Charles B. "Chuck" Emmett III, aged 38, who was hired in February 1987 by Piedmont Airlines (which merged into USAir in 1989). Both were regarded as excellent pilots and were very experienced: Germano logged approximately 12,000 flight hours, including 4,064 on the Boeing 737, while Emmett logged 9,119 flight hours, 3,644 on the 737. Flight attendants Stanley Canty and April Slater were hired in 1989 by Piedmont Airlines. Flight attendant Sarah Slocum-Hamley was hired in October 1988 by USAir. Accident In its arrival phase approaching Pittsburgh, Flight 427 was sequenced behind Delta Air Lines Flight 1083, a Boeing 727-200. According to radar data, Flight 427 was never less than several miles behind Delta 1083. Flight 427 was on approach in the flaps 1 configuration. At 19:02:57, the aircraft entered the wake turbulence of Delta 1083, and three sudden thumps, clicking sounds and a louder thump occurred, after which the 737 began to bank and roll to the left. The autopilot disconnected, and First Officer Emmett stomped on the rudder pedal, and held it there for the remainder of the flight, unaware that the rudder had reversed hard to the left. As the aircraft's heading and bank angle skewed dramatically to the left, Emmett and Germano both rolled their yokes to the right and pulled back on the elevators to counter the gradually decreasing pitch angle, as the stick shaker activated and the airplane entered an aerodynamic stall, caused by the wing exceeding its critical angle of attack. As the stick shaker activated, Germano exclaimed "Hold on!" numerous times, while Emmett, under physical exertion, said, "Oh shit!" Germano exclaimed, "What the hell is this?" As air traffic control noticed Flight 427 descending without permission, Germano keyed the microphone and stated, "Four-twenty-seven, emergency!" Because the microphone remained keyed for the rest of the accident, the ensuing exclamations in the cockpit were heard in the tower at Pittsburgh. 
The aircraft continued to roll while pitched nose-down at the ground. Trying to counteract sharply rising G-forces, Germano yelled "Pull!" three consecutive times before screaming; Emmett said "God, no" seconds before impact. Pitched 80° nose-down and banked 60° left, the 737 slammed into the ground and exploded at 19:03:25 in Hopewell Township, Beaver County, near Aliquippa, approximately 28 seconds after entering the wake turbulence. At the time of the accident, many people had gathered at a nearby soccer field for evening soccer practice. These people witnessed the crash of the aircraft and described the plane as suddenly falling out of the sky. Investigation The NTSB investigated the crash. All 127 passengers and 5 crew members on board were killed. For the first time in NTSB history, investigators were required to wear full-body biohazard suits while inspecting the accident site. As a result of the severity of the crash impact, the bodies of the passengers and crew were severely fragmented, leading investigators to declare the site a biohazard, requiring 2,000 body bags for the 6,000 recovered human remains. USAir had difficulty determining Flight 427's passenger list, facing confusion regarding five or six passengers. Several employees of the United States Department of Energy had tickets to take later flights, but used them to fly on Flight 427. One young child was not ticketed. Among the victims of the crash was noted neuroethologist Walter Heiligenberg. Both the cockpit voice recorder (CVR) and flight data recorder (FDR) were recovered and used for the investigation. Because of the limited parameters recorded by the FDR, investigators did not have access to the position of the flight-control surfaces (rudder, ailerons, elevator, etc.) during the accident sequence. However, two parameters recorded were crucial — the aircraft's heading and the pitch-control yoke position. During the approach, Flight 427 encountered wake turbulence from Delta 1083, but the FAA determined "the wake vortex encounter alone would not have caused the continued heading change that occurred after 19:03:00." The abrupt heading change shortly before the dive pointed investigators immediately to the rudder. Without data relating to the rudder pedal positions, investigators attempted to determine whether the rudder moved hard over by a malfunction or by pilot command. The CVR was heavily scrutinized as investigators examined the pilots' words and their breathing to determine whether they were fighting for control over a rudder malfunction or had inadvertently stomped on the wrong rudder pedal in reaction to the wake turbulence. Boeing felt the latter more likely, while USAir and the pilots' union felt that the former was more likely. The FDR revealed that after the aircraft stalled, the plane and its occupants were subjected to a load as high as 4 g throughout the dive until impact with the ground in an 80-degree nose-down attitude under significant sideslip. Reading the control-yoke data from the FDR revealed that the pilots made a crucial error by pulling back on the yoke throughout the dive, with the stick shaker audible on the CVR from the onset of the dive. This raised the aircraft's angle of attack, removed all aileron authority, prevented recovery from the roll induced by the rudder and caused an aerodynamic stall. Because the aircraft had entered a slip, pulling back on the yoke only further aggravated the bank angle. 
Boeing's test pilots reenacted the dive in a simulator and in a test 737-300 by flying with the same parameters recorded by the accident FDR, and found that recovery from a fully deflected rudder at level flight, while at 190-knot crossover speed, was accomplished by turning the wheel to the opposite direction of the roll, and not pulling back on the yoke to regain aileron authority. The FAA later remarked that the CVR proved that the pilots failed to utilize proper crew resource management during the upset while continuing to apply full up elevator after receiving a stall warning. The NTSB remarked that no airline had ever trained a pilot to properly recover from the situation experienced by the Flight 427 pilots and that the pilots had just 10 seconds from the onset of the roll to troubleshoot before recovery of the aircraft was impossible. Investigators later discovered that the recovered accident rudder power control unit was much more sensitive to bench tests than new control units. The exact mechanism of the failure involved the servo valve, which remains dormant and cold for much of the flight at high altitude, seizing after being injected with hot hydraulic fluid that has been in continuous action throughout the plane. This specific condition occurred in fewer than 1% of the laboratory tests, but explained the rudder malfunction that caused Flight 427 to crash. The jam left no trace of evidence after it occurred, and a Boeing engineer later found that a jam under this controlled condition could also lead to the slide moving in the opposite direction than that commanded. Boeing felt that the test results were unrealistic and inapplicable, given the extremes under which the valve was tested. It stated that the cause of the rudder reversal was more likely psychological and likened the event to a circumstance in which an automobile driver panics during an accident and accidentally presses on the gas pedal rather than the brake pedal. The official position of the Federal Aviation Administration (FAA) was that sufficient probable cause did not exist to substantiate the possibility of rudder system failure. After the longest accident investigation in NTSB history — lasting more than four and a half years — the NTSB released its final report on March 24, 1999. The NTSB concluded that the accident was the result of mechanical failure: The NTSB concluded that similar rudder problems had caused the previously mysterious March 3, 1991 crash of United Airlines Flight 585 and the June 9, 1996 incident involving Eastwind Airlines Flight 517, both Boeing 737s. The final report also included detailed responses to Boeing's arguments about the causes of the three accidents. Aftermath At the time of the crash, Flight 427 was the second-deadliest accident involving a Boeing 737 (all series); as of 2024, it now ranks as the ninth-deadliest. It was also the seventh-deadliest aviation disaster in the history of the United States, and the deadliest in the U.S. involving a 737; as of 2024, it ranks eleventh. The accident marked USAir's fifth crash in the period from 1989 to 1994. The Commonwealth of Pennsylvania spent approximately $500,000 in recovery and cleanup for the accident site. 
The FAA disagreed with the NTSB's probable-cause verdict, and Tom McSweeney, the FAA's director of aircraft certification, issued a statement the day the NTSB report was released: "We believe, as much as we have studied this aircraft and this rudder system, that the actions we have taken assure a level of safety that is commensurate with any aircraft." However, the FAA changed its position after a special task force, the Engineering Test and Evaluation Board, reported in July 2000 that it had detected 46 potential failures and jams in the 737 rudder system that could have catastrophic effects. In September 2000, the FAA announced that it wanted Boeing to redesign the rudder for all iterations of the 737, affecting more than 3,400 aircraft in the U.S. alone. USAir submitted to the NTSB that pilots should receive training on a plane's crossover speed and on recovery from full rudder deflection. As a result, pilots were warned of, and trained to deal with, insufficient aileron authority at airspeeds at or below , formerly the usual approach speed for a Boeing 737. Boeing maintained that the most likely cause of the accident was that the co-pilot inadvertently deflected the rudder hard over in the wrong direction while in a panic and, for unknown reasons, maintained this input until impact with the ground. Boeing nevertheless agreed to redesign the rudder control system with a redundant backup and paid to retrofit the entire worldwide 737 fleet. Following one of the NTSB's main recommendations, airlines were required to add four additional channels of information to flight data recorders in order to capture pilot rudder-pedal commands, and the FAA set a deadline of August 2001 for airlines to comply. In 2016, former investigator John Cox stated that time had proven the NTSB correct in its findings, because no additional rudder-reversal incidents have occurred since Boeing's redesign. Following the airline's response to the Flight 427 accident, the United States Congress required that airlines "provide families of crash victims courteous and sensitive treatment and assistance with the various needs that accompany an accident". USAir ceased using Flight 427 as a flight number. The accident was the second fatal USAir crash in just over two months, following the July 2 Flight 1016 accident at Charlotte Douglas International Airport that killed 37. The crashes contributed to the financial crisis that USAir was experiencing at the time. Memorial The crash site, near the Aliquippa exit of I-376, is located on private property. The road to the site is accessible only to members of the 427 Support League and the Pine Creek Land Conservation Trust. Three tombstones are located at Sewickley Cemetery, from the site of the crash and within the flight path of USAir 427. In popular culture The New Yorker published an article on the Flight 427 investigation, "Searching for the Cause of a Catastrophic Plane Crash" by Jonathan Harr, on July 28, 1996. Bill Adair published The Mystery of Flight 427: Inside a Crash Investigation in 2002. The Discovery Channel Canada / National Geographic TV series Mayday, also known as Air Disasters, dramatized the crash of Flight 427 and the subsequent 737 rudder investigation in the 2007 episode "Hidden Danger". The accident was also dramatized in the episode "Fatal Flaws" of Why Planes Crash. 
See also Boeing 737 rudder issues Similar incidents United Airlines Flight 585 Eastwind Airlines Flight 517 American Airlines Flight 1 Northwest Airlines Flight 85 American Airlines Flight 587 Indonesia AirAsia Flight 8501 References Informational notes Citations External links NTSB Accident Report - Copy at Embry–Riddle Aeronautical University NTSB Accident Investigation Docket (Alternate) CVR Transcript (Archive) FDR Transcript (Archive) Boeing's report to the NTSB (Archive) ALPA's report to the NTSB (Addendum) (Archive) (Addendum archive) Photos of the accident aircraft from Airliners.net 25 years ago, USAir Flight 427 crashed near Hopewell despite perfect conditions Aviation accidents and incidents in 1994 1994 in Pennsylvania Accidents and incidents involving the Boeing 737 Classic Airliner accidents and incidents caused by design or manufacturing errors Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents in Pennsylvania Aviation accidents and incidents in the United States in 1994 Beaver County, Pennsylvania 427 September 1994 events in the United States Aviation accidents and incidents caused by wake turbulence Aviation accidents and incidents caused by loss of control Pittsburgh International Airport
USAir Flight 427
Materials_science
3,022
34,165,119
https://en.wikipedia.org/wiki/Alexander%20Nabutovsky
Alexander Nabutovsky is a leading Canadian mathematician specializing in differential geometry, the geometric calculus of variations, and quantitative aspects of the topology of manifolds. He is a professor in the Department of Mathematics at the University of Toronto. Nabutovsky earned his Ph.D. from the Weizmann Institute of Science in 1993; his advisor was Shmuel Kiro. He was an invited speaker in the Geometry section of the 2010 International Congress of Mathematicians in Hyderabad. References External links Living people Canadian mathematicians Academic staff of the University of Toronto Geometers Weizmann Institute of Science alumni Year of birth missing (living people)
Alexander Nabutovsky
Mathematics
125
24,132,172
https://en.wikipedia.org/wiki/WASP-18
WASP-18 is a magnitude 9 star located away in the Phoenix constellation of the southern hemisphere. It has a mass of 1.29 solar masses. The star, although similar to the Sun in terms of overall contents of heavy elements, is depleted in carbon. The carbon to oxygen molar ratio of 0.23 for WASP-18 is well below the solar ratio of 0.55. There is a red dwarf companion star at a separation of 3,519 AU. Planetary system In 2009, the SuperWASP project announced the discovery of a large, hot Jupiter type exoplanet, WASP-18b, orbiting very close to this star. It has an orbital period of less than a day and a mass 10 times that of Jupiter. Observations from the Chandra X-ray Observatory failed to find any X-rays coming from WASP-18, and it is thought that this is caused by WASP-18b disrupting the star's magnetic field by causing a reduction in convection in the star's atmosphere. Tidal forces from the planet may also explain the higher amounts of lithium measured in earlier optical studies of WASP-18. A 2019 study proposed a second candidate planet with a 2-day orbital period based on transit-timing variations, but a 2020 study using data from both TESS and ground-based surveys ruled out the existence of a planet with the proposed properties, setting an upper limit of 10 Earth masses on any planet with this period. See also SuperWASP List of exoplanets References Phoenix (constellation) Planetary transit variables F-type subgiants F-type main-sequence stars M-type main-sequence stars Binary stars Planetary systems with one confirmed planet J01372503-4540404 CD-46 00449 10069 07562 0185 100100827 18
WASP-18
Astronomy
371
14,860,424
https://en.wikipedia.org/wiki/EKV%20MOSFET%20model
The EKV MOSFET model is a mathematical model of metal-oxide-semiconductor field-effect transistors (MOSFETs) which is intended for circuit simulation and analog circuit design. It was developed at EPFL in Switzerland by Christian C. Enz, François Krummenacher and Eric A. Vittoz (hence the initials EKV) around 1995, based in part on work they had done in the 1980s. Unlike simpler models such as the quadratic model, the EKV model is accurate even when the MOSFET is operating in the subthreshold region (e.g., when Vbulk = Vsource, the MOSFET is in subthreshold when Vgate-source < Vthreshold). In addition, it models many of the specialized effects seen in submicrometre CMOS IC design. See also Transistor models MOSFET Ngspice SPICE References External links Web Page of Christian Enz Web Page of François Krummenacher About Eric Vittoz Main Web Page for the EKV Model Transistor modeling Electronic engineering it:MOSFET#Modello EKV
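As a rough illustration of the model's central idea, the long-channel EKV drain current can be written as the difference of a forward and a reverse component, each given by the interpolation function F(v) = ln²(1 + e^(v/2)), which reduces to an exponential in weak inversion and a quadratic in strong inversion. The Python sketch below renders only this simplified equation; the parameter values are arbitrary illustrative defaults, not taken from any real process or from the official EKV model code.

```python
import numpy as np

def ekv_drain_current(vg, vs, vd, vt0=0.5, n=1.3, i_spec=1e-6, ut=0.0258):
    """Simplified long-channel EKV interpolation (illustrative sketch).

    All voltages are bulk-referenced and in volts. F(v) = ln^2(1 + exp(v/2))
    has an exponential (weak-inversion) and a quadratic (strong-inversion)
    limit. Real EKV implementations add many second-order effects.
    """
    vp = (vg - vt0) / n                                  # pinch-off voltage
    i_f = np.logaddexp(0.0, (vp - vs) / (2 * ut)) ** 2   # forward component
    i_r = np.logaddexp(0.0, (vp - vd) / (2 * ut)) ** 2   # reverse component
    return i_spec * (i_f - i_r)
```

For source voltages well below the pinch-off voltage the forward term dominates and the current grows roughly quadratically with gate overdrive; when the pinch-off voltage falls below the source voltage, the current decays exponentially, matching subthreshold behaviour.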
EKV MOSFET model
Technology,Engineering
236
30,858,027
https://en.wikipedia.org/wiki/Argininosuccinic%20acid
Argininosuccinic acid is a non-proteinogenic amino acid that is an important intermediate in the urea cycle. It is also known as argininosuccinate. Reactions Some cells synthesize argininosuccinic acid from citrulline and aspartic acid, in a reaction catalyzed by argininosuccinate synthetase, and use it as a precursor for arginine in the urea cycle (or citrulline–NO cycle); its subsequent cleavage releases fumarate as a by-product, which enters the TCA cycle. Argininosuccinic acid is thus a precursor to fumarate in the citric acid cycle via argininosuccinate lyase. See also Succinic acid References Basic amino acids Guanidines Dipeptides Urea cycle Tricarboxylic acids
Argininosuccinic acid
Chemistry
181
77,295,256
https://en.wikipedia.org/wiki/Austroboletus%20asper
Austroboletus asper is a species of bolete fungus found in Australia. It was described in 2020 by the mycologists Roy Halling, Katrina Syme, Gregory Bonito, Teresa Lebel, and Nigel Fechner. The species name is derived from the Latin word asper, meaning 'rough'. Austroboletus asper is a mushroom-forming fungus found in the eucalypt forests of southeastern Australia and Tasmania. Its features include a dry cap and a stem with subtle reticulations. The cap has a pale appendiculate margin, and the spores have a length-to-width ratio (Q) of at least 3. According to the state of Queensland, Australia, the species has no conservation significance as of 20 May 2024, meaning that its existence is not considered under threat. References External sources Mysterious Mushroom: Austroboletus asper Revealed - Garigal Country, Youtube by Mary Bell, 26 March 2024 asper Fungi described in 2020 Fungi native to Australia Fungus species Taxa named by Teresa Lebel Taxa named by Roy Halling
Austroboletus asper
Biology
215
7,927,191
https://en.wikipedia.org/wiki/Dimensional%20metrology
Dimensional metrology, also known as industrial metrology, is the application of metrology for quantifying the physical size, form (shape), characteristics, and relational distance of any given feature. History Standardized measurements are essential to technological advancement, and early measurement tools have been found dating back to the dawn of human civilization. Early Mesopotamian and Egyptian metrologists created a set of measurement standards based on body parts, known as anthropic units. These ancient systems of measurement utilized fingers, palms, hands, feet, and paces as intervals. Carpenters and surveyors were some of the first dimensional inspectors, and many specialized units of craftsmen, such as the remen, were worked into a system of unit fractions that allowed for calculations utilizing analytic geometry. Later agricultural measures included feet, yards, paces, cubits, fathoms, rods, cords, perches, stadia, miles and degrees of the Earth's circumference, many of which are still in use. Early measurement tools and standardization Early Egyptian rulers were incremented in units of fingers, palms, and feet based on standardized inscription grids. These grids outlined the standards of measurement as canons of proportion and were made commensurate with Mesopotamian standards based on fingers, hands, and feet. In this system, four palms or three hands measured one foot; ten hands equaled one meter. These standards were used to measure and define property, such as buildings and fields, and were regulated by law for several purposes, such as taxation and infrastructure. They were adopted by the Greeks, Romans, and Persians as legal standards and became the basis of European standards of measure. They were also used to relate length to area with units such as the khet, setat and aroura; area to volume with units such as the artaba; and space to time with units such as the Egyptian minute of march, which was recorded during travel on a river. Modern tools Modern measurement equipment includes hand tools, CMMs (coordinate-measuring machines), machine vision systems, laser trackers, and optical comparators. A CMM is based on CNC technology to automate measurement of Cartesian coordinates using a touch probe, contact scanning probe, or non-contact sensor. Optical comparators are used when physically touching the part is undesirable; components that consist of fragile or malleable materials require measurement using non-contact techniques. Instruments can now build 3D models of a part and its internal features using CT scanning or X-ray imaging. Relative measurements Measurements are often expressed as a size relative to a theoretically perfect part that has its geometry defined in a print or computer model. A print is a blueprint illustrating the defined geometry of a part and its features. Each feature can have a size, a distance from other features, and an allowed tolerance set for each element. The international language used to describe physical parts is called Geometric Dimensioning and Tolerancing (colloquially known as GD&T). Prints can be hand-drawn or automatically generated from a computer CAD model. However, computer-controlled measurement machines like coordinate measuring machines (CMMs) and vision measuring machines (VMMs) can measure a part relative to a CAD model without the need for a print. Typically, this process is done to reverse engineer components. 
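As a concrete illustration of relative measurement, the flatness of a surface probed on a CMM is often evaluated against a reference plane fitted through the measured points. The Python sketch below is illustrative only (the function name and approach are not from any particular metrology package): it fits a least-squares plane with an SVD and reports the spread of the residuals.

```python
import numpy as np

def flatness(points):
    """Estimate flatness from CMM touch points (illustrative sketch).

    Fits a least-squares plane through the points; the singular vector
    with the smallest singular value is the plane normal. Flatness is
    taken as the spread of the signed point distances from that plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                         # normal of the best-fit plane
    residuals = (pts - centroid) @ normal   # signed distances to the plane
    return residuals.max() - residuals.min()
```

Standards define flatness by a minimum-zone criterion; the least-squares estimate above is a common and slightly conservative approximation of it.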
Mechanical engineering Industrial metrology is common in manufacturing quality-control systems, where it helps identify errors in component production and ensure proper performance. Blueprints and 3D CAD models are usually made by a mechanical engineer. See also Position sensor Positioning system References Further reading External links National Institute for Standards and Technology Dimensional Metrology Portal An example of industrial metrology equipment. The Dimensional Metrology Standards Consortium is an ANSI standards organization that identifies needed standards in the field of digital metrology Metrology Dimension Manufacturing
Dimensional metrology
Physics,Engineering
770
9,725,350
https://en.wikipedia.org/wiki/TRPC
TRPC is a family of transient receptor potential cation channels in animals. TRPC channels form the subfamily of channels in humans most closely related to Drosophila TRP channels. Structurally, members of this family possess a number of similar characteristics, including 3 or 4 ankyrin repeats near the N-terminus and a TRP box motif containing the invariant EWKFAR sequence at the proximal C-terminus. These channels are non-selectively permeable to cations, with a prevalence of calcium over sodium that varies among the different members. Many TRPC channel subunits are able to coassemble. The predominant TRPC channels in the mammalian brain are TRPC1, TRPC4 and TRPC5, and they are densely expressed in corticolimbic brain regions such as the hippocampus, prefrontal cortex and lateral septum. These three channels are activated by the metabotropic glutamate receptor 1 agonist dihydroxyphenylglycine. In general, TRPC channels can be activated by phospholipase C stimulation, with some members also activated by diacylglycerol. There is at least one report that TRPC1 is also activated by stretching of the membrane, and TRPC5 channels are activated by extracellular reduced thioredoxin. It has long been proposed that TRPC channels underlie the calcium release-activated channels observed in many cell types. These channels open due to the depletion of intracellular calcium stores. Two other proteins, stromal interaction molecules (STIMs) and Orais, however, have more recently been implicated in this process. STIM1 and TRPC1 can coassemble, complicating the understanding of this phenomenon. TRPC6 has been implicated in late-onset Alzheimer's disease. Role in cardiomyopathies Research on the role of TRPC channels in cardiomyopathies is still in progress. Upregulation of the TRPC1, TRPC3, and TRPC6 genes is seen in heart disease states including fibroblast formation and cardiovascular disease. The TRPC channels are suspected of responding to an overload of hormonal and mechanical stimulation in cardiovascular disease, contributing to pathological remodelling of the heart. TRPC1 channels are activated by receptors coupled to phospholipase C (PLC), mechanical stimulation, and depletion of intracellular calcium stores. TRPC1 channels are found on cardiomyocytes, smooth muscle, and endothelial cells. Upon stimulation of these channels in cardiovascular disease, there is an increase in hypertension and cardiac hypertrophy. TRPC1 channels mediate smooth muscle proliferation in the presence of pathological stimuli, which contributes to hypertension. Mice with myocardial hypertrophy exhibit increased expression of TRPC1. The deletion of the TRPC1 gene in these mice resulted in reduced hypertrophy upon stimulation with hypertrophic stimuli, implying that TRPC1 has a role in the progression of cardiac hypertrophy. TRPC3 and TRPC6 channels are activated by PLC stimulation and diacylglycerol (DAG) production. Both these TRPC channel types play a role in cardiac hypertrophy and vascular disease, like TRPC1. In addition, TRPC3 is upregulated in the atria of patients with atrial fibrillation (AF). TRPC3 regulates angiotensin II-induced cardiac hypertrophy, which contributes to the formation of fibroblasts. Accumulation of fibroblasts in the heart can manifest as AF. Experiments blocking TRPC3 show a decrease in fibroblast formation and reduced AF susceptibility. TRPC1, TRPC3, and TRPC6 channels are all involved in cardiac hypertrophy. 
TRPC channels promote cardiac hypertrophy through activation of the calcineurin pathway and its downstream transcription factor, nuclear factor of activated T-cells (NFAT). Pathological stress or hypertrophic agonists trigger G-protein coupled receptors (GPCRs) and activate PLC to form DAG and inositol trisphosphate (IP3). IP3 promotes the release of internal calcium stores and the influx of calcium via TRPC. When intracellular calcium reaches a threshold, it activates the calcineurin/NFAT pathway. DAG activates the calcineurin/NFAT pathway directly. NFAT translocates into the nucleus and induces transcription of more TRPC genes. This creates a positive feedback loop, leading to a state of hypertrophic gene expression and thus cardiac growth and remodelling of the heart. The involvement of TRPC channels in well-studied signaling pathways and their genetic impact on human diseases make them potential targets for drug therapy. TRPC has been shown to potentiate inhibition in the olfactory bulb circuit, providing a mechanism for improving olfactory abilities. Genes TRPC1, TRPC2, TRPC3, TRPC4, TRPC5, TRPC6, TRPC7 References External links Membrane proteins Ion channels
TRPC
Chemistry,Biology
1,061
25,795,844
https://en.wikipedia.org/wiki/Alexander%20Carnegie%20Kirk
Alexander Carnegie Kirk (16 July 1830 – 5 October 1892) was a British engineer responsible for several major innovations in the shipbuilding, refrigeration, and oil shale industries of the 19th century. Kirk, born in Barry, Angus, received his formal education at the University of Edinburgh and a technical education at plants operated by Robert Napier and Sons. Family Alexander Carnegie Kirk was the eldest son of Rev. John Kirk (died 1858) and Christian Guthrie, née Carnegie, (died 1865). The naturalist John Kirk was his younger brother. A.C. Kirk married Ada Waller at Croydon in 1869 and they had six children. Career In 1850, Kirk began a five-year apprenticeship with Robert Napier and Sons. In 1861, he became chief draughtsman at Maudslay, Sons and Field in London but this seems to have lasted less than a year. Later in 1861 he became an engineering manager in the shale-oil industry, working for James Young. During this employment he developed an oil shale retort and a refrigeration technology, involving the delivery of ether. The latter was to address production problems stemming from summer heat. In 1865 he joined the management of James Aitken and Company, an engine works in Glasgow. In 1870 he was appointed manager of the John C. Elder engineering works. After returning to the Napier firm as a senior partner in 1877, his work was thereafter focused on marine engineering. His triple-expansion engines as designed for the steamship Propontis were unsuccessful, but his subsequent versions of the engine design, particularly those designed for the steamship Aberdeen, are credited as technological breakthroughs. Professional appointments He served as the President of The Institution of Engineers and Shipbuilders in Scotland from 1887 to 1889. Honours In 2020 he was inducted into the Scottish Engineering Hall of Fame. See also Alexander Selligue James Young (Scottish chemist) Pumpherston retort References 1830 births 1892 deaths Presidents of the Institution of Engineers and Shipbuilders in Scotland 19th-century Scottish inventors 19th-century Scottish people 19th-century Scottish engineers People from Angus, Scotland Alumni of the University of Edinburgh British marine engineers Oil shale in Scotland Oil shale technology inventors 19th-century Scottish businesspeople Scottish Engineering Hall of Fame inductees
Alexander Carnegie Kirk
Chemistry
453
4,328,275
https://en.wikipedia.org/wiki/Carbaminohemoglobin
Carbaminohemoglobin (carbaminohaemoglobin BrE) (CO2Hb, also known as carbhemoglobin and carbohemoglobin) is a compound of hemoglobin and carbon dioxide, and is one of the forms in which carbon dioxide exists in the blood. In blood, 23% of carbon dioxide is carried this way, while 70% is converted into bicarbonate by carbonic anhydrase and then carried in plasma, and 7% is carried as free CO2, dissolved in plasma. Structure The structure of carbaminohemoglobin can be described as the binding of carbon dioxide to the amino groups of the globin chains of hemoglobin. This occurs at the N-terminals of the globin chains and at the amino sidebranches of arginine and lysine residues. The process of carbon dioxide binding to hemoglobin is generally known as carbamino formation, which is the source of the protein's name, a combination of carbamino and hemoglobin. Function One of the primary functions of carbaminohemoglobin is to enable the transport of carbon dioxide in the bloodstream. When carbon dioxide is produced as a waste product of cellular metabolism in the tissues, it diffuses into the bloodstream and reacts with hemoglobin. The binding that forms carbaminohemoglobin allows carbon dioxide to be transported from the tissues to the lungs. Once in the lungs, carbon dioxide is released from carbaminohemoglobin and expelled from the body during exhalation. This process is important for maintaining the balance of gases in the blood and ensuring gas exchange between tissues and organs. Interaction Carbaminohemoglobin interacts with carbon dioxide in the respiratory gas exchange process. The interaction involves the binding of carbon dioxide to the protein chains of hemoglobin. The ability of hemoglobin to bind both oxygen and carbon dioxide molecules is what makes it an important protein to the respiratory system in respiratory gas exchange. The interaction between carbon dioxide and hemoglobin helps in the transport of carbon dioxide, produced as a waste product of cellular metabolism, from the tissues to the lungs for elimination. Most importantly, the binding of carbon dioxide to hemoglobin helps buffer blood pH by preventing carbonic acid from decreasing the pH. Although carbaminohemoglobin arises from an interaction with a protein (hemoglobin) found in red blood cells, this interaction takes place only in the bloodstream, and its products can be expelled; carbaminohemoglobin does not interact with DNA, which is in the nucleus. Regulation The formation and dissociation of carbaminohemoglobin are controlled by many factors to guarantee the transport of carbon dioxide in the bloodstream. A list of regulatory factors is given below: Partial Pressure of Carbon Dioxide (pCO2): the measure of carbon dioxide within arterial or venous blood. The amount of carbon dioxide carried in the bloodstream is influenced by the partial pressure of carbon dioxide. In tissues where cellular metabolism produces carbon dioxide, the partial pressure is higher, which leads to the binding of carbon dioxide to hemoglobin. In the lungs, by contrast, the partial pressure of carbon dioxide is lower, which promotes the separation of carbon dioxide from hemoglobin. 
pH: The Bohr effect outlines how the binding and release of oxygen and carbon dioxide by hemoglobin are influenced by fluctuations of pH in the blood. When tissues metabolize, they produce carbon dioxide and acidic products, which lead to a decrease in blood pH. When the pH is low, this promotes the binding of carbon dioxide to hemoglobin and facilitates its transport to the lungs. Conversely, when the pH is higher in the lungs, carbon dioxide is released from hemoglobin. Temperature: Temperature can affect the binding and release of gases by hemoglobin. The effect of temperature on the binding of carbon dioxide to hemoglobin is less noticeable than for other gases, but this factor can still influence the overall regulation of gas exchange. Concentration of Bicarbonate (HCO3−): A high percentage of the carbon dioxide in the bloodstream is transported in the form of bicarbonate ions. Carbonic anhydrase catalyzes the conversion of carbon dioxide and water into carbonic acid, which breaks down into bicarbonate and hydrogen ions. This breakdown occurs in red blood cells. Ultimately, the concentration of bicarbonate ions in the bloodstream affects the formation of carbaminohemoglobin in the body. Synthesis When the tissues release carbon dioxide into the bloodstream, around 10% is dissolved into the plasma. The rest of the carbon dioxide is carried either directly or indirectly by hemoglobin. Approximately 10% of the carbon dioxide carried by hemoglobin is in the form of carbaminohemoglobin. This carbaminohemoglobin is formed by the reaction between carbon dioxide and an amino (-NH2) residue from the globin molecule, resulting in the formation of a carbamino residue (-NH.COO−). The rest of the carbon dioxide is transported in the plasma as bicarbonate anions. Mechanism When carbon dioxide binds to hemoglobin, carbaminohemoglobin is formed, lowering hemoglobin's affinity for oxygen via the Bohr effect. The reaction occurs between a carbon dioxide molecule and an amino residue. In the absence of oxygen, unbound hemoglobin molecules have a greater chance of becoming carbaminohemoglobin. The Haldane effect relates to the increased affinity of deoxygenated hemoglobin for CO2: offloading of oxygen to the tissues thus results in increased affinity of the hemoglobin for carbon dioxide, which the body needs to get rid of and which can then be transported to the lungs for removal. Because the formation of this compound generates hydrogen ions, hemoglobin is needed to buffer them. Hemoglobin can bind four molecules of carbon dioxide. The carbon dioxide molecules form a carbamate with the four terminal amine groups of the four protein chains in the deoxy form of the molecule. Thus, one hemoglobin molecule can transport four carbon dioxide molecules back to the lungs, where they are released when the molecule changes back to the oxyhemoglobin form. Hydrogen ion and oxygen-carbon dioxide coupling When carbon dioxide diffuses as a dissolved gas from the tissue capillaries, it binds to the α-amino terminus of the globin chains, forming carbaminohemoglobin. Carbaminohemoglobin is able to directly stabilise the T conformation as part of the carbon dioxide Bohr effect. Deoxyhemoglobin in turn increases the uptake of carbon dioxide by favouring the formation of bicarbonate as well as carbaminohemoglobin, through the Haldane effect. 
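The bicarbonate buffering described above is commonly summarized by the Henderson–Hasselbalch equation, pH = pKa + log10([HCO3−]/(s · pCO2)). A minimal Python illustration using textbook constants (pKa ≈ 6.1 and a CO2 solubility s of about 0.03 mmol/L per mmHg; the values are illustrative, not clinical guidance):

```python
import math

def blood_ph(bicarbonate_mmol_per_l, pco2_mmhg):
    """Henderson-Hasselbalch estimate of plasma pH (textbook constants)."""
    return 6.1 + math.log10(bicarbonate_mmol_per_l / (0.03 * pco2_mmhg))

print(round(blood_ph(24, 40), 2))  # ~7.40 for normal arterial values
```

With normal arterial values (24 mmol/L bicarbonate, pCO2 of 40 mmHg) this gives the familiar pH of about 7.4; a rise in pCO2 with unchanged bicarbonate, as in respiratory acidosis, lowers the computed pH.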
Disease association Dysfunctional or altered levels of carbaminohemoglobin do not generally cause disease or disorders. Carbaminohemoglobin is a part of the carbon dioxide transport process in the body, and its levels can decrease and increase based on the factors that regulate it. Carbaminohemoglobin can be associated with disease when a change in its level is caused by a pre-existing condition or an imbalance in the respiratory and metabolic systems of the human body. Such conditions include the following: Respiratory acidosis: This condition is characterized by a build-up of carbon dioxide in the blood, which leads to a drop in the blood's pH. It occurs when there is an impairment in the gas exchange process, such as respiratory failure. Hypoventilation: This condition can result in higher levels of carbaminohemoglobin. It can be caused by many factors, such as central nervous system disorders and even some medications. Biological Importance Carbaminohemoglobin plays an important role in the transport of carbon dioxide in the blood and is biologically important in several ways: Transporting Carbon Dioxide: This process allows for the transport of carbon dioxide from the tissues to the lungs. It is essential for maintaining the balance of gases in the bloodstream and guaranteeing the removal of waste carbon dioxide from the body. Buffering Blood pH: The binding of carbon dioxide to hemoglobin helps buffer blood pH. When tissues produce carbon dioxide, the increase in acidity is reduced by forming bicarbonate ions. This buffering process helps prevent a decrease in pH and maintains a stable environment. Facilitating Gas Exchange: Hemoglobin facilitates the exchange of gases in the lungs and tissues. In the lungs, oxygen binds to hemoglobin and carbon dioxide is released. In the tissues, carbon dioxide binds to form carbaminohemoglobin and oxygen is released. This exchange process is important because tissues need oxygen supplied and carbon dioxide removed. See also Hemoglobin Blood Carbamic acid Carbamino References Further reading External links Biomolecules Hemoglobins
Carbaminohemoglobin
Chemistry,Biology
2,012
33,349,956
https://en.wikipedia.org/wiki/Variator%20%28variable%20valve%20timing%29
Variable valve timing (VVT) is a system for varying the valve opening of an internal combustion engine. This allows the engine to deliver high power, but also to work tractably and efficiently at low power. There are many systems for VVT, which involve changing either the relative timing, the duration or the opening of the engine's inlet and exhaust valves. One of the first practical VVT systems used a variator to change the phase of the camshaft and valves. This simple system cannot change the duration of the valve opening, or the valves' lift. Later VVT systems, such as the helical camshaft or the movable-fulcrum systems, could change these factors too. Despite this limitation, the variator is a relatively simple device to add to an existing engine, and so variators remain in service today. As the benefit of the variator relies on changing the relative timing between inlet and exhaust, variator systems are only applied to double overhead camshaft engines. A variator system that moved a single camshaft for both inlet and exhaust would be possible, but would have no performance benefit. Alfa Romeo system Alfa Romeo was the first manufacturer to use a variable valve timing system in production cars (US Patent 4,231,330). The 1980 Alfa Romeo Spider 2.0 L had a mechanical VVT system in SPICA fuel-injected cars sold in the US. Later this was also used in the 1983 Alfetta 2.0 Quadrifoglio Oro models as well as other cars. The technique derives from work carried out in the 1970s by Alfa Romeo engineer Giampaolo Garcea, and in Italian the device is termed variatore di fase. The Alfa Romeo Twin Spark engine, introduced in the 1987 Alfa Romeo 75, also uses variable valve timing. The Alfa system varies the phase (not the duration) of the cam timing and operates on the inlet camshaft. Applications 1980 Alfa Romeo Spider 1987 Alfa Romeo 75 Twinspark 1991 Alfa Romeo 155 1.8l and 2.0l petrol engines 1997 Alfa Romeo 156 with the 1.6l, 1.8l and 2.0l petrol engines 1998 Alfa Romeo GTV and Spider with the 1.8l and 2.0l petrol engines 2000 Alfa Romeo 147 with the 1.6l and 2.0l petrol engines, except for the model with the 105 bhp engine 2004 Alfa Romeo GT with the 1.8l and 2.0l petrol engines 2001 Fiat Stilo with the 1.8l or 2.4l engines Volkswagen system Volkswagen uses a variator system with two variators, one for each camshaft. Like the Alfa Romeo system, these are electrically controlled hydraulic units, mounted in the camshaft's timing belt pulley. These systems are fitted to the Volkswagen VR5 and VR6 engines, and also to the W8 and W12 engines. The multiple-bank W engines have four variators in total, one for each camshaft. The Volkswagen variator is referred to as a 'fluted variator', owing to the shape of the hydraulic components. Unlike the Alfa Romeo system with its helical splines and indirect actuation, the Volkswagen system has a direct rotational action. The internal components of the variator resemble a paddle wheel inside a loose casing, so that it is free to move a few degrees from side to side. By applying hydraulic pressure on one side of these paddles, a phase shift is achieved. The hydraulic fluid is engine oil, controlled by a solenoid valve mounted on the cylinder head and controlled by the ECU. A Hall effect sensor also monitors the camshaft position. Other variator-based VVT systems include Variable Cam Timing (Ford) and VANOS (BMW). References Variable valve timing Engine technology
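Since a variator shifts cam phase without changing duration or lift, its effect on valve events reduces to adding the same offset to the opening and closing angles. The Python sketch below is purely illustrative; the function name and the example figures are hypothetical, not from any manufacturer's documentation.

```python
def phased_valve_events(open_deg, close_deg, phase_deg):
    """Shift valve events by a cam-phaser angle, in crank degrees.

    A variator changes phase only: duration (close_deg - open_deg) is
    preserved. Positive phase_deg retards the event; negative advances it.
    """
    return open_deg + phase_deg, close_deg + phase_deg

# Example: advance a hypothetical intake event (opens 10 deg before TDC,
# closes 230 deg later) by 15 crank degrees.
print(phased_valve_events(-10, 230, -15))  # (-25, 215): duration still 240
```

A system such as the helical camshaft mentioned above would need a model that also rescales the duration, which a pure phaser cannot do.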
Variator (variable valve timing)
Technology
777
67,357,922
https://en.wikipedia.org/wiki/Trichromium%20silicide
Trichromium silicide is an inorganic compound of chromium and silicon with the chemical formula Cr3Si. References Chromium compounds Group 6 silicides
Trichromium silicide
Chemistry
37
1,011,270
https://en.wikipedia.org/wiki/Bourbaki%E2%80%93Witt%20theorem
In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed-point theorem for partially ordered sets. It states that if X is a non-empty chain-complete poset and f : X → X is such that f(x) ≥ x for all x, then f has a fixed point. Such a function f is called inflationary or progressive. Special case of a finite poset If the poset X is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates x0 ≤ f(x0) ≤ f(f(x0)) ≤ …, where x0 is any element of X, is monotone increasing. By the finiteness of X, it stabilizes: f^n(x0) = f^(n+1)(x0) for n sufficiently large. It follows that x∞ = f^n(x0) is a fixed point of f. Proof of the theorem Pick some y ∈ X. Define a function K recursively on the ordinals as follows: K(0) = y and K(α+1) = f(K(α)). If β is a limit ordinal, then by construction {K(α) : α < β} is a chain in X. Define K(β) = sup{K(α) : α < β}. This is now an increasing function from the ordinals into X. It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so K(α+1) = K(α) for some α; that is, f(K(α)) = K(α). So letting x = K(α), we have our desired fixed point. Q.E.D. Applications The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where X is chain complete and has no maximal element. Let g be a choice function on the non-empty subsets of X. Define a function f : X → X by f(x) = g({y : y > x}). This is allowed as, by assumption, the set {y : y > x} is non-empty. Then f(x) > x, so f is an inflationary function with no fixed point, contradicting the theorem. This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma. Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions. It is also used to define recursive data types, e.g. linked lists, in domain theory. See also Kleene fixed-point theorem for Scott-continuous functions Knaster–Tarski theorem for complete lattices References Order theory Fixed-point theorems Theorems in the foundations of mathematics Articles containing proofs
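For the finite special case, the stabilization argument translates directly into code; a minimal Python sketch, assuming f is inflationary on a finite poset so that the iteration must terminate:

```python
def fixed_point(f, x):
    """Iterate an inflationary map until it stabilizes.

    On a finite poset, x <= f(x) <= f(f(x)) <= ... cannot increase
    forever, so some iterate satisfies f(x) == x and is returned.
    """
    while f(x) != x:
        x = f(x)
    return x

# Example on the finite chain {0, 1, ..., 10} with f(x) = min(x + 2, 10):
print(fixed_point(lambda x: min(x + 2, 10), 3))  # 10
```

The transfinite iteration in the general proof has no direct computational analogue, which is why the infinite case needs Hartogs' lemma rather than a loop.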
Bourbaki–Witt theorem
Mathematics
528
16,781,820
https://en.wikipedia.org/wiki/HD%2033283%20b
HD 33283 b is an exoplanet orbiting the star HD 33283. The mass of the planet is about one-third that of Jupiter, or about the same as Saturn's. However, the planet orbits very close to the star, taking only 18 days to complete its orbit with an average speed of 86.5 km/s (311,400 km/h). Despite this, its orbit is eccentric, bringing it as close as 0.075 AU to the star and as far away as 0.215 AU. See also HD 33564 b HD 86081 b HD 224693 b References External links Lepus (constellation) Exoplanets discovered in 2006 Giant planets Exoplanets detected by radial velocity
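The quoted average orbital speed can be sanity-checked with a rough circular-orbit estimate (illustrative only; the true mean speed differs slightly because the orbit is eccentric):

```python
import math

AU_KM = 1.495978707e8          # kilometres per astronomical unit
a_au = (0.075 + 0.215) / 2     # semi-major axis from the quoted extremes, in AU
period_s = 18 * 86400          # ~18-day orbital period, in seconds

v_mean = 2 * math.pi * a_au * AU_KM / period_s
print(round(v_mean, 1))        # ~87.6 km/s, close to the quoted 86.5 km/s
```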
HD 33283 b
Astronomy
151
62,641
https://en.wikipedia.org/wiki/Vector%20field
In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space R^n. A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point. The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow). A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space. Likewise, in n coordinates, a vector field on a domain in n-dimensional Euclidean space R^n can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector). More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field. Definition Vector fields on subsets of Euclidean space Given a subset S of R^n, a vector field is represented by a vector-valued function V : S → R^n in standard Cartesian coordinates (x1, …, xn). If each component of V is continuous, then V is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. One standard notation is to write e1, …, en for the unit vectors in the coordinate directions. In these terms, every smooth vector field on an open subset S of R^n can be written as V(x) = V1(x) e1 + ⋯ + Vn(x) en for some smooth functions V1, …, Vn on S. The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, f ↦ V1 ∂f/∂x1 + ⋯ + Vn ∂f/∂xn, given by differentiating in the direction of the vector field. Example: The vector field V(x, y) = −y e1 + x e2 describes a counterclockwise rotation around the origin in R^2. 
To show that this vector field is rotationally invariant, one computes that rotating a point and then applying V gives the same vector as applying V and then rotating: V(Rθ p) = Rθ V(p) for every rotation Rθ of the plane. Given vector fields V, W defined on S and a smooth function f defined on S, the operations of scalar multiplication fV and vector addition V + W make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise. Coordinate transformation law In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector. Thus, suppose that (x1, …, xn) is a choice of Cartesian coordinates, in terms of which the components of the vector V are (V1, …, Vn), and suppose that (y1, …, yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law V′i = Σj (∂yi/∂xj) Vj. (1) Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law (1) relating the different coordinate systems. Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes. Vector fields on manifolds Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M. More precisely, a vector field F is a mapping from M into the tangent bundle TM so that p ∘ F is the identity mapping, where p denotes the projection from TM to M. In other words, a vector field is a section of the tangent bundle. An alternative definition: A smooth vector field on a manifold M is a linear map X : C∞(M) → C∞(M) such that X is a derivation: X(fg) = f X(g) + X(f) g for all f, g in C∞(M). If the manifold M is smooth or analytic, that is, the change of coordinates is smooth (analytic), then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C∞(M, TM) (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by 𝔛(M) (a fraktur "X"). 
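The rotational invariance of the example field V(x, y) = (−y, x) can also be checked numerically; in the NumPy sketch below (purely illustrative), rotating the point first and rotating the output vector afterwards give the same result:

```python
import numpy as np

def rotation_field(p):
    """The plane vector field V(x, y) = (-y, x): counterclockwise rotation."""
    x, y = p
    return np.array([-y, x])

theta = 0.7  # any angle works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
p = np.array([1.0, 2.0])

# Equivariance under rotations: V(R p) == R V(p).
assert np.allclose(rotation_field(R @ p), R @ rotation_field(p))
```

The identity holds because V is the linear map p ↦ Jp, where J is rotation by 90°, and any two plane rotations commute.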
Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electric field. A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases. Gradient field in Euclidean spaces Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇). A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that V = ∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn). The associated flow is called the gradient flow, and is used in the method of gradient descent. The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero: ∮γ V(x) · dx = ∮γ ∇f(x) · dx = f(γ(1)) − f(γ(0)) = 0. Central field in Euclidean spaces A C∞-vector field over R^n \ {0} is called a central field if V(T(p)) = T(V(p)) for every orthogonal transformation T in O(n, R), where O(n, R) is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0. The point 0 is called the center of the field. Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient. Operations on vector fields Line integral A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve. The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous. Given a vector field V and a curve γ, parametrized by t in [a, b] (where a and b are real numbers), the line integral is defined as ∫γ V(x) · dx = ∫ab V(γ(t)) · γ′(t) dt. To show vector field topology one can use line integral convolution. Divergence The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence is defined by div F = ∇ · F = ∂F1/∂x + ∂F2/∂y + ∂F3/∂z, with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem. The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors. Curl in three dimensions The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. 
In three dimensions, it is defined by curl F = ∇ × F = (∂F3/∂y − ∂F2/∂z) e1 + (∂F1/∂z − ∂F3/∂x) e2 + (∂F2/∂x − ∂F1/∂y) e3. The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem. Index of a vector field The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity. Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n−1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere S^(n−1). This defines a continuous map from S to S^(n−1). The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself. The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)^k around a saddle that has k contracting dimensions and n−k expanding dimensions. The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes. For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem. For a vector field on a compact manifold with finitely many zeroes, the Poincaré–Hopf theorem states that the vector field's index is the manifold's Euler characteristic. Physical intuition Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory. In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field. In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm. Flow curves Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity. 
Given a vector field V defined on S, one defines curves γ(t) on S such that for each t in an interval I, γ′(t) = V(γ(t)). By the Picard–Lindelöf theorem, if V is Lipschitz continuous there is a unique C1-curve γx for each point x in S so that, for some ε > 0, γx(0) = x and γx′(t) = V(γx(t)) for t in (−ε, +ε). The curves γx are called integral curves or trajectories (or less commonly, flow lines) of the vector field V and partition S into equivalence classes. It is not always possible to extend the interval (−ε, +ε) to the whole real number line. The flow may for example reach the edge of S in a finite time. In two or three dimensions one can visualize the vector field as giving rise to a flow on S. If we drop a particle into this flow at a point p it will move along the curve γp in the flow depending on the initial point p. If p is a stationary point of V (i.e., the vector field is equal to the zero vector at the point p), then the particle will remain at p. Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups. Complete vector fields By definition, a vector field on M is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If X is a complete vector field on M, then the one-parameter group of diffeomorphisms generated by the flow along X exists for all time; it is described by a smooth mapping R × M → M. On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field V on the real line is given by V(x) = x^2. For, the differential equation x′(t) = x(t)^2, with initial condition x(0) = x0, has as its unique solution x(t) = x0/(1 − t x0) if x0 ≠ 0 (and x(t) = 0 for all t if x0 = 0). Hence for x0 ≠ 0, x(t) is undefined at t = 1/x0, so x cannot be defined for all values of t. The Lie bracket The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions f: [X, Y](f) = X(Y(f)) − Y(X(f)). f-relatedness Given a smooth function between manifolds, f : M → N, the derivative is an induced map on tangent bundles, f∗ : TM → TN. Given vector fields V on M and W on N, we say that W is f-related to V if the equation f∗ ∘ V = W ∘ f holds. If Vi is f-related to Wi, i = 1, 2, then the Lie bracket [V1, V2] is f-related to [W1, W2]. Generalizations Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields. Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras. See also Eisenbud–Levine–Khimshiashvili signature formula Field line Field strength Gradient flow and balanced flow in atmospheric dynamics Lie derivative Scalar field Time-dependent vector field Vector fields in cylindrical and spherical coordinates Tensor fields Slope field References Bibliography External links Online Vector Field Editor Vector field — Mathworld Vector field — PlanetMath 3D Magnetic field viewer Vector fields and field lines Vector field simulation An interactive application to show the effects of vector fields Differential topology Field Functions and mappings F
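Returning to the flow curves discussed above: integral curves can be approximated numerically. The explicit-Euler sketch below (illustrative, not a production integrator) reproduces the finite-time blow-up of the incomplete field V(x) = x^2 on the real line:

```python
import numpy as np

def integral_curve(V, x0, t_max, dt=1e-3):
    """Approximate the integral curve of V through x0 with Euler steps."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(int(t_max / dt)):
        xs.append(xs[-1] + dt * V(xs[-1]))   # x_{k+1} = x_k + dt * V(x_k)
    return np.array(xs)

# V(x) = x^2 with x(0) = 1: the exact solution 1/(1 - t) blows up at t = 1,
# and the numerical path grows without bound as t_max approaches 1.
path = integral_curve(lambda x: x ** 2, [1.0], 0.9)
print(path[-1])   # far above the initial value, approaching the blow-up
```

A production computation would use an adaptive solver such as scipy.integrate.solve_ivp, which can stop automatically when the solution escapes a prescribed bound.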
Vector field
Physics,Mathematics
3,473
793,343
https://en.wikipedia.org/wiki/Mesangial%20cell
Mesangial cells are specialised cells in the kidney that make up the mesangium of the glomerulus. Together with the mesangial matrix, they form the vascular pole of the renal corpuscle. The mesangial cell population accounts for approximately 30-40% of the total cells in the glomerulus. Mesangial cells can be categorized as either extraglomerular mesangial cells or intraglomerular mesangial cells, based on their location relative to the glomerulus. The extraglomerular mesangial cells are found between the afferent and efferent arterioles towards the vascular pole of the glomerulus. The extraglomerular mesangial cells are adjacent to the intraglomerular mesangial cells, which are located inside the glomerulus and in between the capillaries. The primary function of mesangial cells is to remove trapped residues and aggregated protein from the basement membrane, thus keeping the filter free of debris. The contractile properties of mesangial cells have been shown to be insignificant in changing the filtration pressure of the glomerulus. Structure Mesangial cells have irregular shapes, with flattened, cylinder-like cell bodies and processes at both ends containing actin, myosin and actinin, giving mesangial cells contractile properties. The anchoring filaments from mesangial cells to the glomerular basement membrane can alter capillary flow by changing the glomerular ultrafiltration surface area. Extraglomerular mesangial cells are in close connection with afferent and efferent arteriolar cells through gap junctions, allowing for intercellular communication. Mesangial cells are separated by intercellular spaces containing extracellular matrix, called the mesangial matrix, which is produced by the mesangial cells themselves. The mesangial matrix provides structural support for the mesangium and is composed of glomerular matrix proteins such as collagen IV (α1 and α2 chains), collagen V, collagen VI, laminin A, B1, B2, fibronectin, and proteoglycans. Development It is unclear whether mesangial cells originate from mesenchymal or stromal cells. However, there is evidence suggesting that they originate elsewhere, outside of the glomerulus, and then migrate into the glomerulus during development. Human foetal and infant kidneys stained for alpha smooth muscle actin (α-SMA), a marker for mesangial cells, demonstrated that α-SMA-positive mesenchymal cells migrate towards the glomerulus and at a later stage can be found within the mesangium. It is possible that they share the same origin as supporting cells such as pericytes and vascular smooth muscle cells, or are even a type of specialised vascular smooth muscle cell. Function Formation of capillary loops during development During development, mesangial cells are important in the formation of the convoluted capillaries that allow efficient diffusion. Endothelial precursor cells secrete platelet-derived growth factor (PDGF)-B, and mesangial cells have receptors for PDGF. This induces mesangial cells to attach to endothelial cells, causing developing blood vessels to loop, resulting in convoluted capillaries. Mice lacking the growth factor PDGF-B or PDGFRβ do not develop mesangial cells. When mesangial cells are absent, the blood vessel becomes a single dilated vessel with up to a 100-fold decrease in surface area. The transcription factor for PDGFRβ, Tbx18, is crucial for the development of mesangial cells. Without Tbx18, the development of mesangial cells is compromised, resulting in the formation of dilated loops. 
Mesangial cell progenitors are also a target of PDGF-B and can be selected for by this signal to then develop into mesangial cells.

Interactions with other renal cells

Mesangial cells form a glomerular functional unit with glomerular endothelial cells and podocytes through interactions of molecular signalling pathways which are essential for the formation of the glomerular tuft. Mesangial cells aid filtration by constituting part of the glomerular capillary tuft structure that filters fluids to produce urine. Communication between mesangial cells and vascular smooth muscle cells via gap junctions helps regulate the process of tubuloglomerular feedback and urine formation. Damage to mesangial cells using the anti-Thy-1.1 antibody, which is specific to mesangial cells, causes the loss of the tubuloglomerular feedback-mediated vasoconstriction of arterioles.

Contractions regulate capillary flow

Mesangial cells can contract and relax to regulate capillary flow. This is regulated by vasoactive substances. Contraction of mesangial cells is dependent on cell membrane permeability to calcium ions, and relaxation is mediated by paracrine factors, hormones and cAMP. In response to capillary stretching, mesangial cells can respond by producing several growth factors: TGF-β1, VEGF and connective tissue growth factor.

Removal of macromolecules

The mesangium is exposed to macromolecules from the capillary lumen, as the two are separated only by fenestrated endothelium without a basement membrane. Mesangial cells play a role in restricting macromolecules from accumulating in the mesangial space by receptor-independent uptake processes of phagocytosis and micro- and macro-pinocytosis, or by receptor-dependent processes; the material is then transported along the mesangial stalk. The size, charge, concentration, and affinity for mesangial cell receptors of a macromolecule affect how it is removed. Triglycerides may undergo pinocytosis, and antibody IgG complexes may lead to the activation of adhesion molecules and chemokines by mesangial cells. Mesangial cells also regulate glomerular filtration.

Clinical significance

Diabetic nephropathy

The expansion of the mesangial matrix is one characteristic of diabetic nephropathy, although the disease also involves interactions with other cells, including podocytes and endothelial cells. Mesangial expansion occurs due to increased deposition of extracellular matrix proteins, for example fibronectin, into the mesangium. Accumulation of extracellular matrix proteins then occurs due to insufficient degradation by matrix metalloproteinases. Increased glucose levels result in the activation of metabolic pathways leading to increased oxidative stress. This in turn results in the over-production and accumulation of advanced glycosylation end products, which are responsible for enhancing the risk of developing glomerular diseases. Mesangial cells grown on advanced glycosylation end product-modified matrix proteins demonstrate increased production of fibronectin and a decrease in proliferation. These factors eventually lead to the thickening of the glomerular basement membrane, mesangial matrix expansion, and then glomerulosclerosis and fibrosis. Mesangial pathologies may also develop during the early phase of diabetes. Glomerular hypertension causes mesangial cells to stretch, which induces the expression of GLUT1, leading to increased cellular glucose.
The repetition of stretching and relaxation cycle of mesangial cells due to hypertension increases mesangial cell proliferation and the production of extracellular matrix which can then accumulate and lead to glomerular disease. See also List of human cell types derived from the germ layers List of distinct cell types in the adult human body References Cell biology Human cells
Mesangial cell
Biology
1,587
7,890,616
https://en.wikipedia.org/wiki/Boromycin
Boromycin is a bactericidal polyether-macrolide antibiotic. It was initially isolated from Streptomyces antibioticus, and is notable for being the first natural product found to contain the element boron. It is effective against most Gram-positive bacteria, but is ineffective against Gram-negative bacteria. Boromycin kills bacteria by negatively affecting the cytoplasmic membrane, resulting in the loss of potassium ions from the cell. Boromycin has not been approved as a drug for medical use.

Discovery

Boromycin was discovered by scholars of the Institute for Special Botany and the Organic Chemical Laboratories at the Swiss Federal Institute of Technology, Zurich, Switzerland, who, in 1967, published a study called "Metabolic products of microorganisms" in the journal Helvetica Chimica Acta. In this article, the authors described how a new strain of Streptomyces antibioticus produces a novel antibiotic, the first boron-containing organic compound found in nature. The authors called this new compound boromycin and characterized it as a complex of boric acid with a tetradentate organic complexing agent that yields, on hydrolysis, D-valine, boric acid, and a polyhydroxy compound of the macrolide type.

General information

Boromycin has potential medical uses as an antibiotic for treating Gram-positive bacterial infections, coccidiosis, and certain protozoal infections, but its efficacy and safety in clinical settings have not been determined. Boromycin has not been approved as a drug for medical use in the USA (by the FDA), Europe, Canada, Japan, Russia, China, or the former Soviet Union. Boromycin is a boron-containing compound produced by Streptomyces antibioticus, isolated from the soil of Ivory Coast. It exhibits antimicrobial properties, inhibiting the growth of Gram-positive bacteria while having no effect on certain Gram-negative bacteria and fungi. Boromycin has also shown activity against protozoa of the genera Plasmodium and Babesia. In addition to its antimicrobial effects, boromycin has been studied for the treatment and prevention of coccidiosis in susceptible poultry. It has been reported to inhibit the replication of HIV-1 and the synthesis of proteins, RNA, and DNA in whole cells of Bacillus subtilis. Boromycin binds to the cytoplasmic membrane of the cell and is antagonized by surface-active compounds. It is bound to lipoprotein and does not influence the K+, Na+-ATPase of the cytoplasmic membrane. The removal of boric acid from the boromycin molecule leads to a loss of antibiotic activity. There are minor products of boromycin fermentation, which differ in the position of acylation. Feeding experiments with specific isotopes in the production strain Sorangium cellulosum have shed light on the biosynthesis of the tartrolons, which are closely related to boromycin and aplasmomycin.

Research

Boron, the essential trace element found in boromycin, benefits plants, animals, and humans. Boron-containing compounds such as boromycin have gained attention for their potential medicinal applications. Researchers are exploring the incorporation of boron into biologically active molecules, including for boron neutron capture therapy of brain tumors. The role of the boron atom in neutron capture therapy for malignant brain tumors is to target tumor cells selectively. When a non-radioactive boron isotope (10B) is administered and accumulates in tumor cells, these cells can be selectively destroyed when irradiated with low-energy thermal neutrons.
The collision of neutrons with 10B releases high linear energy transfer particles, such as α-particles and lithium-7 nuclei, which can selectively destroy the tumor cells while sparing surrounding normal cells. Some boron-containing biomolecules may also act as signaling molecules interacting with cell surfaces. Anti-HIV activity A 1996 study suggests that boromycin has anti-HIV activity in in vitro laboratory experiments. In that study, boromycin inhibited the replication of both clinically isolated HIV-1 strains and cultured strains. The mechanism of action was believed to involve blocking the later stage of HIV infection, specifically the maturity step for replication of the HIV molecule. While the study provides promising results in a controlled laboratory setting, it is important to note that in vitro experiments do not always accurately predict the effectiveness of a compound in living organisms. Strong evidence should be accumulated to determine boromycin's actual in vivo anti-HIV activity in a living human organism. Accumulating such evidence typically involves preclinical studies in animal models to assess safety, efficacy, and pharmacokinetics before progressing to clinical trials in humans. The lack of replication of the 1996 study's findings by other studies suggests a lack of confirmation regarding the anti-HIV activity of boromycin. This could be due to potential methodological limitations in the original study, such as variations in experimental conditions or difficulties in isolating and purifying boromycin. It is also possible that the initial study produced a false positive result, where the observed anti-HIV activity resulted from chance or experimental artifacts rather than a true effect. Additionally, publication bias may play a role, as positive or novel findings are more likely to be published, potentially leading to an incomplete picture of the overall research on boromycin's anti-HIV activity. Studies are needed to address these factors and determine the true effectiveness of boromycin as an in vivo anti-HIV agent. Anti-plasmodium activity In a 2021 study, boromycin showed activity against Plasmodium falciparum and Plasmodium knowlesi, two species of malaria parasites. It demonstrated rapid killing of asexual stages of both species, including multidrug-resistant strains, at low concentrations. Additionally, boromycin exhibited activity against Plasmodium falciparum stage V gametocytes. However, other studies have not confirmed these results and should be interpreted cautiously. Additional scientific investigation and validation are required to establish the efficacy of boromycin as a potential antimalarial candidate. It is essential to conduct further studies to confirm and substantiate the findings, ensuring reliable and reproducible results. The potential of boromycin in the context of malaria treatment warrants continued research and rigorous examination to assess its effectiveness and potential implications for therapeutic applications fully. Activity against intracellular protozoal parasites A 2021 study by scholars from Central Luzon State University, Philippines, and Washington State University, USA, showed the activity of boromycin against Toxoplasma gondii and Cryptosporidium parvum, which are intracellular protozoal parasites affecting humans and animals. The study found that boromycin effectively inhibited the intracellular proliferation of both parasites at low concentrations. 
However, these preliminary results have not yet been confirmed by further studies. To validate the results and understand the potential of boromycin as a therapeutic option for the treatment of toxoplasmosis and cryptosporidiosis, it is critical to conduct studies to confirm the activity of boromycin against intracellular protozoan parasites in living host organisms. References Macrolide antibiotics Tetrahydroxyborate esters Secondary alcohols Alkene derivatives Boron heterocycles Spiro compounds Lactones Heterocyclic compounds with 4 rings
Boromycin
Chemistry
1,583
16,859,086
https://en.wikipedia.org/wiki/EIF1AY
Eukaryotic translation initiation factor 1A, Y-chromosomal is a protein that in humans is encoded by the EIF1AY gene. Like its X-chromosomal counterpart EIF1AX, it encodes an isoform of eukaryotic translation initiation factor 1A (EIF1A). EIF1A is required for the binding of the 43S complex (a 40S subunit, eIF2/GTP/Met-tRNAi and eIF3) to the 5' end of capped RNA. It has one amino acid difference (M50L) from EIF1AX. References Further reading
EIF1AY
Chemistry
132
70,627,195
https://en.wikipedia.org/wiki/Trichophyton%20erinacei
Trichophyton erinacei is a species in the fungal genus Trichophyton that is associated with hedgehogs. The fungus is normally isolated from the quills and underbelly of hedgehogs. Common symptoms of infection include crusting around the face and loss of spines. Trichophyton erinacei is also known to infect humans through direct contact with hedgehogs, which transmits the fungus; infections can also occur through indirect contact.

References

Arthrodermataceae
Animal fungal diseases
Fungus species
Trichophyton erinacei
Biology
103
339,110
https://en.wikipedia.org/wiki/Bulat%20steel
Bulat is a type of steel alloy known in Russia from medieval times; it was regularly mentioned in Russian legends as the material of choice for cold steel. The name is a Russian transliteration of the Persian word pulad, meaning steel. This type of steel was used by the armies of nomadic peoples. Bulat steel was the main type of steel used for swords in the armies of Genghis Khan. Bulat steel is generally agreed to be a Russian name for wootz steel, the production method of which has been lost for centuries, and the bulat steel used today makes use of a more recently developed technique.

History

The secret of bulat manufacturing had been lost by the beginning of the 19th century. According to legend, the process involved dipping the finished weapon into a vat containing a special liquid of which spiny restharrow extract was a part (the plant's name in Russian, stalnik, reflects its historical role), then holding the sword aloft while galloping on a horse, allowing it to dry and harden against the wind. Pavel Anosov eventually managed to duplicate the qualities of that metal in 1838, when he completed ten years of study into the nature of Damascus steel swords. Anosov had entered the Saint Petersburg Mine Cadet School in 1810, where a Damascus steel sword was stored in a display case. He became enchanted with the sword and with stories of such blades slashing through their European counterparts. In November 1817, he was sent to the factories of the Zlatoust mining region in the southern Urals, where he was soon promoted to inspector of the "weapon decoration department". Here he again came into contact with Damascus steel of European origin (which was in fact pattern-welded steel, and not at all similar), but quickly found that this steel was quite inferior to the original forged in the Middle East from wootz steel from India. Anosov had been working with various quenching techniques, and decided to attempt to duplicate Damascus steel with quenching. He eventually developed a methodology that greatly increased the hardness of his steels. Bulat became popular in cannon manufacturing until the Bessemer process was able to make the same quality of steel for far less money.

Structure

Carbon steel consists of two components: pure iron, in the form of ferrite, and cementite or iron carbide, a compound of iron and carbon. Cementite is very hard and brittle; its hardness is about 640 on the Brinell hardness test, whereas that of ferrite is only about 200. The amount of carbon and the cooling regimen determine the crystalline and chemical composition of the final steel. In bulat, the slow cooling process allowed the cementite to precipitate as microparticles in between ferrite crystals and arrange in random patterns. The colour of the carbide is dark, while the steel is grey. This mixture is what leads to the famous patterning of Damascus steel. Cementite is essentially a ceramic, which accounts for the sharpness of Damascus (and bulat) steel. Cementite is unstable and breaks down between 600 and 1100 °C into ferrite and carbon, so working the hot metal must be done very carefully.

See also

Toledo steel
Damascus steel
Wootz steel
Noric steel
Tamahagane steel

References

Bibliography

Steels
Steel industry of Russia
History of metallurgy
Bulat steel
Chemistry,Materials_science
686
41,550,247
https://en.wikipedia.org/wiki/Double%20H%20Pipeline
Double H Pipeline is a crude oil pipeline from Dore, North Dakota, to Guernsey, Wyoming. It was designed to carry 100,000 barrels per day (50,000 initially) of crude oil from the North Dakota Bakken formation shale plays as well as Montana and Wyoming oil fields. It participates in a joint tariff transportation arrangement with the Tallgrass Pony Express Pipeline to transport oil from Seiler Station at Baker, Montana, to delivery points at the Phillips 66 Refinery at Ponca City, Oklahoma, and the Deeprock Terminal at Cushing, Oklahoma. Double H Pipeline also delivers to other connecting pipelines at Guernsey, Wyoming.

Overview

The project was proposed by Hiland Crude, LLC, a subsidiary of Hiland Partners that was owned by the Harold Hamm family of Enid, Oklahoma. The 12-inch line was scheduled to begin operating after completion in January 2015. It would connect to the Pony Express Pipeline owned by Tallgrass Energy, linking it with the crude oil hub of Cushing, Oklahoma, and giving it access to lucrative oil markets. In January 2015, it was reported that Hamm was selling the Bakken pipeline network to Kinder Morgan Inc.

See also

Tallgrass Energy Partners
List of oil pipelines
List of oil refineries

References

Further reading

Hiland Partners Planning New Bakken Oil Pipeline; Potential Capacity Of 100,000 b/d To Guernsey, WY, April 8, 2013, by R.T. Dukes, BakkenShale.com

Crude oil pipelines in the United States
Double H Pipeline
Chemistry
292
6,152,185
https://en.wikipedia.org/wiki/Softmax%20function
The softmax function, also known as softargmax or normalized exponential function, converts a vector of $K$ real numbers into a probability distribution of $K$ possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes.

Definition

The softmax function takes as input a vector $\mathbf{z}$ of $K$ real numbers, and normalizes it into a probability distribution consisting of $K$ probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one; and might not sum to 1; but after applying softmax, each component will be in the interval $(0, 1)$, and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities.

Formally, the standard (unit) softmax function $\sigma\colon \mathbb{R}^K \to (0,1)^K$, where $K \ge 1$, takes a vector $\mathbf{z} = (z_1, \dotsc, z_K) \in \mathbb{R}^K$ and computes each component of the vector $\sigma(\mathbf{z}) \in (0,1)^K$ with

$\sigma(\mathbf{z})_i = \frac{e^{z_i}}{\sum_{j=1}^K e^{z_j}}.$

In words, the softmax applies the standard exponential function to each element $z_i$ of the input vector $\mathbf{z}$ (consisting of $K$ real numbers), and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vector $\sigma(\mathbf{z})$ is 1.

The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of $(1, 2, 8)$ is approximately $(0.001, 0.002, 0.997)$, which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8).

In general, instead of $e$ a different base $b > 0$ can be used. As above, if $b > 1$ then larger input components will result in larger output probabilities, and increasing the value of $b$ will create probability distributions that are more concentrated around the positions of the largest input values. Conversely, if $0 < b < 1$ then smaller input components will result in larger output probabilities, and decreasing the value of $b$ will create probability distributions that are more concentrated around the positions of the smallest input values. Writing $b = e^{\beta}$ or $b = e^{-\beta}$ (for real $\beta$) yields the expressions:

$\sigma(\mathbf{z})_i = \frac{e^{\beta z_i}}{\sum_{j=1}^K e^{\beta z_j}} \quad\text{or}\quad \sigma(\mathbf{z})_i = \frac{e^{-\beta z_i}}{\sum_{j=1}^K e^{-\beta z_j}}.$

A value proportional to the reciprocal of $\beta$ is sometimes referred to as the temperature: $\beta = 1/(kT)$, where $k$ is typically 1 or the Boltzmann constant and $T$ is the temperature. A higher temperature results in a more uniform output distribution (i.e. with higher entropy; it is "more random"), while a lower temperature results in a sharper output distribution, with one value dominating. In some fields, the base is fixed, corresponding to a fixed scale, while in others the parameter $\beta$ (or $T$) is varied.

Interpretations

Smooth arg max

The softmax function is a smooth approximation to the arg max function: the function whose value is the index of a vector's largest element. The name "softmax" may be misleading. Softmax is not a smooth maximum (that is, a smooth approximation to the maximum function). The term "softmax" is also used for the closely related LogSumExp function, which is a smooth maximum. For this reason, some prefer the more accurate term "softargmax", though the term "softmax" is conventional in machine learning. This section uses the term "softargmax" for clarity.
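As a concrete illustration of the definition above and of the role of the parameter $\beta$, the following minimal NumPy sketch (not part of the original article; the function name and test values are chosen for illustration) computes the softmax with a configurable inverse temperature. Subtracting the maximum before exponentiating does not change the result, a property discussed under "Mathematical properties" below, but avoids floating-point overflow.

import numpy as np

def softmax(z, beta=1.0):
    """Softmax with inverse temperature beta; beta=1 gives the standard softmax."""
    z = beta * np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # translation invariance makes this shift safe
    return e / e.sum()

print(softmax([1.0, 2.0, 8.0]))             # ~ [0.001, 0.002, 0.997]
print(softmax([1.0, 2.0, 8.0], beta=10.0))  # sharper: mass concentrates on the max
print(softmax([1.0, 2.0, 8.0], beta=0.1))   # higher temperature: closer to uniform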
Formally, instead of considering the arg max as a function with categorical output $1, \dotsc, n$ (corresponding to the index), consider the arg max function with one-hot representation of the output (assuming there is a unique maximum arg):

$\operatorname{arg\,max}(z_1, \dotsc, z_n) = (y_1, \dotsc, y_n) = (0, \dotsc, 0, 1, 0, \dotsc, 0),$

where the output coordinate $y_i = 1$ if and only if $i$ is the arg max of $(z_1, \dotsc, z_n)$, meaning $z_i$ is the unique maximum value of $(z_1, \dotsc, z_n)$. For example, in this encoding $\operatorname{arg\,max}(1, 5, 10) = (0, 0, 1),$ since the third argument is the maximum. This can be generalized to multiple arg max values (multiple equal $z_i$ being the maximum) by dividing the 1 between all max args; formally $y_i = 1/k$, where $k$ is the number of arguments assuming the maximum. For example, $\operatorname{arg\,max}(1, 5, 5) = (0, 1/2, 1/2),$ since the second and third argument are both the maximum. In case all arguments are equal, this is simply $\operatorname{arg\,max}(z, \dotsc, z) = (1/n, \dotsc, 1/n).$ Points $\mathbf{z}$ with multiple arg max values are singular points (or singularities, and form the singular set) – these are the points where arg max is discontinuous (with a jump discontinuity) – while points with a single arg max are known as non-singular or regular points.

With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as $\beta \to \infty$, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg max pointwise, meaning for each fixed input $\mathbf{z}$, as $\beta \to \infty$, $\sigma_{\beta}(\mathbf{z}) \to \operatorname{arg\,max}(\mathbf{z}).$ However, softargmax does not converge uniformly to arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. For example, $\sigma_{\beta}(1, 1.0001) \to (0, 1),$ but $\sigma_{\beta}(1, 0.9999) \to (1, 0),$ and $\sigma_{\beta}(1, 1) = (1/2, 1/2)$ for all $\beta$: the closer the points are to the singular set $(x, x)$, the slower they converge. However, softargmax does converge compactly on the non-singular set.

Conversely, as $\beta \to -\infty$, softargmax converges to arg min in the same way, where here the singular set is points with two arg min values. In the language of tropical analysis, the softmax is a deformation or "quantization" of arg max and arg min, corresponding to using the log semiring instead of the max-plus semiring (respectively min-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization".

It is also the case that, for any fixed $\beta$, if one input $z_i$ is much larger than the others relative to the temperature, $T = 1/\beta$, the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1:

$\sigma(0, 10) = \left(\frac{1}{1 + e^{10}}, \frac{e^{10}}{1 + e^{10}}\right) \approx (0.00005, 0.99995).$

However, if the difference is small relative to the temperature, the value is not close to the arg max. For example, a difference of 10 is small relative to a temperature of 100:

$\sigma_{1/100}(0, 10) = \left(\frac{1}{1 + e^{1/10}}, \frac{e^{1/10}}{1 + e^{1/10}}\right) \approx (0.475, 0.525).$

As $\beta \to \infty$, temperature goes to zero, $T = 1/\beta \to 0$, so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation for the limit behavior.

Probability theory

In probability theory, the output of the softargmax function can be used to represent a categorical distribution – that is, a probability distribution over $n$ different possible outcomes.
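The pointwise convergence to arg max (and to arg min for negative $\beta$) described in the Smooth arg max section above is easy to check numerically. The following short sketch is illustrative and not from the article; the helper name softargmax and the test vector are assumptions.

import numpy as np

def softargmax(z, beta):
    z = beta * np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # shift by the max to avoid overflow
    return e / e.sum()

z = [1.0, 5.0, 10.0]
for beta in (0.1, 1.0, 10.0, -10.0):
    print(beta, np.round(softargmax(z, beta), 4))
# beta = 10.0 gives ~ (0, 0, 1), the one-hot arg max encoding;
# beta = -10.0 gives ~ (1, 0, 0), the arg min encoding.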
Statistical mechanics

In statistical mechanics, the softargmax function is known as the Boltzmann distribution (or Gibbs distribution): the index set $1, \dotsc, k$ are the microstates of the system; the inputs $\varepsilon_i$ are the energies of that state; the denominator is known as the partition function, often denoted by $Z$; and the factor $\beta$ is called the coldness (or thermodynamic beta, or inverse temperature).

Applications

The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression), multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of $K$ distinct linear functions, and the predicted probability for the $j$th class given a sample vector $\mathbf{x}$ and a weighting vector $\mathbf{w}$ is:

$P(y = j \mid \mathbf{x}) = \frac{e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_j}}{\sum_{k=1}^K e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_k}}.$

This can be seen as the composition of $K$ linear functions $\mathbf{x} \mapsto \mathbf{x}^{\mathsf{T}} \mathbf{w}_1, \dotsc, \mathbf{x} \mapsto \mathbf{x}^{\mathsf{T}} \mathbf{w}_K$ and the softmax function (where $\mathbf{x}^{\mathsf{T}} \mathbf{w}$ denotes the inner product of $\mathbf{x}$ and $\mathbf{w}$). The operation is equivalent to applying a linear operator defined by $\mathbf{w}$ to vectors $\mathbf{x}$, thus transforming the original, probably highly-dimensional, input to vectors in a $K$-dimensional space.

Neural networks

The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific index $i$ to a real value, the derivative needs to take the index into account:

$\frac{\partial}{\partial q_k} \sigma(\mathbf{q}, i) = \sigma(\mathbf{q}, i)\,(\delta_{ik} - \sigma(\mathbf{q}, k)).$

This expression is symmetrical in the indexes $i, k$ and thus may also be expressed as

$\frac{\partial}{\partial q_k} \sigma(\mathbf{q}, i) = \sigma(\mathbf{q}, k)\,(\delta_{ki} - \sigma(\mathbf{q}, i)).$

Here, the Kronecker delta is used for simplicity (cf. the derivative of a sigmoid function, being expressed via the function itself). To ensure stable numerical computations subtracting the maximum value from the input vector is common. This approach, while not altering the output or the derivative theoretically, enhances stability by directly controlling the maximum exponent value computed. If the function is scaled with the parameter $\beta$, then these expressions must be multiplied by $\beta$. See multinomial logit for a probability model which uses the softmax activation function.

Reinforcement learning

In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:

$P_t(a) = \frac{\exp(q_t(a)/\tau)}{\sum_{i=1}^n \exp(q_t(i)/\tau)},$

where the action value $q_t(a)$ corresponds to the expected reward of following action a and $\tau$ is called a temperature parameter (in allusion to statistical mechanics). For high temperatures ($\tau \to \infty$), all actions have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature ($\tau \to 0^{+}$), the probability of the action with the highest expected reward tends to 1.

Computational complexity and remedies

In neural network applications, the number $K$ of possible outcomes is often large, e.g. in case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the $z_i$, followed by the application of the softmax function itself) computationally expensive. What's more, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large.
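The closed-form derivative given in the Neural networks section above can be verified against finite differences; a minimal sketch follows (illustrative only: the helper names and the test vector are assumptions, and the max-subtraction stabilisation mentioned above is applied inside softmax).

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

def softmax_jacobian(z):
    # J[i, k] = softmax(z)_i * (delta_ik - softmax(z)_k)
    s = softmax(z)
    return np.diag(s) - np.outer(s, s)

z = np.array([1.0, 2.0, 3.0])
J = softmax_jacobian(z)

# Central-difference check of the closed-form Jacobian, column by column
eps = 1e-6
J_num = np.column_stack([
    (softmax(z + eps * np.eye(3)[k]) - softmax(z - eps * np.eye(3)[k])) / (2 * eps)
    for k in range(3)
])
print(np.allclose(J, J_num, atol=1e-8))  # True: the two Jacobians agree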
The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times. Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax. The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forming latent variables. The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf. Ideally, when the tree is balanced, this would reduce the computational complexity from $O(K)$ to $O(\log_2 K)$. In practice, results depend on choosing a good strategy for clustering the outcomes into classes. A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability. A second kind of remedies is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor. These include methods that restrict the normalization sum to a sample of outcomes (e.g. Importance Sampling, Target Sampling).

Mathematical properties

Geometrically the softmax function maps the vector space $\mathbb{R}^K$ to the interior of the standard $(K-1)$-simplex, cutting the dimension by one (the range is a $(K-1)$-dimensional simplex in $K$-dimensional space), due to the linear constraint that all output sum to 1 meaning it lies on a hyperplane.

Along the main diagonal $(x, x, \dotsc, x),$ softmax is just the uniform distribution on outputs, $(1/n, \dotsc, 1/n)$: equal scores yield equal probabilities.

More generally, softmax is invariant under translation by the same value in each coordinate: adding $\mathbf{c} = (c, \dotsc, c)$ to the inputs $\mathbf{z}$ yields $\sigma(\mathbf{z} + \mathbf{c}) = \sigma(\mathbf{z})$, because it multiplies each exponent by the same factor, $e^{c}$ (because $e^{z_i + c} = e^{z_i} \cdot e^{c}$), so the ratios do not change:

$\sigma(\mathbf{z} + \mathbf{c})_i = \frac{e^{z_i + c}}{\sum_{j=1}^K e^{z_j + c}} = \frac{e^{z_i} \cdot e^{c}}{\sum_{j=1}^K e^{z_j} \cdot e^{c}} = \sigma(\mathbf{z})_i.$

Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation in the input scores (a choice of 0 score). One can normalize input scores by assuming that the sum is zero (subtract the average: $\mathbf{c}$ where $c = \tfrac{1}{n} \sum z_i$), and then the softmax takes the hyperplane of points that sum to zero, $\sum z_i = 0$, to the open simplex of positive values that sum to 1, analogously to how the exponent takes 0 to 1, $e^{0} = 1$, and is positive.

By contrast, softmax is not invariant under scaling. For instance, $\sigma\bigl((0, 1)\bigr) = \bigl(1/(1 + e), e/(1 + e)\bigr)$ but $\sigma\bigl((0, 2)\bigr) = \bigl(1/(1 + e^{2}), e^{2}/(1 + e^{2})\bigr).$

The standard logistic function is the special case for a 1-dimensional axis in 2-dimensional space, say the x-axis in the plane. One variable is fixed at 0 (say $z_2 = 0$), so $e^{0} = 1$, and the other variable can vary, denote it $z_1 = x$, so $\tfrac{e^{x}}{e^{x} + e^{0}} = \tfrac{1}{1 + e^{-x}},$ the standard logistic function, and $\tfrac{e^{0}}{e^{x} + e^{0}} = \tfrac{1}{1 + e^{x}},$ its complement (meaning they add up to 1). The 1-dimensional input could alternatively be expressed as the line $(x/2, -x/2)$, with outputs $\tfrac{1}{1 + e^{-x}}$ and $\tfrac{1}{1 + e^{x}}.$

The softmax function is also the gradient of the LogSumExp function, a smooth maximum:

$\frac{\partial}{\partial z_i} \operatorname{LSE}(\mathbf{z}) = \sigma(\mathbf{z})_i,$

where the LogSumExp function is defined as $\operatorname{LSE}(z_1, \dotsc, z_n) = \log\bigl(e^{z_1} + \dotsb + e^{z_n}\bigr).$

History

The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper Boltzmann (1868), formalized and popularized in the influential textbook Gibbs (1902). The use of the softmax in decision theory is credited to R. Duncan Luce, who used the axiom of independence of irrelevant alternatives in rational choice theory to deduce the softmax in Luce's choice axiom for relative preferences. In machine learning, the term "softmax" is credited to John S.
Bridle in two 1989 conference papers.

Example

With an input of $(1, 2, 3, 4, 1, 2, 3)$, the softmax is approximately $(0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175)$. The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: a change of temperature changes the output. When the temperature is multiplied by 10, the inputs are effectively $(0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3)$ and the softmax is approximately $(0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153)$. This shows that high temperatures de-emphasize the maximum value.

Computation of this example using Python code:

>>> import numpy as np
>>> z = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0])
>>> beta = 1.0
>>> np.exp(beta * z) / np.sum(np.exp(beta * z))
array([0.02364054, 0.06426166, 0.1746813, 0.474833, 0.02364054, 0.06426166, 0.1746813])

Alternatives

The softmax function generates probability predictions densely distributed over its support. Other functions like sparsemax or α-entmax can be used when sparse probability predictions are desired. Also the Gumbel-softmax reparametrization trick can be used when sampling from a discrete distribution needs to be mimicked in a differentiable manner.

See also

Softplus
Multinomial logistic regression
Dirichlet distribution – an alternative way to sample categorical distributions
Partition function
Exponential tilting – a generalization of Softmax to more general probability distributions

Notes

References

Computational neuroscience
Logistic regression
Artificial neural networks
Functions and mappings
Articles with example Python (programming language) code
Exponentials
Articles with example Julia code
Articles with example R code
Softmax function
Mathematics
3,220
8,830,624
https://en.wikipedia.org/wiki/National%20minimum%20dataset
In health informatics, a national minimum dataset is a database of health encounters held by a central repository. "Minimum" implies that the data fields will be only those required to aggregate information for the purposes of administering the health system in the particular country and for reporting information required as a member country of WHO. See also Minimum Data Set (MDS), US National Minimum Data Set for Social Care (NMDS-SC), England Nursing Minimum Data Set (NMDS), US References External links New Zealand NMDS National Minimum Dataset (Hospital Events) data dictionary Health informatics
National minimum dataset
Biology
120
18,125,091
https://en.wikipedia.org/wiki/Toda%27s%20theorem
Toda's theorem is a result in computational complexity theory that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" and was given the 1998 Gödel Prize.

Statement

The theorem states that the entire polynomial hierarchy PH is contained in $P^{PP}$; this implies a closely related statement, that PH is contained in $P^{\#P}$.

Definitions

$\#P$ is the problem of exactly counting the number of solutions to a polynomially-verifiable question (that is, to a question in NP), while loosely speaking, PP is the problem of giving an answer that is correct more than half the time. The class $P^{\#P}$ consists of all the problems that can be solved in polynomial time if you have access to instantaneous answers to any counting problem in $\#P$ (polynomial time relative to a $\#P$ oracle). Thus Toda's theorem implies that for any problem in the polynomial hierarchy there is a deterministic polynomial-time Turing reduction to a counting problem.

An analogous result in the complexity theory over the reals (in the sense of Blum–Shub–Smale real Turing machines) was proved by Saugata Basu and Thierry Zell in 2009 and a complex analogue of Toda's theorem was proved by Saugata Basu in 2011.

Proof

The proof is broken into two parts. First, it is established that $NP \subseteq BPP^{\oplus P}$. The proof uses a variation of the Valiant–Vazirani theorem. Because $BPP^{\oplus P}$ contains $NP$ and is closed under complement, it follows by induction that $PH \subseteq BPP^{\oplus P}$. Second, it is established that $BPP^{\oplus P} \subseteq P^{\#P}$. Together, the two parts imply

$PH \subseteq BPP^{\oplus P} \subseteq P^{\#P}.$

References

Structural complexity theory
Theorems in computational complexity theory
Toda's theorem
Mathematics
335
26,008,898
https://en.wikipedia.org/wiki/Ra.One
Ra.One is a 2011 Indian Hindi-language superhero film directed by Anubhav Sinha and produced by Gauri Khan under Red Chillies Entertainment. The film stars Shah Rukh Khan in a dual role and Arjun Rampal as the titular antagonist, with Kareena Kapoor, Armaan Verma, Shahana Goswami, Tom Wu, Dalip Tahil and Satish Shah in supporting roles. The name is inspired by the Hindu mythological character Ravana. In the film, Ra.One, a video game antagonist, escapes from the virtual world and arrives in the real world to hunt down Lucifer, the gaming ID of Prateek, the son of Ra.One's creator Shekhar Subramaniam, as Prateek had defeated Ra.One in the game. Shekhar is killed by Ra.One, but Prateek resurrects G.One, the video game's protagonist and Shekhar's lookalike, to defeat Ra.One and protect him and his family.

Principal photography began in March 2010, took place in India and the United Kingdom, and was overseen by an international crew. The post-production involved 3-D conversion and the application of visual effects, the latter being recognised as a technological breakthrough among Indian films. With a budget of , inclusive of publicity costs, Ra.One was the most expensive Indian film at the time of release, surpassing the budget of Enthiran (2010). The producers spent , including a marketing budget, which involved a nine-month publicity campaign, brand tie-ups, merchandise, video games and viral marketing. The film faced controversies involving plagiarism, content leaks and copyright challenges. Ra.One was theatrically released on 24 October 2011, the beginning of the five-day Diwali weekend, in 2D, 3D and dubbed versions in the Tamil and Telugu languages, with three international premieres being held between 24 October 2011 and 26 October 2011. The film witnessed the largest international theatrical release for an Indian film as of 2011 and was preceded by high audience and commercial expectations. Upon release, Ra.One earned praise for the visual effects, action sequences, direction, music and the performances of Khan and Rampal, but criticism for the script. The film became the third highest-grossing Indian Hindi-language film of 2011 domestically, the second highest-grossing Hindi film of 2011 worldwide, and broke a number of opening box office records. The film also earned more than worldwide against a budget of , and was a commercial success. It subsequently won a number of awards for its technical aspects, notably one National Film Award, one Filmfare Award and four International Indian Film Academy Awards. While the film's reception was initially mixed around the time of release, it has improved over the years.

Plot

In London, Jenny Nair, an employee of the UK-based company Barron Industries, introduces a new technology that allows things from the digital world to enter the real world using wireless transmissions from multiple devices. Shekhar Subramaniam, who also works for the company, is given a final chance to devise a unique video game. To impress his skeptical son Prateek, and upon the request of his wife Sonia, Shekhar uses his son's idea that the antagonist should be more powerful than the protagonist. Shekhar's colleague Akashi does the motion capture of the game's characters, Jenny does the programming, and Shekhar gives his face to the game's protagonist, G.One, while the antagonist, Ra.One, is faceless and has substantially greater powers than those granted to G.One, including shapeshifting and mind reading. Ra.One is endowed with self-learning AI.
The two characters are given a H.A.R.T., which gives them their powers but without which they also cannot be killed. The game has three levels, and either of the players can only be killed in the third level using a special gun that holds a single bullet, which destroys the opponent's H.A.R.T. While designing the game, Akashi notices some malfunctions but ignores them. When the game is finally launched, it receives a standing ovation, and Prateek is so impressed that he insists on playing it immediately. Prateek logs in under the alias Lucifer and proceeds to the final level but is interrupted by Akashi. The self-aware Ra.One, being unable to end his turn with Lucifer, becomes determined that Lucifer shall die. When the mainframe fails to shut down, Akashi calls Shekhar, who notices a problem with the game. Ra.One uses the new technology to enter the real world, breaks free, and goes to find Lucifer. Ra.One first assumes Akashi's form and asks the real Akashi about Lucifer. When Akashi fails to reply, Ra.One abruptly murders him. After finding the dead Akashi, Shekhar rushes home, but is confronted on the way by Ra.One. In an attempt to save Prateek, Shekhar claims that he is Lucifer, but Ra.One scans Shekhar's ID and kills him after discovering that he is lying. Prateek notices the strange circumstances of his father's death and realises that Ra.One has come to life. Prateek and Jenny attempt to bring G.One to life. Meanwhile, Sonia tells Prateek that the family will return to India. Having taken the form of Akashi, Ra.One chases them, but G.One enters the real world through Jenny's computer and, after a fight with Ra.One, causes a gas explosion, which breaks Ra.One into cubes and temporarily disables him. G.One takes Ra.One's H.A.R.T., without which Ra.One is weakened but also cannot die. Sonia finds she cannot leave G.One behind and takes him along with them to India using Shekhar's passport. G.One promises Sonia that he will protect Prateek from all harm. Ra.One returns to life, takes the form of a billboard model, and tracks G.One and Prateek by infiltrating Barron's office. During Prateek's birthday party, Ra.One hypnotizes Sonia, assumes her form, and kidnaps Prateek. Ra.One then instructs G.One to give him his H.A.R.T. back and sends the real Sonia away on an out-of-control Mumbai Suburban Railway train. G.One saves Sonia in the nick of time, although the train crashes and destroys Chhatrapati Shivaji Terminus. He shares a brief moment with Sonia and then returns to save Prateek. The game resumes with Prateek controlling G.One's moves. Following a lengthy fight, they successfully beat the first and second levels and reach the third level. With little power left, G.One and Prateek trick Ra.One into shooting G.One without his H.A.R.T. attached, which leaves Ra.One helpless. Furious, Ra.One creates ten copies of himself. Prateek is unable to differentiate the real Ra.One, and asks G.One to quote one of Shekhar's sayings, which is actually a hint in disguise: "If you join the forces of evil, its shadows shall always follow you." The pair then realise that only one of the ten Ra.Ones has a shadow. G.One shoots and destroys him. After absorbing Ra.One's remains, G.One transports himself back into the digital world. Several months later, Prateek and Sonia return to London, where Prateek finally manages to restore G.One to the real world.
Cast

Shah Rukh Khan in a dual role as:
Shekhar Subramaniam, a video game programmer
G.One (Good One), a superhero player character who came out of the game
Kareena Kapoor as Sonia S. Subramaniam: Shekhar's wife and Prateek's mother
Arjun Rampal as the final form of Ra.One (Random Access One, a name that also sounds similar to Ravana, the antagonist demon king of Lanka in the Ramayana), a supervillain player character who comes out of the game into real life
Armaan Verma as Prateek Subramaniam / Lucifer: Shekhar and Sonia's son
Shahana Goswami as Jenny Nair
Tom Wu as Akashi
Dalip Tahil as Barron
Satish Shah as Iyer
Suresh Menon as the taxi driver
Delnaaz Irani as the school teacher aboard the hijacked train

Additionally, Priyanka Chopra and Sanjay Dutt appear in cameos as the Desi Girl and Khal Nayak, respectively. Amitabh Bachchan provides the voiceover for the introduction sequence of the game 'Ra.One' in the film. The character Chitti from Enthiran, originally played by Rajinikanth, appears in the film; here the character is computer-generated and is not played by the actor himself.

Production

Development

According to director Anubhav Sinha, the idea of Ra.One originated in 2005 when he saw an advertisement on television which showed children remotely controlling a human. He was attracted to the concept and wrote a script based on it. Sinha then approached Shah Rukh Khan, who liked the story and decided to produce the film under his production company Red Chillies Entertainment. Sinha was unsure about retaining Khan's support after his previous film Cash (2007) became a commercial failure, but Khan reportedly "remained unchanged". Khan felt that the film possessed significant commercial potential in addition to being a fulfilment of his "childhood dream" to be a superhero and to fly. He stated that he wanted to "make a film that gives me the right to deserve the iconic status that I've got for 20 years", and also said that he wanted to make a film dedicated to father-son relationships, which were, in his opinion, "neglected" in Bollywood. Khan's idea was to make a simple family drama which expanded into an action film. He declined to make the film in English to increase its appeal for Western audiences, feeling that "cracking Hollywood on their terms" was unnecessary. Both Khan and Sinha credited their children for providing encouragement and regularly "approving" the film's execution. Red Chillies Entertainment continued to work on other projects before finalising the production aspects of Ra.One. After providing the visual effects for My Name Is Khan (2010), the studio focused solely on Ra.One and did not take up any other films. Khan initially approached a number of directors to helm the film, including Aditya Chopra and Karan Johar, but they declined; eventually, Sinha was confirmed as the film's director. To prepare the film's premises and characterisation, Sinha spent several months viewing video clips, digital art portals and comic books. Sinha and Khan also watched around 200 superhero films from all over the world. The storyboards were designed by Atul Chouthmal, who was contracted after he met Khan at Yash Raj Studios. While Chouthmal began work on the storyboards, the producers hired a storyboard artist from Hollywood. Chouthmal revealed that Khan and the other artist differed in their visions of the film, and so he was brought back.
Before filming, Khan reportedly took tips from actor Kamal Haasan regarding the production of large-scale films, having been impressed by Haasan's Dasavathaaram (2008). The title of the film received significant media attention due to it being the name of the antagonist rather than the protagonist. The move was considered innovative and noted as a sign of the "rising importance of the villain in Bollywood." According to Sinha, the title had not been planned as such, and was ultimately chosen because Ra.One "sounded cooler" than G.One. Khan was advised to name the film after his own character; he declined to do so, citing the inter-dependence between good and evil. He also cited Alfred Hitchcock as an inspiration, and pointed out that the antagonists in films like Sholay, Mr. India and Sadak were better remembered than the protagonists.

Casting

Shah Rukh Khan was the first actor to be cast in the film. Kareena Kapoor, Priyanka Chopra and Asin had initially been considered for the lead female role; Kareena Kapoor was ultimately chosen because she insisted on playing the part. Arjun Rampal accepted the role of Ra.One after Anubhav Sinha expressed a strong desire to cast him in the film. Tom Wu was contracted to the film in July 2010 and Shahana Goswami was cast one month later. Amitabh Bachchan agreed to be a part of the film after being requested by Khan and Sinha. Several cast members prepared extensively for their roles: Rampal and Kapoor followed special diets to lose weight, Khan and Verma performed their own stunts, and Kapoor subsequently did so as well despite initial reluctance. The cast encountered problems during production: Khan faced difficulties with his superhero suit and prosthetic makeup, and injured his left knee. The decision to cast Rampal was met with scepticism due to his purportedly "questionable acting abilities", an assessment Sinha criticised. In addition, Rampal encountered back problems (which were treated by the time production began), prompting speculation of a possible replacement by Vivek Oberoi. Jackie Chan had initially been approached for the role of Akashi, but he declined the offer. Rajinikanth suffered from health problems which caused a delay in the filming of his cameo appearance. Sanjay Dutt faced a scheduling conflict with Agneepath (2012), which was later resolved.

Filming

The crew of Ra.One featured both Indian and overseas personnel. Nicola Pecorini served as the director of photography, with V. Manikandan providing assistance. Andy Gill and Spiro Razatos were hired as the stunt supervisors, and Nino Pansini was hired as the stunt cinematographer. Sabu Cyril and Marcus Wookey were responsible for the production design. The film's producer was Bobby Chawla, but Gauri Khan later stepped in after Chawla suffered a brain haemorrhage. Filming took place at a number of studios, notably Filmistan Studios, Film City and Yash Raj Studios in India and the Black Hangar Studios in the UK. Principal photography was initially set to begin in Miami, but the idea was abandoned due to budget constraints. The first phase of filming began in Goa on 21 March 2010 and continued until May. The second and third phases took place in London with the entire cast, beginning in July 2010 and ending in August. The next phase was split into two schedules; the first schedule commenced at Filmistan Studios in the first week of September 2010, while the second schedule began in December 2010 and took place over a seven-day period. The remaining portions were filmed in July 2011 at Film City.
A cameo appearance and a music video were filmed in the weeks leading up to the release, the former at the Whistling Woods Studios in Mumbai. Ra.One featured three major action sequences, which were filmed on sets and in real locations across Mumbai and London. The cinematography borrowed ideas from video games, such as rapid transitions between first-person and third-person perspectives. Procedures such as bullet time were also incorporated into the film. The production design was closely associated with the lighting and cinematography to facilitate smooth filming. However, filming faced a number of difficulties including increasing costs, delays and safety constraints. In addition, differences between Khan and Sinha caused tensions on the sets.

Post-production

As with the filming crew, the post-production crew of the film included both Indian and overseas personnel. Prime Focus carried out the film's 3-D conversion, with London-based colorist Richard Fearon performing the colour grading. Red Chillies VFX partnered with a number of visual effects studios around the world and undertook the incorporation of the visual effects under the supervision of Jeffrey Kleiser. Nvidia provided the information technology–based software utilised for the effects, while Edwark Quirk supervised the computer-generated imagery used in the film. Resul Pookutty was responsible for the film's sound design. The idea of converting the film to 3-D was put forth during filming, and was implemented in July 2011 due to a revived interest in 3-D films. The process required 2,600 artists to convert 4,400 shots of the film. The sound design involved bridging the real and the virtual world, and the required sound enhancements were achieved by using the Dolby Surround 7.1 system. Incorporating the visual effects began in April 2010, and was preceded by extensive research. 1,200 artists worked for two years to complete the visual effects work. A number of complex procedures were executed, including cubical transformations and the design of the faceless form of Ra.One. Despite precautions, the post-production faced significant delays owing to the digital intermediation, increased workload due to the 3-D and dubbed versions of the film, and delays in the completion of the visual effects. The post-production also faced budget constraints and, according to the cinematographer, witnessed an overuse of CGI. The delays left only two days for printing the film and sending it to theatres, generating significant anxiety over a possible delay in the release. Khan subsequently kept strict tabs on the progress of work and postponed his knee surgery to complete the film on time.

Costumes

The bodysuits worn by Khan and Rampal were designed by Robert Kurtzman and Tim Flattery, and made by a team of specialists based in Los Angeles. Sinha spent around three months conceptualising the costumes, watching various superhero films in order to design a costume that had not been created before. He then wrote a 23-page document with his sketches and details of what he wanted and gave it to the designers to work upon. To create the suit, Khan was required to enter a small chamber where a warm latex-like liquid was released up to his neck and allowed to solidify, forming the mould which was then peeled off his body. The suit was joined by a concealed zipper and subsequently modified. Computer-generated embellishments such as light beams and electricity were added to the suits after Khan expressed dissatisfaction with the initial rushes of the film.
A total of 21 costumes were made for the film, with each suit reportedly costing . Khan's suit was made of reinforced latex, coloured steel-blue and fitted with micro-computer circuitry. Rampal's suit was made of three-inch thick solid rubber, and was red in colour. Both actors were required to wear additional suits inside their body suits to prevent skin contact. Wearing the suits created a number of difficulties for the actors. It took 20 minutes to put on the suits and 40 minutes to remove them. In addition, the non-porous nature of the suits created intense heat inside, causing excessive perspiration despite the presence of special air conditioning ducts. Khan later felt that the suits' conception had been a mistake since filming occurred during the day; digital adjustments to the suits brought "all the efforts to naught". Manish Malhotra designed the look and the costume of Kareena Kapoor for the song "Chammak Challo", which received widespread media coverage. Kapoor wore a red sari draped in the style of a dhoti. Since the release of the song, the costume was termed a "fashion rage", becoming popular in India and some overseas countries. Fashion experts applauded the costume and Kapoor's ability to carry it off "stunningly", though certain experts dismissed the naming of the sari colour. Music The soundtrack of Ra.One was composed by Vishal–Shekhar, with the lyrics being written by Atahar Panchi, Vishal Dadlani and Kumaar. A. R. Rahman provided the background score for a single sequence. Sinha announced that R&B singer Akon and the Prague Philharmonic Orchestra would be a part of the soundtrack; the former lent his vocals for "Chammak Challo" and "Criminal", while the latter performed in "Bhare Naina". The composers obtained the official license to use Ben E. King's "Stand By Me", on which they based the song "Dildaara". The soundtrack contains fifteen tracks, including seven original songs, four remixes, three instrumentals and an international version of "Chammak Challo". The music rights were bought by T-Series for . The Hindi version of the soundtrack was released on 21 September 2011; the Tamil version was released on 5 October 2011 and Telugu version was released on 9 October 2011 respectively, featuring six tracks each. Marketing Promotion The producers of Ra.One spent out of a marketing budget, of this was utilised for internet promotions alone. The film's first theatrical poster was released in December 2010, and was followed by the release of two teaser trailers during the 2011 ICC Cricket World Cup. The first theatrical trailer premiered three months later. Khan and Sinha undertook a multi-city tour during which they unveiled a 3,600 feet-long piece of fan mail to collect audience messages. The official website of Ra.One was launched on 31 May 2011, and an official YouTube channel for the film was subsequently unveiled. On 20 October 2011, Khan held a live chat with fans on Google Plus, the first time an Indian film personality had done so. Rampal's look in the film, which had been kept secret, was revealed in late October 2011. The film's marketing utilised merchandise and games to facilitate the creation of a franchise. Khan marketed merchandise related to the film, which included toys, PC tablets and apparel. On 14 October 2011, a gaming tournament featuring games like Call of Duty was conducted in Mumbai and telecast live on YouTube. 
Video game

Red Chillies Entertainment partnered with Sony Computer Entertainment Europe to create "Ra.One – The Game", a game for PlayStation 2 and PlayStation 3 which was released on 5 October 2011. The producers further collaborated with UTV Indiagames to design a social game titled Ra.One Genesis, with an independent plot based on G.One, in addition to designing digital comics based on the film's characters.

Release

Statistics

Ra.One was released across more than 4,000 screens worldwide. In India, the Hindi version was released across 3,100 screens in 2,100 theatres, breaking the record for the widest Hindi film release previously held by Salman Khan's Bodyguard (2011). The Tamil and Telugu versions were released on 275 prints and 125 prints respectively. A week before the release, multiplex owners throughout India decided to allot 95% of the total available screen space to the film. Overseas, Ra.One was released in 904 prints. This included 600 prints in Germany, 344 prints in the US, 200–300 prints in South Korea, 202 prints in the UK, 79 prints in the Middle East, 75 prints in Russia, 51 prints in Australia, 49 prints in Canada and 25 prints in New Zealand and Taiwan. In early October 2011, a partnership deal was being finalised by the distributors to allow the film to be released in China across 1,000 prints. In addition, the film was released in Pakistan and non-traditional territories like Brazil, Spain, Italy, Greece and Hong Kong. The 3D version was released on 550 screens across the world. Ra.One was noted for the extensive use of digital prints, reportedly making up 50–60% of the total release; in India, the film was exhibited in over 1,300 digital theatres, breaking the record previously held by Bodyguard. The wide digital release was implemented to lower distribution costs, make the film accessible to a wider audience and reduce piracy. Despite the measures taken, pirated versions of Ra.One were available on the Internet within hours of the film's release.

Screenings

In May 2011, the first rushes of Ra.One were shown to the cast of Khan's other home production Always Kabhi Kabhi (2011). Subsequently, the film was screened for test audiences to study and gauge the film's appeal across different age groups. A few days prior to the theatrical release, Khan arranged a special screening of the film's final cut at Yash Raj Studios, where he invited close friends, his family and the film's crew. Between 24 and 26 October 2011, Ra.One had international premieres in Dubai, London and Toronto, all of which were chosen due to their international significance and large South Asian populations. The premiere in Dubai was held on 24 October 2011 at the Grand Cinemas, Wafi. A high-profile dinner and charity auction followed, where Khan raised AED30,000 (approximately US$8,200) to build a workshop for children with special needs. The premiere included three simultaneous screenings of the film, for which tickets were placed on sale for the public. The premiere in London took place at the O2 Cineworld the following day, and the premiere in Toronto took place at the TIFF Bell Lightbox on 26 October 2011.

Censorship

Ra.One was submitted to the Central Board of Film Certification on 14 October 2011 to receive its viewership rating. The Board raised strong objections to the film's action scenes, fearing that they would influence young children to emulate the stunts.
The police and the Indian Railways security force had made similar objections to the train-based stunts in the film, claiming that youngsters would "blindly imitate them" and hence put their lives at risk. The film was finally passed with a 'U' certificate without cuts, but under the condition that prominent disclaimers were shown, stating that the stunts were computer-generated and should not be imitated. The British Board of Film Classification rated the film 12A for "moderate fantasy violence". In March 2012, a Mid-Day report alleged that Ra.One had received a favourable rating, pointing out that the producers had violated the rules by meeting the Board officials during the screening. Home media The television broadcasting rights for Ra.One were bought by Star India for a then-record sum of , surpassing 3 Idiots (2009). The Indian television premiere of Ra.One took place on 21 January 2012 on STAR Gold, garnering a 28% market sharefor the channel and a TVR of 6.7. Star India subsequently syndicated the television screening rights to Disney XD, where it premiered on 2 June 2012. In May 2012, International Media Distribution announced that Ra.One would be televised on Comcast and Cox, as a part of the celebrations of the Asian Pacific American Heritage Month. Discovery Channel tied up with Red Chillies Entertainment to produce a one-hour program titled "Revealed: The Making of Ra.One", which aired on the channel on 30 March 2012. The program discussed the making of the film in detail, including the visual effects and the challenges faced while filming. Eros International released the DVD of Ra.One on 13 December 2011 across all regions in one-disc and two-disc packs complying with the NTSC format. The DVD of the film contained alternate endings. Initially, Khan had wanted to add alternate endings to the theatrical release itself, but later deemed it risky. The DVD version was made interactive as well. VCD and Blu-ray versions of the film were also released. The German Blu-ray edition was released on 1 June 2012 by Rapid Eye Movies. On 24 July 2013, French distributor Condor released the film as Voltage on 3D Blu-ray. It was released in a SteelBook edition containing the 3D/2D Blu-ray discs and the DVD. The same year, Japanese distributor Maxam released the Blu-ray on 27 September 2013. Controversies Plagiarism allegations The film faced allegations of plagiarism with similarities to Terminator 2: Judgment Day (1991), the Batman series, Iron Man (2008), The Sorcerer's Apprentice (2010) and Tron: Legacy (2010). Khan denied the allegations, saying, "I got inspired from a lot of superhero movies but the movie is original. In fact, Ra.One will be the first superhero-based movie in the world in which the superhero lives in a family." A few days before the release, screenwriter Yash Patnaik claimed that Ra.One resembled a concept that he had developed several years before. Patnaik appealed to the Bombay High Court to delay the film's release, until he was given due credit or 10% of the film's overall profit. The court, observing prima facie evidence that there had been copyright violations, asked the filmmakers to deposit with the court on 21 October 2011 before releasing the film. Patnaik challenged the court's decision and demanded that the producers give him credit and not cash. Sinha later claimed that he alone had developed the film's story. Hacking Ra.One also faced cybertheft and hacking issues. 
On 3 June 2011, three days after its launch, the official website of the film was hacked by suspected Pakistani cyber criminals who stated that the act was in revenge for a similar attack on a Karachi press club website. The hackers defaced the homepage and left a note threatening the Indian Press Club. Despite precautions, the song "Chammak Challo" was leaked several months before the official release of the soundtrack. Khan clarified that the leaked song was a "rough version" of the actual song, and that the person responsible for the leak was being looked for. He subsequently refuted claims that the leak had been engineered as a publicity stunt. Reception Box office In India, Ra.One debuted at the beginning of the five-day Diwali weekend, and subsequently broke the Diwali opening day record. The film then set the records for the biggest single-day net revenue and the biggest three-day opening weekend earned by a Hindi film, breaking the previous records held by Bodyguard. Subsequently, the film began to suffer significant drops in its collections, with its five-day extended weekend and nine-day extended week coming second to the records of Bodyguard. The film faced an 84 percent drop in collections in its second week and fell a further 90 percent in its third week, the latter primarily due to the release of Rockstar. The dubbed versions showed similar trends. The Tamil and Telugu versions together earned around nett. In overseas markets, Ra.One earned the highest three-day and five-day opening weekends among the Hindi film releases of 2011; by its second weekend, the film had become the highest-grossing Hindi film of 2011 in overseas markets, but the collections suffered drops throughout. In general, families and children formed the major portion of the film's audience, and the 3D version is regarded as a success. Pre-sale record The budget of Ra.One was the subject of significant speculation prior to its release. A number of estimates placed the budget between and . It was universally accepted that the film was the most expensive Bollywood film of all time, with certain sources stating that the film was the most expensive Indian film ever. The original budget was revealed to be after promotional expenses. Khan stated that he had "worked very hard" to finance the film without borrowing money, and reportedly hosted a television show just to finance the film. Ra.One earned from pre-release revenue sources, setting a new record for Bollywood films. The extensive marketing campaign greatly increased audience expectations of the film. Ra.One set records for the level of pre-release buzz for an Indian film, and also topped a number of polls gauging the most awaited Bollywood films of the year. Anticipation for the film was equally high among the trade analysts, with some commenting that the film would pass the mark in one week and the mark in over three weeks. Advance bookings commenced on 20 October 2011 on a limited scale, and expanded later. While initial ticket sales were low, they picked up considerably near the release date. A few days prior to the release, the advance booking was described as "phenomenal", with an overall advance booking rate of 20–25% across the country. A number of advantages of the film's release were pointed out, such as the festive season and higher 3D ticket prices, though there had been doubt regarding the timely release of the 3D version. Critical response Upon release, Ra.One received mixed reviews from critics in India and generally positive reviews overseas. 
Review aggregator Rotten Tomatoes reported a 61% approval rating, based on 23 reviews with an average rating of 5.41/10. On Metacritic, which assigns a weighted mean from film reviews, Ra.One holds a rating of 60% based on eight reviews, signifying "mixed or average reviews". Positive reviews described the film as an ambitious initiative and a technological success, with some critics thinking that Ra.One had put Indian films on par with Hollywood. The visual effects received near-universal praise, though dissenting opinions stated that they were "all over the place". The action sequences were also widely praised. Other aspects of the film received more polarising opinions, and one very positive review was criticised for "over-rating" the film. Mixed views were opined regarding the plot's gaming concept, with some critics deeming it "far-fetched" and others lauding the "gaming-style aesthetics". Similarly, some critics called the emotional scenes "fulfilling" while others felt them to be "lacking in connect with the audience". Raja Sen of Rediff.com gave a 1.5 out of 5 star rating explaining that "Ra.One is a subpar superhero film with a mediocre soundtrack and occasionally terrific effects. For those of you looking to compare, it's well below Krrish on the super-pecking order, and far, far below Robot." The story was negatively received by several critics, with a number of them deeming it to be disappointing and lacking in originality; one critic praised the original idea but criticised its "Bollywoodization". The direction was criticised in a number of reviews, though a few critics praised Sinha's pacing of the film and the execution of the action sequences. Some critics pointed to the presence of scenes which were not child-friendly, despite Ra.One being promoted as a children's film. Particular reviews criticised the lack of character development and the film's "incoherently hackneyed morality". A few critics panned the film as a whole, describing it as a "mess"; one review commented, "It's convenient to say that if you have no expectations from the film, you wouldn't be disappointed." Accolades After its release, Ra.One received numerous nominations and awards in India and abroad, a majority of them for its technical aspects. The film notably won the National Film Award and the Filmfare Award for Best Special Effects, and four International Indian Film Academy Awards. The film also received several business awards for its marketing and distribution. Potential sequel Reports of a planned sequel of Ra.One began surfacing prior to the film's release, though the extent of real progress on the sequel is unknown. Both Khan and Sinha admitted to formulating plans for a sequel, though the former noted that it would be "presumptuous" to start the sequel before the first film's release. Khan later refuted the speculations, saying that a sequel was unlikely due to his other commitments. After Ra.One won a National Award, an "overjoyed" Khan said that the film's world could be further explored. He stated that the sequel, if made, would be titled G.One and not Ra.Two, and that he would make it "faster, bigger and better" than Ra.One. Khan was reported to be looking for a script, without a fixed release date. In April 2012, Mushtaq Sheikh said that the pre-production of the sequel had begun. A number of reports stated that Kareena Kapoor would not be a part of the sequel. Despite Khan's enthusiasm for the idea of a sequel, the film industry expressed mixed opinions regarding it. 
Filmmaker Rajkumar Gupta commented, "It would be challenging to take forward a story that has not worked earlier." Producer Ramesh Taurani responded negatively to the idea, saying, "It is important for the film to be appreciated so that a sequel can be made." Trade analyst Atul Mohan called the sequel "a bad idea". Conversely, others were supportive of the sequel. Producer Goldie Behl brushed aside arguments about the success of the first film, saying, "If the people think that they can earn some more, then it doesn't matter how big or small the hit was." Director Kunal Kohli also reacted positively, saying, "Certain ideas naturally lend themselves to sequels. So why not use that investment of your time and effort to make a sequel that will take the brand further?" References Notes External links 2011 3D films 2011 films 2011 science fiction action films 2010s Indian films 2010s superhero films Films about artificial intelligence Films about shapeshifting Films about telepresence Films about telekinesis Films about telepathy Films about video games Films directed by Anubhav Sinha Films involved in plagiarism controversies Films scored by Vishal–Shekhar Films shot in India Films shot in London Films that won the Best Special Effects National Film Award India-exclusive video games Indian 3D films Indian crossover films Indian science fiction action films Red Chillies Entertainment films Indian superhero films Films set in the 2010s Films set in 2010 Films about father–son relationships Films set in Mumbai Films about mother–son relationships Films about robots Works subject to a lawsuit Indian intellectual property law Films adapted for other media Films adapted into comics Enthiran 2011 controversies Films set in London British Indian films Films using motion capture Indian chase films Indian films about revenge Films about cyborgs Films about single parent families Films about technological impact Films about computing Films about computer and internet entrepreneurs Films about virtual reality Films set in computers Films about computer hacking Fiction about nanotechnology Techno-thriller films Indian science fiction adventure films Films about technology Indian children's adventure films
Ra.One
Materials_science,Technology
7,799
23,484,799
https://en.wikipedia.org/wiki/Fluorescamine
Fluorescamine is a spiro compound that is not fluorescent itself, but reacts with primary amines to form highly fluorescent products, i.e. it is fluorogenic. It has therefore been used as a reagent for the detection of amines and peptides. Amounts of 1–100 μg of protein, and as little as 10 pg, can be detected. Once bound to protein, the excitation wavelength is 381 nm (near ultraviolet) and the emission wavelength is 470 nm (blue). The method suffers from high blank readings because the reagent hydrolyses rapidly and must therefore be used in large excess. Alternative methods are based on ortho-phthalaldehyde (OPA), Ellman's reagent (DTNB), or epicocconone.
Reaction
See also
FQ
References
Lactones
Spiro compounds
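As an illustration of how a fluorescamine assay is typically quantified, the following Python sketch fits a straight calibration line through fluorescence readings of protein standards and inverts it for an unknown sample. All numbers, and the assumption of a linear response over the working range, are hypothetical placeholders rather than values taken from this article.

```python
import numpy as np

# Hypothetical calibration standards: protein amount (micrograms) vs.
# background-corrected fluorescence (arbitrary units, 381 nm ex / 470 nm em).
protein_ug   = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
fluorescence = np.array([2.0, 110.0, 215.0, 540.0, 1080.0, 2150.0])

# Treat the response as linear over the working range and fit a line.
slope, intercept = np.polyfit(protein_ug, fluorescence, 1)

def protein_from_signal(signal):
    """Invert the calibration line to estimate protein amount (micrograms)."""
    return (signal - intercept) / slope

unknown_signal = 730.0   # hypothetical reading for an unknown sample
print(f"Estimated protein: {protein_from_signal(unknown_signal):.1f} ug")
```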
Fluorescamine
Chemistry
178
1,515,472
https://en.wikipedia.org/wiki/Stokes%20parameters
The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1851, as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. They can be determined from directly observable phenomena. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahamanyan Chandrasekhar in 1947, who named it as the Stokes parameters. Definitions The relationship of the Stokes parameters S0, S1, S2, S3 to intensity and polarization ellipse parameters is shown in the equations below and the figure on the right. Here , and are the spherical coordinates of the three-dimensional vector of cartesian coordinates . is the total intensity of the beam, and is the degree of polarization, constrained by . The factor of two before represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively. Given the Stokes parameters, one can solve for the spherical coordinates with the following equations: Stokes vectors The Stokes parameters are often combined into a vector, known as the Stokes vector: The Stokes vector spans the space of unpolarized, partially polarized, and fully polarized light. For comparison, the Jones vector only spans the space of fully polarized light, but is more useful for problems involving coherent light. The four Stokes parameters are not a preferred coordinate system of the space, but rather were chosen because they can be easily measured or calculated. Note that there is an ambiguous sign for the component depending on the physical convention used. In practice, there are two separate conventions used, either defining the Stokes parameters when looking down the beam towards the source (opposite the direction of light propagation) or looking down the beam away from the source (coincident with the direction of light propagation). These two conventions result in different signs for , and a convention must be chosen and adhered to. Examples Below are shown some Stokes vectors for common states of polarization of light. {| |- | || Linearly polarized (horizontal) |- | || Linearly polarized (vertical) |- | || Linearly polarized (+45°) |- | || Linearly polarized (−45°) |- | || Right-hand circularly polarized |- | || Left-hand circularly polarized |- | || Unpolarized |} Alternative explanation A monochromatic plane wave is specified by its propagation vector, , and the complex amplitudes of the electric field, and , in a basis . The pair is called a Jones vector. Alternatively, one may specify the propagation vector, the phase, , and the polarization state, , where is the curve traced out by the electric field as a function of time in a fixed plane. 
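To make the connection between Jones vectors and Stokes parameters concrete, here is a minimal Python sketch that converts the complex components (Ex, Ey) of a fully polarized wave into (S0, S1, S2, S3). It assumes the common quadratic forms S0 = |Ex|² + |Ey|², S1 = |Ex|² − |Ey|², S2 = 2 Re(Ex* Ey), S3 = 2 Im(Ex* Ey); as noted above, the sign of S3 depends on the handedness and phase convention adopted, so treat that choice as an assumption of the sketch.

```python
import numpy as np

def stokes_from_jones(Ex, Ey):
    """Stokes parameters from the complex Jones components of a fully
    polarized wave.  Sign conventions (notably for S3) differ between
    texts; this sketch uses S2 = 2 Re(Ex* Ey) and S3 = 2 Im(Ex* Ey)."""
    S0 = abs(Ex) ** 2 + abs(Ey) ** 2
    S1 = abs(Ex) ** 2 - abs(Ey) ** 2
    S2 = 2.0 * (np.conj(Ex) * Ey).real
    S3 = 2.0 * (np.conj(Ex) * Ey).imag
    return np.array([S0, S1, S2, S3])

# Fully polarized states satisfy S0**2 = S1**2 + S2**2 + S3**2 (purity 1):
examples = {
    "horizontal linear": (1.0, 0.0),
    "+45 deg linear":    (1 / np.sqrt(2), 1 / np.sqrt(2)),
    "circular":          (1 / np.sqrt(2), 1j / np.sqrt(2)),  # handedness depends on convention
}
for name, (Ex, Ey) in examples.items():
    S = stokes_from_jones(Ex, Ey)
    purity = np.sqrt(S[1] ** 2 + S[2] ** 2 + S[3] ** 2) / S[0]
    print(f"{name:18s} S = {np.round(S, 3)}  purity = {purity:.3f}")
```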
The most familiar polarization states are linear and circular, which are degenerate cases of the most general state, an ellipse. One way to describe polarization is by giving the semi-major and semi-minor axes of the polarization ellipse, its orientation, and the direction of rotation (See the above figure). The Stokes parameters , , , and , provide an alternative description of the polarization state which is experimentally convenient because each parameter corresponds to a sum or difference of measurable intensities. The next figure shows examples of the Stokes parameters in degenerate states. Definitions The Stokes parameters are defined by where the subscripts refer to three different bases of the space of Jones vectors: the standard Cartesian basis (), a Cartesian basis rotated by 45° (), and a circular basis (). The circular basis is defined so that , . The symbols ⟨⋅⟩ represent expectation values. The light can be viewed as a random variable taking values in the space C2 of Jones vectors . Any given measurement yields a specific wave (with a specific phase, polarization ellipse, and magnitude), but it keeps flickering and wobbling between different outcomes. The expectation values are various averages of these outcomes. Intense, but unpolarized light will have I > 0 but Q = U = V = 0, reflecting that no polarization type predominates. A convincing waveform is depicted at the article on coherence. The opposite would be perfectly polarized light which, in addition, has a fixed, nonvarying amplitude—a pure sine curve. This is represented by a random variable with only a single possible value, say . In this case one may replace the brackets by absolute value bars, obtaining a well-defined quadratic map from the Jones vectors to the corresponding Stokes vectors; more convenient forms are given below. The map takes its image in the cone defined by |I |2 = |Q |2 + |U |2 + |V |2, where the purity of the state satisfies p = 1 (see below). The next figure shows how the signs of the Stokes parameters are determined by the helicity and the orientation of the semi-major axis of the polarization ellipse. Representations in fixed bases In a fixed () basis, the Stokes parameters when using an increasing phase convention are while for , they are and for , they are Properties For purely monochromatic coherent radiation, it follows from the above equations that whereas for the whole (non-coherent) beam radiation, the Stokes parameters are defined as averaged quantities, and the previous equation becomes an inequality: However, we can define a total polarization intensity , so that where is the total polarization fraction. Let us define the complex intensity of linear polarization to be Under a rotation of the polarization ellipse, it can be shown that and are invariant, but With these properties, the Stokes parameters may be thought of as constituting three generalized intensities: where is the total intensity, is the intensity of circular polarization, and is the intensity of linear polarization. The total intensity of polarization is , and the orientation and sense of rotation are given by Since and , we have Relation to the polarization ellipse In terms of the parameters of the polarization ellipse, the Stokes parameters are Inverting the previous equation gives Measurement The Stokes parameters (and thus the polarization of some electromagnetic radiation) can be directly determined from observation. 
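The relations between the polarization-ellipse parameters and the Stokes parameters can also be turned into a short calculation. The sketch below assumes the standard spherical-coordinate form S1 = I p cos 2ψ cos 2χ, S2 = I p sin 2ψ cos 2χ, S3 = I p sin 2χ, with ψ the ellipse orientation and χ the ellipticity angle, and reproduces some of the example vectors listed earlier; these formulas are a common convention rather than a quotation from this article.

```python
import numpy as np

def stokes_from_ellipse(I, p, psi, chi):
    """Stokes vector from total intensity I, degree of polarization p,
    ellipse orientation psi and ellipticity angle chi (both in radians)."""
    return np.array([
        I,
        I * p * np.cos(2 * psi) * np.cos(2 * chi),
        I * p * np.sin(2 * psi) * np.cos(2 * chi),
        I * p * np.sin(2 * chi),
    ])

# A few of the canonical states tabulated earlier (unit intensity):
print(np.round(stokes_from_ellipse(1, 1, 0, 0), 3))           # horizontal linear -> [1, 1, 0, 0]
print(np.round(stokes_from_ellipse(1, 1, np.pi / 4, 0), 3))   # +45 degree linear -> [1, 0, 1, 0]
print(np.round(stokes_from_ellipse(1, 1, 0, np.pi / 4), 3))   # circular          -> [1, 0, 0, 1]
print(np.round(stokes_from_ellipse(1, 0, 0, 0), 3))           # unpolarized       -> [1, 0, 0, 0]
```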
Using a linear polarizer and a quarter-wave plate, the following system of equations relating the Stokes parameters to measured intensity can be obtained: where is the irradiance of the radiation at a point when the linear polarizer is rotated at an angle of , and similarly is the irradiance at a point when the quarter-wave plate is rotated at an angle of . A system can be implemented using both plates at once at different angles to measure the parameters. This can give a more accurate measure of the relative magnitudes of the parameters (which is often the main result desired) due to all parameters being affected by the same losses. Relationship to Hermitian operators and quantum mixed states From a geometric and algebraic point of view, the Stokes parameters stand in one-to-one correspondence with the closed, convex, 4-real-dimensional cone of nonnegative Hermitian operators on the Hilbert space C2. The parameter I serves as the trace of the operator, whereas the entries of the matrix of the operator are simple linear functions of the four parameters I, Q, U, V, serving as coefficients in a linear combination of the Stokes operators. The eigenvalues and eigenvectors of the operator can be calculated from the polarization ellipse parameters I, p, ψ, χ. The Stokes parameters with I set equal to 1 (i.e. the trace 1 operators) are in one-to-one correspondence with the closed unit 3-dimensional ball of mixed states (or density operators) of the quantum space C2, whose boundary is the Bloch sphere. The Jones vectors correspond to the underlying space C2, that is, the (unnormalized) pure states of the same system. Note that the overall phase (i.e. the common phase factor between the two component waves on the two perpendicular polarization axes) is lost when passing from a pure state |φ⟩ to the corresponding mixed state |φ⟩⟨φ|, just as it is lost when passing from a Jones vector to the corresponding Stokes vector. In the basis of horizontal polarization state and vertical polarization state , the +45° linear polarization state is , the -45° linear polarization state is , the left hand circular polarization state is , and the right hand circular polarization state is . It's easy to see that these states are the eigenvectors of Pauli matrices, and that the normalized Stokes parameters (U/I, V/I, Q/I) correspond to the coordinates of the Bloch vector (, , ). Equivalently, we have , , , where is the density matrix of the mixed state. Generally, a linear polarization at angle θ has a pure quantum state ; therefore, the transmittance of a linear polarizer/analyzer at angle θ for a mixed state light source with density matrix is , with a maximum transmittance of at if , or at if ; the minimum transmittance of is reached at the perpendicular to the maximum transmittance direction. Here, the ratio of maximum transmittance to minimum transmittance is defined as the extinction ratio , where the degree of linear polarization is . Equivalently, the formula for the transmittance can be rewritten as , which is an extended form of Malus's law; here, are both non-negative, and is related to the extinction ratio by . Two of the normalized Stokes parameters can also be calculated by . It's also worth noting that a rotation of polarization axis by angle θ corresponds to the Bloch sphere rotation operator . For example, the horizontal polarization state would rotate to . 
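The correspondence between the normalized Stokes parameters and the Bloch vector, and the extended form of Malus's law discussed above, can be checked numerically. The sketch below builds the 2×2 density matrix from a Stokes vector using the (U/I, V/I, Q/I) ↔ (x, y, z) mapping described in this section, and compares the analyzer transmittance ⟨θ|ρ|θ⟩ with the closed form ½(1 + (Q cos 2θ + U sin 2θ)/I). The example Stokes vector is arbitrary, and the identification of particular Pauli matrices with particular Stokes components follows the convention stated here, so treat it as an assumption.

```python
import numpy as np

# Pauli matrices in the {|H>, |V>} basis.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def density_from_stokes(S):
    """Density matrix from a Stokes vector (I, Q, U, V), using the
    (U/I, V/I, Q/I) -> (x, y, z) Bloch-vector correspondence above."""
    I, Q, U, V = S
    return 0.5 * (id2 + (U / I) * sx + (V / I) * sy + (Q / I) * sz)

def analyzer_transmittance(S, theta):
    """Transmittance <theta|rho|theta> of an ideal linear polarizer at angle theta."""
    rho = density_from_stokes(S)
    ket = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    return float(np.real(ket.conj() @ rho @ ket))

S = np.array([1.0, 0.3, 0.5, 0.2])   # an arbitrary, partially polarized state
for deg in (0, 45, 90, 135):
    theta = np.deg2rad(deg)
    closed_form = 0.5 * (1 + (S[1] * np.cos(2 * theta) + S[2] * np.sin(2 * theta)) / S[0])
    print(deg, round(analyzer_transmittance(S, theta), 6), round(closed_form, 6))
```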
The effect of a quarter-wave plate aligned to the horizontal axis is described by , or equivalently the Phase gate S, and the resulting Bloch vector becomes . With this configuration, if we perform the rotating analyzer method to measure the extinction ratio, we will be able to calculate and also verify . For this method to work, the fast axis and the slow axis of the waveplate must be aligned with the reference directions for the basis states. The effect of a quarter-wave plate rotated by angle θ can be determined by Rodrigues' rotation formula as , with . The transmittance of the resulting light through a linear polarizer (analyzer plate) along the horizontal axis can be calculated using the same Rodrigues' rotation formula and focusing on its components on and : The above expression is the theory basis of many polarimeters. For unpolarized light, T=1/2 is a constant. For purely circularly polarized light, T has a sinusoidal dependence on angle θ with a period of 180 degrees, and can reach absolute extinction where T=0. For purely linearly polarized light, T has a sinusoidal dependence on angle θ with a period of 90 degrees, and absolute extinction is only reachable when the original light's polarization is at 90 degrees from the polarizer (i.e. ). In this configuration, and , with a maximum of 1/2 at θ=45°, and an extinction point at θ=0°. This result can be used to precisely determine the fast or slow axis of a quarter-wave plate, for example, by using a polarizing beam splitter to obtain a linearly polarized light aligned to the analyzer plate and rotating the quarter-wave plate in between. Similarly, the effect of a half-wave plate rotated by angle θ is described by , which transforms the density matrix to: The above expression demonstrates that if the original light is of pure linear polarization (i.e. ), the resulting light after the half-wave plate is still of pure linear polariztion (i.e. without component) with a rotated major axis. Such rotation of the linear polarization has a sinusoidal dependence on angle θ with a period of 90 degrees. See also Mueller calculus Jones calculus Polarization (waves) Rayleigh Sky Model Stokes operators Polarization mixing Notes References Jackson, J. D., Classical Electrodynamics, John Wiley & Sons, 1999. Stone, J. M., Radiation and Optics, McGraw-Hill, 1963. Collett, E., Field Guide to Polarization, SPIE Field Guides vol. FG05, SPIE, 2005. . E. Hecht, Optics, 2nd ed., Addison-Wesley (1987). . Polarization (waves) Radiometry
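Since the rotating quarter-wave-plate arrangement described above underlies many practical polarimeters, the following Python sketch simulates it with Mueller matrices and recovers the input Stokes vector from the detected intensity by least squares. The Mueller matrices and the sign convention chosen for the quarter-wave retardance are one common choice and are stated explicitly in the code; with a different convention the signs multiplying S2 or S3 flip, so treat them as assumptions rather than the only possibility.

```python
import numpy as np

def rot_mueller(theta):
    """Mueller rotation matrix for rotating an element by angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0, 0, 1.0]])

# Quarter-wave plate with fast axis horizontal (one common sign convention).
QWP0 = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, -1, 0.0]])

# Ideal linear polarizer (analyzer) along the horizontal axis.
POL_H = 0.5 * np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0.0]])

def detected_intensity(S, theta):
    """Intensity behind the analyzer after a QWP whose fast axis is at angle theta."""
    qwp = rot_mueller(theta) @ QWP0 @ rot_mueller(-theta)
    return (POL_H @ qwp @ S)[0]

# Simulate a rotating-QWP polarimeter and recover the Stokes vector by least squares.
S_true = np.array([1.0, 0.1, 0.4, -0.3])
angles = np.deg2rad(np.arange(0, 180, 5))
measurements = np.array([detected_intensity(S_true, t) for t in angles])

# With this convention, I(theta) = (S0 + S1 cos^2(2t) + S2 sin(2t)cos(2t) - S3 sin(2t)) / 2.
design = 0.5 * np.column_stack([np.ones_like(angles),
                                np.cos(2 * angles) ** 2,
                                np.sin(2 * angles) * np.cos(2 * angles),
                                -np.sin(2 * angles)])
S_fit, *_ = np.linalg.lstsq(design, measurements, rcond=None)
print(np.round(S_fit, 6))   # recovers S_true
```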
Stokes parameters
Physics,Engineering
2,743
1,130,651
https://en.wikipedia.org/wiki/Beagle%203
Beagle 3 (also called Beagle 2: Evolution) was a proposed Mars lander mission to search for life on Mars, past or present. Beagle 3 was the proposed successor to the failed British Beagle 2 Mars lander, with which communication was lost. Beagle 3 was promoted by Professor Colin Pillinger, lead scientist on Beagle 2. EADS Astrium also played a part in funding and early development of the project. Pillinger hoped to launch up to two landing craft from an orbiter in 2009 as part of the European Space Agency's Aurora Programme. The putative Beagle 3 would have been named after the ship HMS Beagle that took Charles Darwin around the world. After the Beagle 3 project was rejected by ESA in 2004, Pillinger proposed to NASA that the lander hitch a ride with the Mars Science Laboratory mission, but the proposal was not accepted. One of the goals of Beagle 3 was to support the ESA Aurora programme if chosen.
Proposed payload
Advanced solar cell technology, meaning two disc-shaped solar arrays (as opposed to the previous four)
A gas analysis package (Gap) to test soil and rock for biosignatures and biomolecules
A powerful X-band (8.0 to 12.0 GHz) antenna on the vehicle's main shell for a direct vehicle-to-Earth radio link, to provide real-time descent data
New lithium-ion battery technology, able to operate at lower temperatures so that less power is wasted on heating, giving a possible 60% capacity boost over Beagle 2
Deadbeat airbags, which inflate just before touch-down and gently deflate during landing, so that the probe comes to a stop where it lands rather than bouncing to a stop
Life-chips, which detect the presence of amino acids
Impact of Beagle 2 discovery
Beagle 2 was found in 2015, overturning the earlier theory that it had entered unexpectedly thin air and struck Mars at high speed; this had never been confirmed because the lander transmitted no data during its descent. When Pillinger was trying to raise money for Beagle 3, the performance of Beagle 2's entry, descent and landing (EDL) system was therefore unknown. The discovery of Beagle 2 on the surface with several panels deployed showed that the EDL system must have worked, even though the lander never transmitted. One of the goals of Beagle 3 was to use lessons learned from Beagle 2 to improve the spacecraft and to take advantage of newer technology; however, since it was not clear what had happened to Beagle 2, it was not obvious what should be changed.
See also
Beagle 2
ExoMars
ExoMars Schiaparelli EDM lander
Life on Mars
Mars Reconnaissance Orbiter
Mars Science Laboratory rover
British space programme
References
External links
Missions to Mars
Cancelled astrobiology space missions
Space programme of the United Kingdom
Cancelled space probes
Astronomy projects
Beagle 3
Astronomy
596
9,503,180
https://en.wikipedia.org/wiki/Generation%20time
In population biology and demography, generation time is the average time between two consecutive generations in the lineages of a population. In human populations, generation time typically has ranged from 20 to 30 years, with wide variation based on gender and society. Historians sometimes use this to date events, by converting generations into years to obtain rough estimates of time. Definitions and corresponding formulas The existing definitions of generation time fall into two categories: those that treat generation time as a renewal time of the population, and those that focus on the distance between individuals of one generation and the next. Below are the three most commonly used definitions: Time for a population to grow by a factor of its net reproductive rate The net reproductive rate is the number of offspring an individual is expected to produce during its lifetime: means demographic equilibrium. One may then define the generation time as the time it takes for the population to increase by a factor of . For example, in microbiology, a population of cells undergoing exponential growth by mitosis replaces each cell by two daughter cells, so that and is the population doubling time. If the population grows with exponential growth rate , so the population size at time is given by , then generation time is given by . That is, is such that , i.e. . Average difference in age between parent and offspring This definition is a measure of the distance between generations rather than a renewal time of the population. Since many demographic models are female-based (that is, they only take females into account), this definition is often expressed as a mother-daughter distance (the "average age of mothers at birth of their daughters"). However, it is also possible to define a father-son distance (average age of fathers at the birth of their sons) or not to take sex into account at all in the definition. In age-structured population models, an expression is given by: , where is the growth rate of the population, is the survivorship function (probability that an individual survives to age ) and the maternity function (birth function, age-specific fertility). For matrix population models, there is a general formula: , where is the discrete-time growth rate of the population, is its fertility matrix, its reproductive value (row-vector) and its stable stage distribution (column-vector); the are the elasticities of to the fertilities. Age at which members of a cohort are expected to reproduce This definition is very similar to the previous one but the population need not be at its stable age distribution. Moreover, it can be computed for different cohorts and thus provides more information about the generation time in the population. This measure is given by: . Indeed, the numerator is the sum of the ages at which a member of the cohort reproduces, and the denominator is R0, the average number of offspring it produces. References Ecology Population dynamics Time in life
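The three definitions above can be compared on a small worked example. The Python sketch below uses a hypothetical discrete life table (the survivorship and fertility values are invented for illustration) and assumes the standard formulas: the renewal-time definition T = ln(R0)/r, the mean age of mothers at the stable age structure T = Σ a e^(−ra) ℓ(a) m(a), and the cohort measure T = Σ a ℓ(a) m(a) / R0, with r obtained from the Euler–Lotka equation Σ e^(−ra) ℓ(a) m(a) = 1. These discrete-time formulas are assumptions of the sketch, stated here explicitly rather than quoted from the article.

```python
import numpy as np

# Hypothetical discrete life table: survivorship l(a) to age a and
# age-specific fertility m(a), for ages a = 1, 2, 3, 4 (years).
ages = np.array([1.0, 2.0, 3.0, 4.0])
l    = np.array([0.8, 0.5, 0.3, 0.1])   # probability of surviving to age a
m    = np.array([0.0, 1.2, 1.5, 1.0])   # expected offspring at age a

R0 = np.sum(l * m)                      # net reproductive rate

# Solve the Euler-Lotka equation sum_a exp(-r a) l(a) m(a) = 1 for r (bisection).
def euler_lotka(r):
    return np.sum(np.exp(-r * ages) * l * m) - 1.0

lo, hi = -2.0, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if euler_lotka(mid) > 0:
        lo = mid        # left-hand side still too large, so r must be bigger
    else:
        hi = mid
r = 0.5 * (lo + hi)

# Three common generation-time measures described above:
T_renewal = np.log(R0) / r                              # time to grow by a factor R0
T_mother  = np.sum(ages * np.exp(-r * ages) * l * m)    # mean age of mothers (stable population)
T_cohort  = np.sum(ages * l * m) / R0                   # mean age of reproduction in a cohort

print(f"R0 = {R0:.3f}, r = {r:.4f}")
print(f"T (renewal)    = {T_renewal:.3f}")
print(f"T (mother-age) = {T_mother:.3f}")
print(f"T (cohort)     = {T_cohort:.3f}")
```

In general the three measures do not coincide, which is exactly why the article distinguishes them.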
Generation time
Physics,Biology
589
42,681,781
https://en.wikipedia.org/wiki/Principal%20factor
In algebra, the principal factor of a 𝒥-class J of a semigroup S is equal to J if J is the kernel of S, and to J ∪ {0} (with products that fall outside J set equal to 0) otherwise.
Properties
A principal factor is a simple, 0-simple or null semigroup.
References
Further reading
Semigroup theory
Principal factor
Mathematics
59
48,261,164
https://en.wikipedia.org/wiki/K2-3b
K2-3b, also known as EPIC 201367065 b, is an exoplanet orbiting the red dwarf K2-3 every 10 days. It is the largest and most massive planet of the K2-3 system, with about 2.1 times the radius of Earth and about 5 times the mass. Its density of about 3.1 g/cm3 may indicate a composition of almost entirely water, or a hydrogen envelope comprising about 0.7% of the planet's mass. References Exoplanets discovered in 2015 Transiting exoplanets K2-3 3 Leo (constellation)
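A quick back-of-the-envelope check shows how the quoted density follows from the quoted mass and radius; the sketch below assumes standard values for the Earth's mass and radius and treats the planet as a uniform sphere.

```python
import math

# Rough consistency check of the quoted bulk density from the quoted
# mass (~5 Earth masses) and radius (~2.1 Earth radii).
M_EARTH = 5.972e24      # kg
R_EARTH = 6.371e6       # m

mass   = 5.0 * M_EARTH
radius = 2.1 * R_EARTH
volume = 4.0 / 3.0 * math.pi * radius ** 3

density = mass / volume                  # kg/m^3
print(f"{density / 1000:.2f} g/cm^3")    # ~3.0 g/cm^3, close to the quoted ~3.1
```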
K2-3b
Astronomy
129
5,652,850
https://en.wikipedia.org/wiki/Biodegradable%20polythene%20film
Polyethylene or polythene film biodegrades naturally, albeit over a long period of time. Methods are available to make it more degradable under certain conditions of sunlight, moisture, oxygen, and composting and enhancement of biodegradation by reducing the hydrophobic polymer and increasing hydrophilic properties. If traditional polyethylene film is littered it can be unsightly, and a hazard to wildlife. Some people believe that making plastic shopping bags biodegradable is one way to try to allow the open litter to degrade. Plastic recycling improves usage of resources. Biodegradable films need to be kept away from the usual recycling stream to prevent contaminating the polymers to be recycled. If disposed of in a sanitary landfill, most traditional plastics do not readily decompose. The conditions of a sealed landfill additionally deter degradation of biodegradable polymers. Polyethylene is a polymer consisting of long chains of the monomer ethylene (IUPAC name ethene). The recommended scientific name polyethene is systematically derived from the scientific name of the monomer.[1] [2] In certain circumstances it is useful to use a structure–based nomenclature. In such cases IUPAC recommends poly(methylene).[2] The difference is due to the opening up of the monomer's double bond upon polymerisation. In the polymer industry the name is sometimes shortened to PE in a manner similar to that by which other polymers like polypropylene and polystyrene are shortened to PP and PS respectively. In the United Kingdom the polymer is commonly called polythene, although this is not recognised scientifically. The ethene molecule (known almost universally by its common name ethylene) C2H4 is CH2=CH2, Two CH2 groups connected by a double bond, thus: Polyethylene is created through polymerization of ethene. It can be produced through radical polymerization, anionic addition polymerization, ion coordination polymerization or cationic addition polymerization. This is because ethene does not have any substituent groups that influence the stability of the propagation head of the polymer. Each of these methods results in a different type of polyethylene. Alternatives to biodegradable polythene film Polythene or polyethylene film will naturally fragment and biodegrade, but it can take many decades to do this. There are two methods to resolve this problem. One is to modify the carbon chain of polyethylene with an additive to improve its degradability and then its biodegradability; the other is to make a film with similar properties to polyethylene from a biodegradable substance such as starch. The latter are however much more expensive. Starch based or biobased (hydrodegradable) film This type is made from corn (maize), potatoes or wheat. This form of biodegradable film meets the ASTM standard (American Standard for Testing Materials) and European Norm EN13432 for compostability as it degrades at least 90% within 90 days or less at 140 degrees F. However, actual products made with this type of film may not meet those standards. Examples of polymers made from starch Polycaprolactone (PCL) Polyvinyl alcohol (PVA) Polylactic acid (PLA) The heat, moisture and aeration in an industrial composting plant are required for this type of film to biodegrade, so it will not therefore readily degrade if littered in the environment. Pros & cons of starch based film/bag Pros It is "compostable" under industrial conditions. Reduced fossil fuel content (depending on loading of filler.) 
Cons Is more expensive than its non-biodegradable counterpart Source of starch can be problematic (competition against food use, rainforests being cleared to grow crops for bioplastics) Fossil fuels are burned and CO2 produced in the agricultural production process. Poorer mechanical strength than additive based example – filling a starch bag with wet leaves and placing it curbside can result in the bottom falling out when a haulier picks it up. Often not strong enough for use in high-speed machines Degradation in a sealed landfill takes at least 6 months. Emits CO2 in aerobic conditions and methane under anaerobic conditions Limited Shelf life. Conditions must be respected for stockage. If mixed with other plastics for recycling, the recycling process is compromised. Typical applications Carrier bag, refusal sacks, vegetable bags, food films, agricultural films, mailing films. However, these applications are still very limited compared to those of petroleum based plastic films. Additive based Additives can be added to conventional polymers to make them either oxodegradable or more hydrophilic to facilitate microbial attack. Oxodegradable These films are made by incorporating an additive within normal polymers to provide an oxidative and then a biological mechanism to degrade them. This typically takes 6 months to 1 year in the environment with adequate exposure to oxygen Degradation is a two-stage process; first the plastic is converted by reaction with oxygen (light, heat and/or stress accelerates the process but is not essential) to hydrophilic low molecular-weight materials and then these smaller oxidized molecules are biodegraded, i.e. converted into carbon dioxide, water and biomass by naturally occurring microorganisms. Commercial competitors and their trade associations allege that the process of biodegradation stops at a certain point, leaving fragments, but they have never established why or at what point. In fact Oxo-biodegradation of polymer material has been studied in depth at the Technical Research Institute of Sweden and the Swedish University of Agricultural Sciences. A peer-reviewed report of the work was published in Vol 96 of the journal of Polymer Degradation & Stability (2011) at page 919–928. It shows 91% biodegradation in a soil environment within 24 months, when tested in accordance with ISO 17556. This is similar to the breakdown of woody plant material where lignin is broken down and forms a humus component improving the soil quality. There is however a lot of controversy about these types of bags. The complete biodegradation is disputed and claimed not to take place. Many countries are now also thinking to ban this type of bags altogether Enhancing hydrophilicity of the polymer These films are inherently biodegradable over a long period of time. Enhancement of the polymer by adding in additives to change the hydrophobic nature of the resin to slightly hydrophilic allows microorganisms to consume the macromolecules of the product, these products often are confused with oxobiodegradable products, but work in a different way. Enhancing of the hydrophilicity of the polymer allows fungus and bacteria to consume the polymer at a faster rate utilizing the carbon inside the polymer chain for energy. These additives attract certain microorganisms found in nature and many tests have been completed on the mixing of synthetic and biobased materials which are inherently biodegradable for enhancing the biodegradability of synthetic polymers that are not as fast to biodegrade. 
Pros and cons of additive based film/bag Pros Much cheaper than starch-based plastics Can be made with normal machinery, and can be used in high speed machines, so no need to change suppliers and no loss of jobs Materials are well known Does not compete against food production These films look, act and perform just like their non-degradable counterparts, during their programmed service-life but then break down if discarded. They can be recycled with normal plastics. They are certified non-toxic, and safe for food-contact Some bags degrade at about the same rate as a leaf. In fact, when used as bin liners, bags can start degrading after three or four days of being in the bin. Cons Degradation depends on access to air Not designed to degrade in landfill, but can be safely landfilled. Will degrade if oxygen is present, but will NOT emit methane in landfill European or American (EN13432 D6400)Standards on compostable products are not appropriate, as not designed for composting. They should be tested according to ASTM D6954 or (as from 1 Jan 1010) UAE norm 5009:2009 They are not suitable for PET or PVC Precise rate of degradation/biodegradation cannot be predicted, but will be faster than nature's wastes such as straw or twigs, and much faster than normal plastic Like normal plastics they are made from a by-product of oil or natural gas If mixed with other plastics for recycling, the recycling process is compromised. Typical applications Trash Bags, Garbage Bags, Compost Bags, Carrier bag, Agricultural Film, Mulch Film, produce bags, - in fact all forms of short-life plastic film packaging See also Biodegradable plastic Bioplastic Plastic bag Plastic recycling Packaging Photodegradation References BBC News: "All Tesco bags 'to be degradable" 10 May 2006 http://news.bbc.co.uk/1/hi/uk/4758419.stm BBC News: "Degradable carrier bags launched" 2 September 2002 http://news.bbc.co.uk/1/hi/uk/2229698.stm Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, Biodegradable materials
Biodegradable polythene film
Physics,Chemistry
1,979
197,819
https://en.wikipedia.org/wiki/Dollis%20Hill
Dollis Hill is an area in northwest London, which consists of the streets surrounding the Gladstone Park. It is served by a London Underground station, Dollis Hill, on the Jubilee line, providing good links to central London. It is in the London Borough of Brent, close to Willesden Green, Neasden and Cricklewood, and is in the postal districts of NW2 and NW10 The area is mainly residential (Edwardian terraced and 1920s/30s semi-detached houses) with a restaurant, greengrocer and convenience stores near the underground station. The Dollis Hill ward has the highest Irish population in London. Dollis Hill played a part in the Second World War as the code-breaking computer used at Bletchley Park was built at the Post Office Research Station in Dollis Hill and the rarely used alternative Cabinet War Room bunker for Winston Churchill's government was dug underground here. History The Dollis Hill Estate was formed in the early 19th century, when the Finch family bought up a number of farms in the area to form a single estate. Dollis Hill House itself was built in the 1820s. It was later occupied by Lord Aberdeen who often had Prime Minister William Ewart Gladstone to stay as a guest. In 1901, a new public park was created the Gladstone Park, named after the former Prime Minister. An underground station, Dollis Hill Underground station, was opened on 1 October 1909 as part of the Metropolitan line, now on the Jubilee line. Between the park and the underground station, Edwardian terraced houses were built at this time on a grid with names starting with letters in alphabetical order (with some letters missing) from Aberdeen to Normanby. The first railway in the area was the Dudding Hill Line, opened in 1875 by the Midland Railway to connect its Midland Main Line and Cricklewood goods yard in the east to other lines to the southwest. The Dudden Hill station on the line closed for passengers in 1902, but the line still carried freight. In World War I, between 1917 and 1921, the tank design team (The Mechanical Warfare Supply Department of the Ministry of Munitions) responsible for the new Anglo-American or Liberty tank, Mark VIII was located here. Early trials of some of the first military tanks were conducted in Dollis Hill. Images of the tanks in Dollis Hill are held at The Imperial War Museum, London. The Post Office Research Station was built in 1921. The code-breaking Colossus computer, used at Bletchley Park during the Second World War, was built at here by a team led by Tommy Flowers. The station was relocated to Martlesham Heath at the end of the 1970s. The Post Office Research Station building has now been converted into 62 flats and is now known as 'Chartwell Court', with an access road called 'Flowers Close'. The alternative Cabinet War Room bunker for Winston Churchill's World War II government code-named Paddock is located under a corner of the former Post Office Research Station in Brook Road. Medium-sized, semi-detached houses were built to the east of this area between 1927 and 1935. The Grunwick dispute in the late 1970s concerned trade union recognition, working conditions and employment law. It centred on the Grunwick Film Processing Laboratories in Chapter Road, Dollis Hill. The protracted dispute became a cause célèbre in the trade union movement at the time, with several acrimonious interactions between large numbers of police and mass pickets. Demographics Dollis Hill has a very diverse mix of ethnicities and nationalities. 
The largest single ethnic group in the Dollis Hill ward of the 2011 Census, White British, comprises 14.3% of the population. The next largest are Other White (13.7%), Indians (11.4%) and Black Africans (10.6%). 44.6% of people living in Dollis Hill were born in England in the 2011 census. The next most common countries of birth were Ireland (5.1%), India (4.3%), Pakistan (4%) and Somalia (3.9%). The main religious affiliations of Dollis Hill are Christians (43.9%), Muslims (31.3%), and Hindus (10.1%). Transport The area is served by a London Underground station, Dollis Hill, on the Jubilee line. There are regular services to Baker Street in 15 minutes and Westminster in 20 minutes. It is in Travelcard Zone 3, three stops from West Hampstead and within easy reach of Wembley Stadium. London Buses routes 226, 302 and N98 serve the area south of Gladstone Park. The area north of Gladstone Park is served by routes 182, 232, 245 and 332. There are a large number of buses that service nearby Willesden Green. Notable residents William Ewart Gladstone, the British Prime Minister, was a frequent visitor to Dollis Hill House in the late 19th century. The year after his death, 1899, Willesden Council acquired much of the Dollis Hill Estate for use as a public park, which was named Gladstone Park. Mark Twain stayed in Dollis Hill House in the summer of 1900. He wrote "Dollis Hill comes nearer to being a paradise than any other home I ever occupied." Eric Simms, the ornithologist, broadcaster and author, lived in Brook Road. His book, Birds of Town and Suburb (1975), was based on his studies of the birds in Dollis Hill. David Baddiel grew up in the area. Nihal Arthanayake, BBC Radio 5 Live DJ, resides here with his family. Mark Gottsche, London county team Gaelic footballer, lived on Chapter Road between 2012 and 2019. Ken Livingstone former MP and Mayor of London. In popular culture Next to Dollis Hill tube station was home to both The Future Sound of London's Earthbeat and 4 Hero's recording studios during the 1990s. Fictional references The fictional Dollis Hill Football Club features occasionally in the British satirical magazine Private Eye as arch-rivals to Neasden Football Club, with on at least one occasion the fictional Dollis Hill South council ward used in the irregular Those Election Results In Full mock section. George Bowling, hero of George Orwell's novel Coming Up for Air, lives in Ellesmere Road. References External links Willesden Local History Areas of London Districts of the London Borough of Brent History of computing in the United Kingdom Places formerly in Middlesex
Dollis Hill
Technology
1,322
24,949,222
https://en.wikipedia.org/wiki/C12H14N2
The molecular formula C12H14N2 (molar mass: 186.25 g/mol) may refer to:
Altinicline
Azepindole
Detomidine
N-(1-Naphthyl)ethylenediamine
PNU-181731
Tetrahydroharman
Calligonine, a major alkaloid constituent of the roots of Calligonum minimum and the bark of Elaeagnus angustifolia
(1S)-1-Methyl-2,3,4,9-tetrahydro-1H-pyrido-[3,4-b]-indole
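The quoted molar mass can be verified from standard atomic weights; the short sketch below does the arithmetic (the atomic-weight values are standard IUPAC figures rounded to three decimals).

```python
# Quick check of the quoted molar mass of C12H14N2 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007}
FORMULA = {"C": 12, "H": 14, "N": 2}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")   # ~186.26 g/mol, matching the quoted 186.25 to within rounding
```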
C12H14N2
Chemistry
145
42,574,504
https://en.wikipedia.org/wiki/Plasmid%20partition%20system
A plasmid partition system is a mechanism that ensures the stable inheritance of plasmids during bacterial cell division. Each plasmid has its own independent replication system, which controls the number of copies of the plasmid in a cell. The higher the copy number, the more likely the two daughter cells are to contain the plasmid. Generally, each plasmid molecule diffuses randomly, so the probability of producing a plasmid-less daughter cell is 2^(1−N), where N is the number of copies. For instance, if there are 2 copies of a plasmid in a cell, there is a 50% chance of producing one plasmid-less daughter cell. However, high-copy-number plasmids impose a cost on the host cell. This metabolic burden is lower for low-copy plasmids, but those have a higher probability of plasmid loss after a few generations. To control vertical transmission of plasmids, in addition to controlled-replication systems, bacterial plasmids use different maintenance strategies, such as multimer resolution systems, post-segregational killing systems (addiction modules), and partition systems.
General properties of partition systems
Plasmid copies are paired around a centromere-like site and then separated into the two daughter cells. Partition systems involve three elements, organized in an auto-regulated operon:
A centromere-like DNA site
Centromere-binding proteins (CBP)
The motor protein
The centromere-like DNA site is required in cis for plasmid stability. It often contains one or more inverted repeats which are recognized by multiple CBPs. This forms a nucleoprotein complex termed the partition complex. This complex recruits the motor protein, which is a nucleotide triphosphatase (NTPase). The NTPase uses energy from NTP binding and hydrolysis to directly or indirectly move and attach plasmids to specific host locations (e.g. opposite bacterial cell poles). The partition systems are divided into four types, based primarily on the type of NTPase:
Type I: Walker-type P-loop ATPase
Type II: actin-like ATPase
Type III: tubulin-like GTPase
Type IV: no NTPase
Type I partition system
This system is also used by most bacteria for chromosome segregation. Type I partition systems are composed of an ATPase which contains Walker motifs and a CBP which is structurally distinct between types Ia and Ib. The ATPases and CBPs of type Ia are longer than those of type Ib, but both CBPs contain an arginine finger in their N-terminal part. ParA proteins from different plasmids and bacterial species show 25 to 30% sequence identity to the ParA protein of the plasmid P1. Type I partition uses a "diffusion-ratchet" mechanism, which works as follows:
Dimers of ParA-ATP dynamically bind to nucleoid DNA
ParA in its ATP-bound state interacts with ParB bound to parS
ParB bound to parS stimulates the release of ParA from the nucleoid region surrounding the plasmid
The plasmid then chases the resulting ParA gradient on the perimeter of the ParA-depleted region of the nucleoid
The ParA that was released from the nucleoid behind the plasmid's movement redistributes to other regions of the nucleoid after a delay
After plasmid replication, the sister copies segregate to opposite cell halves as they chase ParA on the nucleoid in opposite directions
There are likely to be differences in the details of type I mechanisms. Type I partition has been mathematically modelled with variations in the mechanism described above.
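The 2^(1−N) loss probability quoted above for randomly diffusing plasmids is easy to check by simulation. The Python sketch below assumes the simplest possible model, each of the N copies independently ending up in one of the two daughter cells with probability 1/2 and no partition system or copy-number correction acting, and compares the Monte-Carlo estimate with the formula.

```python
import random

def loss_probability(n_copies, trials=200_000, seed=1):
    """Monte-Carlo estimate of the chance that a division of a cell carrying
    n_copies randomly assorting plasmids produces at least one plasmid-free
    daughter (no partition system, no replication control)."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        to_daughter_1 = sum(rng.random() < 0.5 for _ in range(n_copies))
        if to_daughter_1 == 0 or to_daughter_1 == n_copies:
            losses += 1
    return losses / trials

for n in (1, 2, 4, 8):
    print(n, round(loss_probability(n), 4), "theory:", round(2 ** (1 - n), 4))
```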
Type Ia
The CBP of this type consists of three domains:
An N-terminal NTPase-binding domain
A central helix-turn-helix (HTH) domain
A C-terminal dimerization domain
Type Ib
The CBP of this type, also known as ParG, is composed of:
An N-terminal NTPase-binding domain
A ribbon-helix-helix (RHH) domain
For this type, the parS site is called parC.
Type II partition system
This system is the best understood of the plasmid partition systems. It is composed of an actin-like ATPase, ParM, and a CBP called ParR. The centromere-like site, parC, contains two sets of five 11-base-pair direct repeats separated by the parMR promoter. The amino-acid sequence identity between ParM and other actin-like ATPases can be as low as 15%. The mechanism of partition involved here is a pushing mechanism:
ParR binds to parC and pairs plasmids, forming a nucleoprotein complex, or partition complex
The partition complex serves as a nucleation point for the polymerization of ParM; the ParM-ATP complex inserts at this point and pushes the plasmids apart
The insertion leads to hydrolysis of the ParM-ATP complex, leading to depolymerization of the filament
At cell division, the plasmid copies are at opposite cell extremities and end up in the future daughter cells
The ParM filament is regulated by polymerization, which is allowed by the presence of the partition complex (ParR-parC), and by depolymerization, which is controlled by the ATPase activity of ParM.
Type III partition system
The type III partition system is the most recently discovered partition system. It is composed of a tubulin-like GTPase termed TubZ and a CBP termed TubR. Amino-acid sequence identity can be as low as 21% among TubZ proteins. The mechanism is similar to a treadmill mechanism:
Multiple TubR dimers bind to the centromere-like region stbDRs of the plasmid
TubR contacts a filament of the treadmilling TubZ polymer
TubZ subunits are lost from the minus end and added to the plus end
The TubR-plasmid complex is pulled along the growing polymer until it reaches the cell pole
Interaction with the membrane is likely to trigger the release of the plasmid
The net result is transport of the partition complex to the cell pole.
Other partition systems
R388 partition system
The partition system of the plasmid R388 has been found within the stb operon. This operon is composed of three genes, stbA, stbB and stbC. The StbA protein is a DNA-binding protein (identical to ParM) and is strictly required for the stability and intracellular positioning of plasmid R388 in E. coli. StbA binds a cis-acting sequence, the stbDRs. The StbA-stbDRs complex may be used to pair the plasmid with the host chromosome, indirectly using the bacterial partitioning system. The StbB protein has a Walker-type ATPase motif; it favors conjugation but is not required for plasmid stability over generations. StbC is an orphan protein of unknown function and does not seem to be implicated in either partitioning or conjugation. StbA and StbB have opposite but connected effects related to conjugation. This system has been proposed to be the type IV partition system. It is thought to be a derivative of the type I partition system, given the similar operon organization. This system represents the first evidence for a mechanistic interplay between plasmid segregation and conjugation processes.
pSK1 partition system
pSK1 is a plasmid from Staphylococcus aureus. This plasmid has a partition system determined by a single gene, par, previously known as orf245. This gene does not affect the plasmid copy number or the growth rate (excluding its implication in a post-segregational killing system). A centromere-like binding sequence is present upstream of the par gene, and is composed of seven direct repeats and one inverted repeat.
References
Molecular biology
Mobile genetic elements
Plasmids
Plasmid partition system
Chemistry,Biology
1,680
13,519,340
https://en.wikipedia.org/wiki/Magnesium%20pidolate
Magnesium pidolate, the magnesium salt of pidolic acid (also known as pyroglutamic acid), is a mineral supplement, which contains 8.6% magnesium w/w. External links Magnesium compounds Salts of carboxylic acids
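The quoted magnesium content can be checked from the formula Mg(C5H6NO3)2 with standard atomic weights; the sketch below does so and arrives at roughly 8.7% w/w, close to the stated 8.6% (an anhydrous salt is assumed, so a hydrated commercial product would contain somewhat less magnesium).

```python
# Check of the quoted magnesium content of magnesium pidolate, Mg(C5H6NO3)2,
# from standard atomic weights.
W = {"Mg": 24.305, "C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

pidolate = 5 * W["C"] + 6 * W["H"] + W["N"] + 3 * W["O"]   # C5H6NO3 anion
salt = W["Mg"] + 2 * pidolate

print(f"Mg fraction: {100 * W['Mg'] / salt:.1f}% w/w")     # ~8.7%, close to the quoted 8.6%
```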
Magnesium pidolate
Chemistry
55
53,736,045
https://en.wikipedia.org/wiki/ESKAPE
ESKAPE is an acronym comprising the scientific names of six highly virulent and antibiotic-resistant bacterial pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp. The acronym is sometimes extended to ESKAPEE to include Escherichia coli. This group of Gram-positive and Gram-negative bacteria can evade or 'escape' commonly used antibiotics due to their increasing multi-drug resistance (MDR). As a result, throughout the world, they are the major cause of life-threatening nosocomial or hospital-acquired infections in immunocompromised and critically ill patients, who are most at risk. P. aeruginosa and S. aureus are some of the most ubiquitous pathogens in biofilms found in healthcare. P. aeruginosa is a Gram-negative, rod-shaped bacterium, commonly found in the gut flora, soil, and water, that can be spread directly or indirectly to patients in healthcare settings. The pathogen can also be spread in other locations through contamination, including surfaces, equipment, and hands. The opportunistic pathogen can cause hospitalized patients to have infections in the lungs (as pneumonia), blood, urinary tract, and in other body regions after surgery. S. aureus is a Gram-positive, cocci-shaped bacterium, residing in the environment and on the skin and nose of many healthy individuals. The bacterium can cause skin and bone infections, pneumonia, and other types of potentially serious infections if it enters the body. S. aureus has also gained resistance to many antibiotic treatments, making healing difficult. Because of natural and unnatural selective pressures and factors, antibiotic resistance in bacteria usually emerges through genetic mutation or through the acquisition of antibiotic-resistance genes (ARGs) by horizontal gene transfer - a genetic exchange process by which antibiotic resistance can spread. One of the main reasons for the rise in selection for antibiotic resistance (ABR) and MDR, which led to the emergence of the ESKAPE bacteria, is the indiscriminate overuse of antibiotics not only in healthcare but also in the animal and agricultural sectors. Other key factors include misuse and inadequate adherence to treatment guidelines. Due to these factors, fewer and fewer antibiotic treatments are effective in eradicating ABR and MDR bacterial infections, while at the same time few new antibiotics are being developed, due in part to a lack of funding. These ESKAPE pathogens, along with other antibiotic-resistant bacteria, form an interwoven global health threat and are being addressed from a more holistic and One Health perspective. Prevalence From a global perspective, the emergence of multidrug-resistant (MDR) bacteria is responsible for about 15.5% of hospital-acquired infection cases, and there are currently about 0.7 million deaths per year from drug-resistant disease. Specifically, the opportunistic nosocomial ESKAPE pathogens are associated with the highest risk of mortality, and the majority of their isolates are MDR. Two pathogens within the ESKAPE group, carbapenem-resistant Acinetobacter and carbapenem-resistant Enterobacteriaceae, are currently in the top five of the antibiotic-resistant bacteria on the CDC's 2019 urgent threat list, and the other four pathogens making up the group are on the serious threat list. In addition, the World Health Organization (WHO) created a global priority pathogen list (PPL) of ABR bacteria with the goal of prioritizing research and creating new effective antibiotic treatments.
The global PPL classifies pathogens into three categories (critical, high, and medium), placing four of the ESKAPE pathogens on the critical priority list and the other two on the high priority list. Characteristics ESKAPE pathogens are differentiated from other pathogens by their increased resistance to commonly used antibiotics such as penicillin, vancomycin, carbapenems, and more. This increased resistance, combined with the clinical significance of these bacteria in the medical field, results in a necessity to understand their mechanisms of resistance and combat them with novel antibiotics. Common mechanisms of resistance include the production of enzymes that attack the structure of antibiotics (for example, β-lactamases inactivating β-lactam antibiotics), modification of the site that the antibiotic targets so that it can no longer bind properly, efflux pumps, and biofilm production. Efflux pumps are a feature of the membrane of Gram-negative bacteria that allows them to constantly pump out foreign material, including antibiotics, so that the inside of the cell never contains a high enough concentration of the drug to have an effect. Biofilms are a mixture of diverse microbial communities and polymers that protect the bacteria from antibiotic treatment by acting as a physical barrier. Clinical threats Due to their heightened resistance to frequently used antibiotics, these pathogens pose an additional threat to the safety of the general population, particularly those who frequently interact with hospital environments, as they most commonly contribute to hospital-acquired infections (HAIs). The antimicrobial resistance profiles of these pathogens vary; however, they arise from similar causes. One common cause of antibiotic resistance is incorrect dosing. When a sub-therapeutic dose is prescribed, or a patient chooses to use less of their prescribed antibiotic, bacteria are given the opportunity to adapt to the treatment. At lower doses, or when a course of antibiotics is not completed, certain strains of the bacteria develop drug resistance through the process of natural selection. This is due to the random genetic mutations that are constantly occurring in many forms of living organisms, bacteria and humans included. Natural selection supports the persistence of strains of bacteria that have developed a certain mutation that allows them to survive. Some strains are also able to participate in inter-strain horizontal gene transfer, allowing them to pass resistance genes from one pathogen to another. This can be particularly problematic in nosocomial infections, where bacteria are constantly exposed to antibiotics; those benefiting from resistance as a result of random genetic mutations can share this resistance with bacteria in the area that have not yet developed it on their own. Bacterial profiles Enterococcus faecium Enterococcus faecium is a Gram-positive, spherically shaped (coccus) bacterium that tends to occur in pairs or chains and is most commonly involved in HAIs in immunocompromised patients. It often exhibits resistance to β-lactam antibiotics, including penicillin, and to other last-resort antibiotics. There has also been a rise in vancomycin-resistant enterococci (VRE) strains, including an increase in E. faecium resistance to vancomycin, particularly VanA-type resistance.
These vancomycin-resistant strains display a profound ability to develop and share their resistance through horizontal gene transfer, as well as to code for virulence factors that control phenotypes. These virulence phenotypes range from thicker biofilms to the ability to grow in a variety of environments, including on medical devices within the body such as urinary catheters and prosthetic heart valves. The thicker biofilms act as a "mechanical and biochemical shield" that protects the bacteria from antibiotics and are the most effective protective mechanism that bacteria have against treatment. Staphylococcus aureus Staphylococcus aureus is a Gram-positive, round-shaped (coccus) bacterium that is commonly found as part of the human skin microbiota and is typically not harmful in humans with non-compromised immune systems in these environments. However, S. aureus has the ability to cause infections when it enters parts of the body that it does not typically inhabit, such as wounds. Similar to E. faecium, S. aureus can also cause infections on implanted medical devices and form biofilms that make treatment with antibiotics more difficult. Additionally, approximately 25% of S. aureus strains secrete the TSST-1 exotoxin responsible for causing toxic shock syndrome. Methicillin-resistant S. aureus, or MRSA, includes strains distinct from other strains of S. aureus in that they have developed resistance to β-lactam antibiotics. Some also express an exotoxin that has been known to cause "necrotic hemorrhagic pneumonia" in those with an infection. Vancomycin and similar antibiotics are typically the first choices for treatment of MRSA infections; from this, vancomycin-resistant S. aureus, or VRSA (VISA for strains with intermediate resistance), has emerged. Klebsiella pneumoniae Klebsiella pneumoniae is a Gram-negative, rod-shaped (bacillus) bacterium that is particularly adept at accepting resistance genes through horizontal gene transfer. It is also commonly resistant to phagocytosis due to its thick biofilm with strong adhesion to neighboring cells. Certain strains have also developed β-lactamases that allow them to be resistant to many of the commonly used antibiotics, including carbapenems, which has led to the emergence of carbapenem-resistant K. pneumoniae (CRKP), for which there are very few antibiotics in development that can treat infection. Acinetobacter baumannii Acinetobacter baumannii is most common in hospitals, which has allowed it to develop resistance to all known antimicrobials. The Gram-negative, short-rod-shaped (coccobacillus) A. baumannii thrives in a number of unaccommodating environments due to its tolerance of a variety of temperatures, pHs, and nutrient levels, as well as of dry environments. The Gram-negative aspects of the membrane surface of A. baumannii, including the efflux pump and outer membrane, afford it a wider range of antibiotic resistance. Additionally, some problematic A. baumannii strains are able to acquire families of efflux pumps from other species, and are commonly first to develop new β-lactamases to improve β-lactam resistance. Pseudomonas aeruginosa The Gram-negative, rod-shaped (bacillus) bacterium Pseudomonas aeruginosa is a ubiquitous hydrocarbon degrader that is able to survive in extreme environments as well as in soil and many more common environments. Because of this versatility, it survives quite well in the lungs of patients with late-stage cystic fibrosis (CF).
It also benefits from the same Gram-negative resistance factors described above for A. baumannii. Mutants of P. aeruginosa with upregulated efflux pumps also exist, making it extremely difficult to find an effective antibiotic or detergent. There are also some multi-drug-resistant (MDR) strains of P. aeruginosa that express β-lactamases as well as upregulated efflux pumps, which can make treatment particularly difficult. Enterobacter Enterobacter encompasses a family of Gram-negative, rod-shaped (bacillus) species of bacteria. Some strains cause urinary tract infections (UTIs) and blood infections and are resistant to multiple drug therapies, which puts the human population in critical need of the development of novel and effective antibiotic treatments. Colistin and tigecycline are two of the only antibiotics currently used for treatment, and there are seemingly no other viable antibiotics in development. In some Enterobacter species, a 5-300-fold increase in minimum inhibitory concentration was observed upon exposure to several gradually increasing concentrations of benzalkonium chloride (BAC). Other Gram-negative bacteria (including Enterobacter, but also Acinetobacter, Pseudomonas, and Klebsiella species, and more) also displayed a similar ability to adapt to the disinfectant BAC. One Health problem The ESKAPE pathogens, and ABR bacteria in general, are an interconnected global health threat and a clear 'One Health' problem, meaning they can spread between and impact the environment, animal, and human sectors. As one of the largest global health challenges, combatting the highly resistant and opportunistic ESKAPE pathogens necessitates a One Health approach. One Health is a transdisciplinary approach that addresses health outcomes from a multifaceted and interdisciplinary perspective for humans, animals, and the environment, on a local, national, and global level. Using this framework and mindset is crucial to combat and prevent the spread and development of the ESKAPE pathogens (and of ABR in general) while addressing importantly related socioeconomic factors, such as inadequate sanitation. New treatment alternatives for infections caused by ESKAPE pathogens are under current scientific research. References Medical terminology Antibiotic-resistant bacteria
ESKAPE
Biology
2,683
35,439,884
https://en.wikipedia.org/wiki/Bio-inspired%20robotics
Bio-inspired robotic locomotion is a subcategory of bio-inspired design. It is about learning concepts from nature and applying them to the design of real-world engineered systems. More specifically, this field is about making robots that are inspired by biological systems, an approach that includes biomimicry. Biomimicry is copying from nature, while bio-inspired design is learning from nature and making a mechanism that is simpler and more effective than the system observed in nature. Biomimicry has led to the development of a different branch of robotics called soft robotics. Biological systems have been optimized for specific tasks according to their habitat. However, they are multifunctional and are not designed for only one specific functionality. Bio-inspired robotics is about studying biological systems and looking for the mechanisms that may solve a problem in the engineering field. The designer should then try to simplify and enhance that mechanism for the specific task of interest. Bio-inspired roboticists are usually interested in biosensors (e.g. the eye), bioactuators (e.g. muscle), or biomaterials (e.g. spider silk). Most robots have some type of locomotion system. Thus, this article introduces different modes of animal locomotion and a few examples of the corresponding bio-inspired robots. Biolocomotion Biolocomotion, or animal locomotion, is usually categorized as below: Locomotion on a surface Locomotion on a surface may include terrestrial locomotion and arboreal locomotion. Terrestrial locomotion is discussed in detail in the next section. Locomotion in a fluid Locomotion in a fluid, such as in a bloodstream or in cell culture media, includes swimming and flying. There are many swimming and flying robots designed and built by roboticists. Some of them use miniaturized motors or conventional MEMS actuators (such as piezoelectric, thermal, magnetic, etc.), while others use animal muscle cells as motors. Behavioral classification (terrestrial locomotion) There are many animals and insects moving on land with or without legs. This section discusses legged and limbless locomotion as well as climbing and jumping. Anchoring the feet is fundamental to locomotion on land. The ability to increase traction is important for slip-free motion on surfaces such as smooth rock faces and ice, and is especially critical for moving uphill. Numerous biological mechanisms exist for providing purchase: claws rely upon friction-based mechanisms; gecko feet upon van der Waals forces; and some insect feet upon fluid-mediated adhesive forces. Legged locomotion Legged robots may have one, two, four, six, or many legs depending on the application. One of the main advantages of using legs instead of wheels is moving over uneven terrain more effectively. Bipedal, quadrupedal, and hexapedal locomotion are among the most favored types of legged locomotion in the field of bio-inspired robotics. RHex, a reliable hexapedal robot, and Cheetah are the two fastest-running robots so far. iSprawl is another hexapedal robot, inspired by cockroach locomotion, that has been developed at Stanford University. This robot can run up to 15 body lengths per second and can achieve speeds of up to 2.3 m/s. The original version of this robot was pneumatically driven, while the new generation uses a single electric motor for locomotion. Limbless locomotion Terrain involving topography over a range of length scales can be challenging for most organisms and biomimetic robots.
Such terrain is easily traversed by limbless organisms such as snakes. Several animals and insects, including worms, snails, caterpillars, and snakes, are capable of limbless locomotion. A review of snake-like robots is presented by Hirose et al. These robots can be categorized as robots with passive or active wheels, robots with active treads, and undulating robots using vertical waves or linear expansions. Most snake-like robots use wheels, which are high in friction when moving side to side but low in friction when rolling forward (and can be prevented from rolling backward). The majority of snake-like robots use either lateral undulation or rectilinear locomotion and have difficulty climbing vertically. Choset has developed a modular robot that can mimic several snake gaits, but it cannot perform concertina motion. Researchers at Georgia Tech have developed two snake-like robots called Scalybot. The focus of these robots is on the role of snake ventral scales in adjusting frictional properties in different directions. These robots can actively control their scales to modify their frictional properties and move on a variety of surfaces efficiently. Researchers at CMU have developed both scaled and conventionally actuated snake-like robots. Climbing Climbing is an especially difficult task because mistakes made by the climber may cause the climber to lose its grip and fall. Most robots have been built around a single functionality observed in their biological counterparts. Geckobots typically use van der Waals forces that work only on smooth surfaces. Inspired by geckos, scientists from Stanford University have artificially recreated the adhesive property of the gecko. Similar to the setae on a gecko's leg, millions of microfibers were placed and attached to a spring. The tip of each microfiber is sharp and pointed in usual circumstances, but upon actuation, the movement of the spring creates a stress which bends these microfibers and increases their contact area with the surface of a glass or wall. Using the same technology, gecko grippers were invented by NASA scientists for different applications in space. Stickybots use directional dry adhesives that work best on smooth surfaces. The Spinybot and RiSE robots are among the insect-like robots that use spines instead. Legged climbing robots have several limitations. They cannot handle large obstacles, since they are not flexible, and they require a wide space for moving. They usually cannot climb both smooth and rough surfaces or handle vertical-to-horizontal transitions. Jumping One of the tasks commonly performed by a variety of living organisms is jumping. Bharal, hares, kangaroos, grasshoppers, fleas, and locusts are among the best jumping animals. A miniature 7 g jumping robot inspired by the locust has been developed at EPFL that can jump up to 138 cm. The jump is induced by releasing the tension of a spring. The highest-jumping miniature robot, "TAUB" (Tel Aviv University and Braude College of Engineering), is also inspired by the locust; it weighs 23 grams and can jump up to 365 cm. It uses torsion springs as energy storage and includes a wire and latch mechanism to compress and release the springs. ETH Zurich has reported a soft jumping robot based on the combustion of methane and laughing gas. The thermal gas expansion inside the soft combustion chamber drastically increases the chamber volume. This causes the 2 kg robot to jump up to 20 cm.
The soft robot, inspired by a roly-poly toy, then reorients itself into an upright position after landing. Behavioral classification (aquatic locomotion) Swimming (piscine) It is calculated that when swimming, some fish can achieve a propulsive efficiency greater than 90%. Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion. Notable examples are the Essex University Computer Science Robotic Fish G9, and the Robot Tuna built by the Institute of Field Robotics to analyze and mathematically model thunniform motion. The Aqua Penguin, designed and built by Festo of Germany, copies the streamlined shape and propulsion by front "flippers" of penguins. Festo has also built the Aqua Ray and Aqua Jelly, which emulate the locomotion of manta rays and jellyfish, respectively. In 2014, iSplash-II was developed by PhD student Richard James Clapham and Prof. Huosheng Hu at Essex University. It was the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths per second) and endurance, the duration over which top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s). The first build, iSplash-I (2014), was the first robotic platform to apply a full-body-length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior-confined waveform. Morphological classification Modular Modular robots are typically capable of performing several tasks and are specifically useful for search-and-rescue or exploratory missions. Featured robots in this category include a salamander-inspired robot developed at EPFL that can walk and swim, a snake-inspired robot developed at Carnegie Mellon University that has four different modes of terrestrial locomotion, and a cockroach-inspired robot that can run and climb on a variety of complex terrain. Humanoid Humanoid robots are robots that look human-like or are inspired by the human form. There are many different types of humanoid robots for applications such as personal assistance, reception, work in industry, or companionship. These types of robots are used for research purposes as well and were originally developed to build better orthoses and prostheses for human beings. Petman is one of the first and most advanced humanoid robots, developed at Boston Dynamics. Some humanoid robots, such as Honda's ASIMO, are overactuated. On the other hand, there are some humanoid robots, like the robot developed at Cornell University, that do not have any actuators and walk passively, descending a shallow slope. Swarming The collective behavior of animals has been of interest to researchers for several years. Ants can make structures like rafts to survive on rivers. Fish can sense their environment more effectively in large groups. Swarm robotics is a fairly new field, and the goal is to make robots that can work together, transfer data, and make structures as a group. Soft Soft robots are robots composed entirely of soft materials and moved through pneumatic pressure, similar to an octopus or starfish. Such robots are flexible enough to move in very limited spaces (such as in the human body).
The first multigait soft robot was developed in 2011, and the first fully integrated, independent soft robot (with soft batteries and control systems) was developed in 2015. See also Animal locomotion Biomimetics Biorobotics Biomechatronics Biologically inspired engineering Robotic materials Lists of types of robots References External links The Soft Robotics Toolkit Boston Dynamics Research for this Wikipedia entry was conducted as a part of a Locomotion Neuromechanics course (APPH 6232) offered in the School of Applied Physiology at Georgia Tech Research labs Poly-PEDAL Lab (Prof. Bob Full) Biomimetic Milisystems Lab (Prof. Ron Fearing) Biomimetics & Dexterous Manipulation Lab (Prof. Mark Cutkosky) Biomimetic Robotics Lab (Prof. Sangbae Kim) Harvard Microrobotics Lab (Prof. Rob Wood) Harvard Biodesign Lab (Prof. Conor Walsh) ETH Functional Material Lab (Prof. Wendelin Stark) Leg lab at MIT Center for Biologically Inspired Design at Georgia Tech Biologically Inspired Robotics Lab, Case Western Reserve University Biorobotics research group (S. Viollet/ F. Ruffier), Institute of Movement Science, CNRS/Aix-Marseille University (France) Center for Biorobotics, Tallinn University of Technology BioRob EPFL (Prof Auke Ijspeert) Robot locomotion Bionics Bioinspiration
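As a back-of-the-envelope check on the jumping robots described earlier in this article, the minimum stored spring energy for a ballistic jump of height h is E = m·g·h, and the corresponding takeoff speed is v = sqrt(2·g·h). The illustrative Python sketch below ignores air drag and losses in the legs and release mechanisms, so real mechanisms must store somewhat more energy than this lower bound.

```python
g = 9.81  # gravitational acceleration, m/s^2

robots = [
    ("EPFL 7 g locust-inspired jumper", 0.007, 1.38),   # mass (kg), jump height (m)
    ("TAUB 23 g locust-inspired jumper", 0.023, 3.65),
]

for name, mass, height in robots:
    energy = mass * g * height            # minimum stored energy, joules
    takeoff = (2 * g * height) ** 0.5     # ballistic takeoff speed, m/s
    print(f"{name}: >= {energy:.3f} J stored, takeoff ~{takeoff:.1f} m/s")
```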
Bio-inspired robotics
Physics,Engineering,Biology
2,421
864,438
https://en.wikipedia.org/wiki/Arrangement%20of%20hyperplanes
In geometry and combinatorics, an arrangement of hyperplanes is an arrangement of a finite set A of hyperplanes in a linear, affine, or projective space S. Questions about a hyperplane arrangement A generally concern geometrical, topological, or other properties of the complement, M(A), which is the set that remains when the hyperplanes are removed from the whole space. One may ask how these properties are related to the arrangement and its intersection semilattice. The intersection semilattice of A, written L(A), is the set of all subspaces that are obtained by intersecting some of the hyperplanes; among these subspaces are S itself, all the individual hyperplanes, all intersections of pairs of hyperplanes, etc. (excluding, in the affine case, the empty set). These intersection subspaces of A are also called the flats of A. The intersection semilattice L(A) is partially ordered by reverse inclusion. If the whole space S is 2-dimensional, the hyperplanes are lines; such an arrangement is often called an arrangement of lines. Historically, real arrangements of lines were the first arrangements investigated. If S is 3-dimensional one has an arrangement of planes. General theory The intersection semilattice and the matroid The intersection semilattice L(A) is a meet semilattice and more specifically is a geometric semilattice. If the arrangement is linear or projective, or if the intersection of all hyperplanes is nonempty, the intersection lattice is a geometric lattice. (This is why the semilattice must be ordered by reverse inclusion—rather than by inclusion, which might seem more natural but would not yield a geometric (semi)lattice.) When L(A) is a lattice, the matroid of A, written M(A), has A for its ground set and has rank function r(S) := codim(I), where S is any subset of A and I is the intersection of the hyperplanes in S. In general, when L(A) is a semilattice, there is an analogous matroid-like structure called a semimatroid, which is a generalization of a matroid (and has the same relationship to the intersection semilattice as does the matroid to the lattice in the lattice case), but is not a matroid if L(A) is not a lattice. Polynomials For a subset B of A, let us define f(B) := the intersection of the hyperplanes in B; this is S if B is empty. The characteristic polynomial of A, written p_A(y), can be defined by p_A(y) := Σ_B (−1)^{|B|} y^{dim f(B)}, summed over all subsets B of A except, in the affine case, subsets whose intersection is empty. (The dimension of the empty set is defined to be −1.) Another polynomial associated with A is the Whitney-number polynomial w_A(x, y), defined by w_A(x, y) := Σ x^{n − dim f(B)} (−1)^{|C| − |B|} y^{dim f(C)}, summed over pairs B ⊆ C ⊆ A such that f(C) is nonempty, where n is the dimension of S. Being a geometric lattice or semilattice, L(A) has a characteristic polynomial, p_{L(A)}(y), which has an extensive theory (see matroid). Thus it is good to know that p_A(y) = y^i p_{L(A)}(y), where i is the smallest dimension of any flat, except that in the projective case it equals y^{i+1} p_{L(A)}(y). The Whitney-number polynomial of A is similarly related to that of L(A). (The empty set is excluded from the semilattice in the affine case specifically so that these relationships will be valid.) The Orlik–Solomon algebra The intersection semilattice determines another combinatorial invariant of the arrangement, the Orlik–Solomon algebra. To define it, fix a commutative subring K of the base field and form the exterior algebra E of the vector space generated by the hyperplanes.
A chain complex structure is defined on E with the usual boundary operator ∂. The Orlik–Solomon algebra is then the quotient of E by the ideal generated by the elements e_{H1} ∧ ⋯ ∧ e_{Hp} for which the hyperplanes H1, …, Hp have empty intersection, and by the boundaries ∂(e_{H1} ∧ ⋯ ∧ e_{Hp}) of elements of the same form for which H1 ∩ ⋯ ∩ Hp has codimension less than p. Real arrangements In real affine space, the complement is disconnected: it is made up of separate pieces called cells or regions or chambers, each of which is either a bounded region that is a convex polytope, or an unbounded region that is a convex polyhedral region which goes off to infinity. Each flat of A is also divided into pieces by the hyperplanes that do not contain the flat; these pieces are called the faces of A. The regions are faces because the whole space is a flat. The faces of codimension 1 may be called the facets of A. The face semilattice of an arrangement is the set of all faces, ordered by inclusion. Adding an extra top element to the face semilattice gives the face lattice. In two dimensions (i.e., in the real affine plane) each region is a convex polygon (if it is bounded) or a convex polygonal region which goes off to infinity. As an example, if the arrangement consists of three parallel lines, the intersection semilattice consists of the plane and the three lines, but not the empty set. There are four regions, none of them bounded. If we add a line crossing the three parallels, then the intersection semilattice consists of the plane, the four lines, and the three points of intersection. There are eight regions, still none of them bounded. If we add one more line, parallel to the last, then there are 12 regions, of which two are bounded parallelograms. Typical problems about an arrangement in n-dimensional real space are to say how many regions there are, or how many faces of dimension k, or how many bounded regions. These questions can be answered just from the intersection semilattice. For instance, two basic theorems, from Zaslavsky (1975), are that the number of regions of an affine arrangement equals (−1)^n p_A(−1) and the number of bounded regions equals (−1)^n p_A(1). Similarly, the number of k-dimensional faces or bounded faces can be read off as the coefficient of x^{n−k} in (−1)^n w_A(−x, −1) or (−1)^n w_A(−x, 1), respectively. A fast algorithm has also been designed to determine the face of an arrangement of hyperplanes containing an input point. Another question about an arrangement in real space is to decide how many regions are simplices (the n-dimensional generalization of triangles and tetrahedra). This cannot be answered based solely on the intersection semilattice. The McMullen problem asks for the smallest arrangement of a given dimension in general position in real projective space for which there does not exist a cell touched by all hyperplanes. A real linear arrangement has, besides its face semilattice, a poset of regions, a different one for each choice of base region. This poset is formed by choosing an arbitrary base region, B0, and associating with each region R the set S(R) consisting of the hyperplanes that separate R from B0. The regions are partially ordered so that R1 ≥ R2 if S(R1) contains S(R2). In the special case when the hyperplanes arise from a root system, the resulting poset is the corresponding Weyl group with the weak order. In general, the poset of regions is ranked by the number of separating hyperplanes and its Möbius function has been computed.
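Zaslavsky's formulas lend themselves to direct numerical verification. The following is a minimal Python sketch, not drawn from any source on arrangements: it encodes each line a·x + b·y = c as a triple (a, b, c), evaluates the characteristic polynomial of an affine line arrangement straight from its subset-expansion definition, and reproduces the region counts of the example above. It is brute force over all subsets, so it is only practical for small arrangements.

```python
from itertools import combinations
import numpy as np

def char_poly_eval(lines, y):
    """Evaluate p_A(y) = sum over subsets B with nonempty intersection of
    (-1)^|B| * y^(dim f(B)), for an affine arrangement of lines in the plane.
    Each line is encoded as (a, b, c), meaning a*x + b*y = c."""
    total = 0
    for k in range(len(lines) + 1):
        for B in combinations(lines, k):
            if k == 0:
                dim = 2                                  # f(empty set) is the whole plane
            else:
                M = np.array([[a, b] for a, b, c in B], dtype=float)
                rhs = np.array([c for a, b, c in B], dtype=float)
                rank = np.linalg.matrix_rank(M)
                aug = np.linalg.matrix_rank(np.column_stack([M, rhs]))
                if rank != aug:                          # inconsistent system: empty
                    continue                             # intersection, excluded here
                dim = 2 - rank                           # 1 for a line, 0 for a point
            total += (-1) ** k * y ** dim
    return total

n = 2                                                    # dimension of the ambient space
A = [(0, 1, 0), (0, 1, 1), (0, 1, 2),                    # three parallel lines y = 0, 1, 2
     (1, 0, 0)]                                          # one transversal x = 0
print((-1) ** n * char_poly_eval(A, -1))                 # 8 regions
print((-1) ** n * char_poly_eval(A, 1))                  # 0 bounded regions

A.append((1, 0, 1))                                      # second transversal x = 1
print((-1) ** n * char_poly_eval(A, -1))                 # 12 regions
print((-1) ** n * char_poly_eval(A, 1))                  # 2 bounded regions (parallelograms)
```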
Vadim Schechtman and Alexander Varchenko introduced a matrix indexed by the regions. The matrix element for a pair of regions R and R′ is given by the product of indeterminate variables a_H, one for every hyperplane H that separates these two regions. If these variables are all specialized to the value q, then this is called the q-matrix (over the Euclidean domain Q[q]) for the arrangement, and much information is contained in its Smith normal form. Complex arrangements In complex affine space (which is hard to visualize because even the complex affine plane has four real dimensions), the complement is connected (all one piece) with holes where the hyperplanes were removed. A typical problem about an arrangement in complex space is to describe the holes. The basic theorem about complex arrangements is that the cohomology of the complement M(A) is completely determined by the intersection semilattice. To be precise, the cohomology ring of M(A) (with integer coefficients) is isomorphic to the Orlik–Solomon algebra on Z. The isomorphism can be described explicitly and gives a presentation of the cohomology in terms of generators and relations, where the generators are represented (in the de Rham cohomology) as the logarithmic differential forms dα/α, with α a linear form defining a hyperplane of the arrangement. Technicalities Sometimes it is convenient to allow the degenerate hyperplane, which is the whole space S, to belong to an arrangement. If A contains the degenerate hyperplane, then it has no regions because the complement is empty. However, it still has flats, an intersection semilattice, and faces. The preceding discussion assumes the degenerate hyperplane is not in the arrangement. Sometimes one wants to allow repeated hyperplanes in the arrangement. We did not consider this possibility in the preceding discussion, but it makes no material difference. See also Supersolvable arrangement Oriented matroid References Discrete geometry Combinatorics Oriented matroids
Arrangement of hyperplanes
Mathematics
2,024
430,361
https://en.wikipedia.org/wiki/Governor%20%28device%29
A governor, or speed limiter or controller, is a device used to measure and regulate the speed of a machine, such as an engine. A classic example is the centrifugal governor, also known as the Watt or fly-ball governor on a reciprocating steam engine, which uses the effect of inertial force on rotating weights driven by the machine output shaft to regulate its speed by altering the input flow of steam. History Centrifugal governors were used to regulate the distance and pressure between millstones in windmills since the 17th century. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed. It was not until the Scottish engineer James Watt introduced the rotative steam engine, for driving factory machinery, that a constant operating speed became necessary. Between the years 1775 and 1800, Watt, in partnership with industrialist Matthew Boulton, produced some 500 rotative beam engines. At the heart of these engines was Watt's self-designed "conical pendulum" governor: a set of revolving steel balls attached to a vertical spindle by link arms, where the controlling force consists of the weight of the balls. The theoretical basis for the operation of governors was described by James Clerk Maxwell in 1868 in his seminal paper 'On Governors'. Building on Watt's design was the American engineer Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor from a mathematical energy-balance perspective. During his graduate school years at Yale University, Gibbs observed that the operation of the device in practice was beset with the disadvantages of sluggishness and a tendency to over-correct for the changes in speed it was supposed to control. Gibbs theorized that, analogous to the equilibrium of the simple Watt governor (which depends on the balancing of two torques: one due to the weight of the "balls" and the other due to their rotation), thermodynamic equilibrium for any work-producing thermodynamic system depends on the balance of two entities. The first is the heat energy supplied to the intermediate substance, and the second is the work energy performed by the intermediate substance. In this case, the intermediate substance is steam. These sorts of theoretical investigations culminated in the 1876 publication of Gibbs' famous work On the Equilibrium of Heterogeneous Substances and in the construction of the Gibbs governor. These formulations are ubiquitous today in the natural sciences in the form of the Gibbs free energy equation, which is used to determine the equilibrium of chemical reactions; also known as Gibbs equilibrium. Governors were also to be found on early motor vehicles (such as the 1900 Wilson-Pilcher), where they were an alternative to a hand throttle. They were used to set the required engine speed, and the vehicle's throttle and timing were adjusted by the governor to hold the speed constant, similar to a modern cruise control. Governors were also optional on utility vehicles with engine-driven accessories such as winches or hydraulic pumps (such as Land Rovers), again to keep the engine at the required speed regardless of variations of the load being driven. Speed limiters Governors can be used to limit the top speed for vehicles, and for some classes of vehicle such devices are a legal requirement.
They can more generally be used to limit the rotational speed of the internal combustion engine or protect the engine from damage due to excessive rotational speed. Cars Today, BMW, Audi, Volkswagen and Mercedes-Benz limit their production cars to 250 km/h (155 mph). Certain Audi Sport GmbH and AMG cars, and the Mercedes/McLaren SLR, are exceptions. The BMW Rolls-Royces are limited to . Jaguars, although British, also have a limiter, as do the Swedish Saab and Volvo on cars where it is necessary. German manufacturers initially started the "gentlemen's agreement", electronically limiting their vehicles to a top speed of 250 km/h (155 mph), since such high speeds are more likely on the Autobahn. This was done to reduce the political desire to introduce a legal speed limit. In European markets, General Motors Europe sometimes chooses to disregard the agreement, meaning that certain high-powered Opel or Vauxhall cars can exceed the 250 km/h mark, whereas their Cadillacs do not. Ferrari, Lamborghini, Maserati, Porsche, Aston Martin and Bentley also do not limit their cars, at least not to 250 km/h. The Chrysler 300C SRT8 is limited to . Most Japanese domestic market vehicles are limited to only or . The top speed is a strong sales argument, though speeds above about are not likely reachable on public roads. Many performance cars are limited to a speed of to limit insurance costs of the vehicle, and reduce the risk of tires failing. Mopeds Mopeds in the United Kingdom have had to have a speed limiter since 1977. Most other European countries have similar rules (see the main article). Public service vehicles Public service vehicles often have a legislated top speed. Scheduled coach services in the United Kingdom (and also bus services) are limited to 65 mph. Urban public buses often have speed governors which are typically set to between and . Trucks All heavy vehicles in Europe and New Zealand have governors required by law or by-law that limit their speeds to or . Fire engines and other emergency vehicles are exempt from this requirement. Example uses Aircraft Aircraft propellers are another application. The governor senses shaft RPM, and adjusts or controls the angle of the blades to vary the torque load on the engine. Thus as the aircraft speeds up (as in a dive) or slows (in climb) the RPM is held constant. Small engines Small engines, used to power lawn mowers, portable generators, and lawn and garden tractors, are equipped with a governor to limit fuel to the engine to a maximum safe speed when unloaded and to maintain a relatively constant speed despite changes in loading. In the case of generator applications, the engine speed must be closely controlled so the output frequency of the generator will remain reasonably constant. Small engine governors are typically one of three types: Pneumatic: the governor mechanism detects air flow from the flywheel blower used to cool an air-cooled engine. The typical design includes an air vane mounted inside the engine's blower housing and linked to the carburetor's throttle shaft. A spring pulls the throttle open and, as the engine gains speed, increased air flow from the blower forces the vane back against the spring, partially closing the throttle. Eventually, a point of equilibrium will be reached and the engine will run at a relatively constant speed. Pneumatic governors are simple in design and inexpensive to produce. They do not regulate engine speed very accurately and are affected by air density, as well as external conditions that may influence airflow.
Centrifugal: a flyweight mechanism driven by the engine is linked to the throttle and works against a spring in a fashion similar to that of the pneumatic governor, resulting in essentially identical operation. A centrifugal governor is more complex to design and produce than a pneumatic governor. The centrifugal design is more sensitive to speed changes and hence is better suited to engines that experience large fluctuations in loading. Electronic: a servo motor is linked to the throttle and controlled by an electronic module that senses engine speed by counting electrical pulses emitted by the ignition system or a magnetic pickup. The frequency of these pulses varies directly with engine speed, allowing the control module to apply a proportional voltage to the servo to regulate engine speed. Due to their sensitivity and rapid response to speed changes, electronic governors are often fitted to engine-driven generators designed to power computer hardware, as the generator's output frequency must be held within narrow limits to avoid malfunction. Turbine controls In steam turbines, steam turbine governing is the procedure of monitoring and controlling the flow rate of steam into the turbine with the objective of keeping its speed of rotation constant. The flow rate of steam is monitored and controlled by interposing valves between the boiler and the turbine. In water turbines, governors have been used since the mid-19th century to control their speed. A typical system would use a flyball governor acting directly on the turbine input valve or the wicket gate to control the amount of water entering the turbine. By 1930, mechanical governors started to use PID controllers for more precise control. In the later part of the twentieth century, electronic governors and digital systems started to replace mechanical governors. Electrical generator For electrical generation on synchronous electrical grids, prime movers drive electrical generators which are electrically coupled to any other generators on the grid. With droop speed control, the frequency of the entire grid determines the fuel delivered to each generator, so that if the grid runs faster, the fuel is reduced to each generator by its governor to limit the speed. Elevator Governors are used in elevators, acting as a stopping mechanism in case the elevator runs beyond its tripping speed (which is usually a factor of the maximum speed of the lift and is preset by the manufacturer per international lift safety guidelines). This device must be installed in traction elevators and roped hydraulic elevators. Music box Governors are used in some wind-up music boxes to keep the music playing at a somewhat constant speed while the tension on the spring is decreasing. See also Regulator Servomechanism Hit and miss engine Centrifugal governor References Mechanisms (engineering) Mechanical power control Articles containing video clips
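The droop speed control just described admits a compact numerical illustration. Below is a minimal Python sketch, with illustrative numbers rather than data for any particular machine: a 100 MW generator with 5% droop on a 50 Hz grid moves its power command in proportion to the grid frequency error, which is what lets generators sharing a grid divide load changes automatically.

```python
def droop_power(f_grid, f_nom=50.0, droop=0.05, p_ref=0.5, p_rated=100.0):
    """Governor power command under droop control: 5% droop means a
    frequency deviation of 5% of nominal swings the command by 100% of
    the machine rating. p_ref is the reference loading (fraction of
    rating) at nominal frequency. All values here are illustrative."""
    return p_ref * p_rated + (f_nom - f_grid) / (droop * f_nom) * p_rated

print(droop_power(50.0))   # 50.0 MW: at nominal frequency, output stays at the reference
print(droop_power(50.1))   # 46.0 MW: grid slightly fast, the governor trims fuel
print(droop_power(49.9))   # 54.0 MW: grid slightly slow, the governor adds fuel
```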
Governor (device)
Physics,Engineering
1,917
1,030,104
https://en.wikipedia.org/wiki/Darknet
A darknet or dark net is an overlay network within the Internet that can only be accessed with specific software, configurations, or authorization, and often uses a unique customized communication protocol. Two typical darknet types are social networks (usually used for file hosting with a peer-to-peer connection), and anonymity proxy networks such as Tor via an anonymized series of connections. The term "darknet" was popularized by major news outlets and became associated with Tor onion services when the infamous drug bazaar Silk Road used it, despite the terminology being unofficial. Technologies such as Tor, I2P, and Freenet are intended to defend digital rights by providing security, anonymity, or censorship resistance, and are used for both illegal and legitimate reasons. Anonymous communication between whistle-blowers, activists, journalists and news organisations is also facilitated by darknets through use of applications such as SecureDrop. Terminology The term originally described computers on ARPANET that were hidden, programmed to receive messages but not respond to or acknowledge anything, thus remaining invisible and in the dark. Since ARPANET, the usage of "dark net" has expanded to include friend-to-friend networks (usually used for file sharing with a peer-to-peer connection) and privacy networks such as Tor. The reciprocal term for a darknet is a clearnet or the surface web when referring to content indexable by search engines. The term "darknet" is often used interchangeably with "dark web" because of the quantity of hidden services on Tor's darknet. Additionally, the term is often inaccurately used interchangeably with the deep web because of Tor's history as a platform that could not be search-indexed. Mixing uses of both these terms has been described as inaccurate, with some commentators recommending the terms be used in distinct fashions. Origins "Darknet" was coined in the 1970s to designate networks isolated from ARPANET (the government-funded military/academic network which evolved into the Internet), for security purposes. Darknet addresses could receive data from ARPANET but did not appear in the network lists and would not answer pings or other inquiries. The term gained public acceptance following publication of "The Darknet and the Future of Content Distribution", a 2002 paper by Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman, four employees of Microsoft who argued the presence of the darknet was the primary hindrance to the development of workable digital rights management (DRM) technologies and made copyright infringement inevitable. This paper described "darknet" more generally as any type of parallel network that is encrypted or requires a specific protocol to allow a user to connect to it. Sub-cultures Journalist J. D. Lasica, in his 2005 book Darknet: Hollywood's War Against the Digital Generation, described the darknet's reach as encompassing file sharing networks. Subsequently, in 2014, journalist Jamie Bartlett in his book The Dark Net used the term to describe a range of underground and emergent subcultures, including camgirls, cryptoanarchists, darknet drug markets, self-harm communities, social media racists, and transhumanists. Uses Darknets in general may be used for various reasons, such as: To better protect the privacy rights of citizens from targeted and mass surveillance Computer crime (cracking, file corruption, etc.)
Protecting dissidents from political reprisal File sharing (warez, personal files, pornography, confidential files, illegal or counterfeit software, etc.) Sale of restricted goods on darknet markets Whistleblowing and news leaks Purchase or sale of illicit or illegal goods or services Circumventing network censorship and content-filtering systems, or bypassing restrictive firewall policies Software All darknets require specific software to be installed or network configurations to be made in order to access them, such as Tor, which can be accessed via a customized browser from Vidalia (aka the Tor browser bundle), or alternatively via a proxy configured to perform the same function. Active Tor is the most popular instance of a darknet, and it is often mistakenly thought to be the only online tool that facilitates access to darknets. Alphabetical list: anoNet is a decentralized friend-to-friend network built using VPN and software BGP routers. Decentralized network 42 (not for anonymity but research purposes). Freenet is a popular DHT file hosting darknet platform. It supports friend-to-friend and opennet modes. GNUnet can be utilized as a darknet if the "F2F (network) topology" option is enabled. I2P (Invisible Internet Project) is an overlay proxy network that features hidden services called "Eepsites". IPFS has a browser extension that can back up popular webpages. RetroShare is a friend-to-friend messenger communication and file transfer platform. It may be used as a darknet if DHT and Discovery features are disabled. Riffle is a client-server darknet system that simultaneously provides secure anonymity (as long as at least one server remains uncompromised), efficient computation, and minimal bandwidth burden. Secure Scuttlebutt is a peer-to-peer communication protocol, mesh network, and self-hosted social media ecosystem. Syndie is software used to publish distributed forums over the anonymous networks of I2P, Tor and Freenet. Tor (The onion router) is an anonymity network that also features a darknet – via its onion services. Tribler is an anonymous BitTorrent client with a built-in search engine and non-web, worldwide publishing through channels. Urbit is a federated system of personal servers in a peer-to-peer overlay network. Zeronet is a decentralized, DHT-based web hosting network; its users can route traffic through Tor. No longer supported StealthNet (discontinued) WASTE Defunct AllPeers Turtle F2F See also BlackBook (social network) Crypto-anarchism Cryptocurrency Darknet market Dark web Deep web Private peer-to-peer (P2P) Sneakernet Virtual private network (VPN) References File sharing Virtual private networks Darknet markets Cyberspace Internet terminology Dark web Network architecture Distributed computing architecture 1970s neologisms Internet architecture
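To illustrate how darknet software is reached from an ordinary application, the sketch below routes an HTTP request through a local Tor client's SOCKS proxy. It assumes Tor is running with its default SOCKS listener on 127.0.0.1:9050 and that the Python requests library has SOCKS support installed (pip install requests[socks]); the socks5h scheme makes hostname resolution happen inside Tor, which is what allows .onion addresses to resolve.

```python
import requests

# Route both plain and TLS requests through the local Tor SOCKS proxy
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The Tor Project's check service reports whether the request exited via Tor
r = requests.get("https://check.torproject.org/api/ip", proxies=proxies, timeout=60)
print(r.json())   # e.g. {"IsTor": true, "IP": "..."}
```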
Darknet
Technology,Engineering
1,284
34,967,553
https://en.wikipedia.org/wiki/Kepler-33
Kepler-33 is a star in the constellation of Cygnus with a system of five known planets. Having just begun to evolve off the main sequence, its radius and mass are difficult to ascertain, although data available in 2020 show that its best-fit mass of 1.3 solar masses and radius of 1.6 solar radii are compatible with a model of a subgiant star. Planetary system The first detections of the candidate four-body planetary system were reported in February 2011. On January 26, 2012, the planetary system around the star was confirmed, including a fifth planet. However, unlike some other planets confirmed via Kepler, their masses were initially not known, as Doppler spectroscopy measurements were not done before the announcement. Judging by their radii, b may be a large super-Earth or a small hot Neptune, while the other four are all likely to be the latter. Since then, the masses of planets e & f have been measured, with upper limits on the masses of planets c & d. These mass measurements confirm Kepler-33 d, e & f to be low-density, gaseous planets. Planets b and c may actually be in a 7:3 resonance, as there is a 0.05-day discrepancy; there is also a small 0.18-day discrepancy from a 5:3 resonance between planets c and d. The other planets do not seem to be in any resonances, though near-resonances are 3d:2e and 4e:3f. The planetary system in its current configuration is highly susceptible to perturbations; therefore, assuming stability, no additional giant planets can be located within 30 AU of the parent star. See also 55 Cancri Kepler-11 Kepler-20 References Cygnus (constellation) G-type subgiants Planetary transit variables Planetary systems with five confirmed planets J19161861+4600187
Kepler-33
Astronomy
390
29,259,940
https://en.wikipedia.org/wiki/LatencyTOP
LatencyTOP is a Linux application for identifying latency within the operating system kernel and finding the operations or actions that cause it. LatencyTOP is a tool for software developers to visualize system latencies. Based on these observations, the source code of the application or kernel can be modified to reduce latency. It was released by Intel in 2008 under the GPLv2 license. It works on Intel, AMD and ARM processors. As of 2021, the project appears inactive, with the last commit to the source code in October 2009. See also Green computing PowerTOP top (software) References External links Linux process- and task-management-related software Computers and the environment
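For a rough idea of the interface involved: with a kernel built with CONFIG_LATENCYTOP and the feature switched on (sysctl kernel.latencytop=1), latency statistics are exposed through /proc/latency_stats, which LatencyTOP reads and aggregates. The Python sketch below is a hypothetical reader for that file; the assumed field layout (count, total time, maximum time in microseconds, then a backtrace) should be verified against your kernel's documentation before relying on it.

```python
def read_latency_stats(path="/proc/latency_stats"):
    """Parse kernel latency records; the field layout here is an assumption."""
    entries = []
    with open(path) as f:
        next(f)                      # skip the version header line
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue
            count, total_us, max_us = (int(x) for x in fields[:3])
            entries.append((max_us, count, total_us, " ".join(fields[3:])))
    return sorted(entries, reverse=True)    # worst single latency first

for max_us, count, total_us, trace in read_latency_stats()[:5]:
    print(f"{max_us:>8} us max  {count:>6} hits  {trace}")
```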
LatencyTOP
Technology
141
1,464,827
https://en.wikipedia.org/wiki/Plateau%20%28mathematics%29
A plateau of a function is a part of its domain where the function has constant value. More formally, let U, V be topological spaces. A plateau for a function f: U → V is a path-connected set of points P of U such that for some y we have f(p) = y for all p in P. Examples Plateaus can be observed in mathematical models as well as natural systems. In nature, plateaus can be observed in physical, chemical and biological systems. An example of an observed plateau in the natural world is in the tabulation of biodiversity of life through time. See also Level set Contour line Minimal surface References Topology
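In a discrete setting, the plateaus of a sampled function are maximal runs of equal values (the path-connected sets of the definition become index ranges). A small illustrative Python sketch:

```python
def plateaus(xs, ys):
    """Return (x_start, x_end, value) for every maximal constant run of
    length at least two in the sampled function x -> y."""
    runs, start = [], 0
    for i in range(1, len(ys)):
        if ys[i] != ys[start]:
            if i - start > 1:
                runs.append((xs[start], xs[i - 1], ys[start]))
            start = i
    if len(ys) - start > 1:
        runs.append((xs[start], xs[-1], ys[start]))
    return runs

# f clips x to [0, 1]: constant at 0 for x <= 0 and constant at 1 for x >= 1
xs = [i / 2 for i in range(-4, 9)]
ys = [min(max(x, 0.0), 1.0) for x in xs]
print(plateaus(xs, ys))   # [(-2.0, 0.0, 0.0), (1.0, 4.0, 1.0)]
```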
Plateau (mathematics)
Physics,Mathematics
135
207,833
https://en.wikipedia.org/wiki/Radial%20velocity
The radial velocity or line-of-sight velocity of a target with respect to an observer is the rate of change of the vector displacement between the two points. It is formulated as the vector projection of the target-observer relative velocity onto the relative direction or line-of-sight (LOS) connecting the two points. The radial speed or range rate is the temporal rate of the distance or range between the two points. It is a signed scalar quantity, formulated as the scalar projection of the relative velocity vector onto the LOS direction. Equivalently, radial speed equals the norm of the radial velocity, modulo the sign. In astronomy, the point is usually taken to be the observer on Earth, so the radial velocity then denotes the speed with which the object moves away from the Earth (or approaches it, for a negative radial velocity). Formulation Let r(t) be a differentiable vector defining the instantaneous relative position of a target with respect to an observer, and let v = dr/dt be the instantaneous relative velocity of the target with respect to the observer. The magnitude of the position vector, the range, is defined in terms of the inner product as ‖r‖ = √(r · r). The range rate is the time derivative of this magnitude (norm), ṙ = d‖r‖/dt. Evaluating the derivative by the chain rule, and using the reciprocity of the inner product (r · v = v · r), gives ṙ = (r · v)/‖r‖. Defining the unit relative position vector (or LOS direction) r̂ = r/‖r‖, the range rate is simply expressed as ṙ = r̂ · v, i.e., the projection of the relative velocity vector onto the LOS direction. Further defining the velocity direction v̂ = v/‖v‖, with the relative speed s = ‖v‖, we have ṙ = s (r̂ · v̂), where the inner product r̂ · v̂ equals +1 or −1 for parallel and antiparallel vectors, respectively. A singularity exists for a coincident observer and target, i.e., r = 0; in this case, the range rate is undefined. Applications in astronomy In astronomy, radial velocity is often measured to the first order of approximation by Doppler spectroscopy. The quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity. However, due to relativistic and cosmological effects over the great distances that light typically travels to reach the observer from an astronomical object, this measure cannot be accurately transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer. By contrast, astrometric radial velocity is determined by astrometric observations (for example, a secular change in the annual parallax). Spectroscopic radial velocity Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect, so the frequency of the light decreases for objects that were receding (redshift) and increases for objects that were approaching (blueshift). The radial velocity of a star or other luminous distant objects can be measured accurately by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements. A positive radial velocity indicates the distance between the objects is or was increasing; a negative radial velocity indicates the distance between the source and observer is or was decreasing. William Huggins ventured in 1868 to estimate the radial velocity of Sirius with respect to the Sun, based on the observed redshift of the star's light. In many binary stars, the orbital motion usually causes radial velocity variations of several kilometres per second (km/s).
As the spectra of these stars vary due to the Doppler effect, they are called spectroscopic binaries. Radial velocity can be used to estimate the ratio of the masses of the stars and some orbital elements, such as eccentricity and semimajor axis. The same method has also been used to detect planets around stars: the measured motion determines the planet's orbital period, while the resulting radial-velocity amplitude allows the calculation of the lower bound on a planet's mass using the binary mass function. Radial velocity methods alone may only reveal a lower bound, since a large planet orbiting at a very high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbit. Detection of exoplanets The radial velocity method of detecting exoplanets is based on the detection of variations in the velocity of the central star, due to the changing direction of the gravitational pull from an (unseen) exoplanet as it orbits the star. When the star moves towards us, its spectrum is blueshifted, while it is redshifted when it moves away from us. By regularly looking at the spectrum of a star—and so, measuring its velocity—it can be determined if it moves periodically due to the influence of an exoplanet companion. Data reduction From the instrumental perspective, velocities are measured relative to the telescope's motion. So an important first step of the data reduction is to remove the contributions of the Earth's elliptic motion around the Sun at approximately ±30 km/s, the monthly rotation of ±13 m/s of the Earth around the center of gravity of the Earth-Moon system, the daily rotation of the telescope with the Earth's crust around the Earth's axis, which is up to ±460 m/s at the equator and proportional to the cosine of the telescope's geographic latitude, small contributions from the Earth's polar motion at the level of mm/s, contributions of 230 km/s from the motion around the Galactic Center and associated proper motions, and, in the case of spectroscopic measurements, corrections of the order of ±20 cm/s with respect to aberration. The sin i degeneracy refers to the fact that only the component of motion along the line of sight is measured, so the orbital inclination i remains unknown and only a lower bound on the companion's mass can be derived. See also Bistatic range rate Doppler effect Inner product Orbit determination Lp space Notes References Further reading Renze, John; Stover, Christopher; and Weisstein, Eric W. "Inner Product." From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/InnerProduct.html External links The Radial Velocity Equation in the Search for Exoplanets ( The Doppler Spectroscopy or Wobble Method ) Astrometry Concepts in astronomy Orbits Velocity
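The formulation above translates directly into code. The sketch below (Python; the vectors and spectral-line numbers are illustrative, not observational data) computes the range rate as the projection of the relative velocity onto the LOS direction, and adds the non-relativistic Doppler estimate v ≈ c·Δλ/λ used for spectroscopic radial velocities.

```python
import numpy as np

def range_rate(r, v):
    """Radial speed: projection of the relative velocity v onto the LOS
    unit vector r/|r|. Undefined when observer and target coincide."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    rnorm = np.linalg.norm(r)
    if rnorm == 0.0:
        raise ValueError("range rate is undefined for coincident observer and target")
    return np.dot(r, v) / rnorm

# Target at 100 million km along x, moving at 30 km/s at 45 degrees to the LOS:
r = [1.0e11, 0.0, 0.0]
v = [30e3 * np.cos(np.pi / 4), 30e3 * np.sin(np.pi / 4), 0.0]
print(range_rate(r, v))                       # ~21213 m/s, i.e. |v| cos(45 deg)

# Doppler estimate from a shifted hydrogen-alpha line (rest wavelength 656.281 nm):
lam_rest, lam_obs = 656.281e-9, 656.300e-9
c = 299_792_458.0
print(c * (lam_obs - lam_rest) / lam_rest)    # ~8700 m/s; positive means receding
```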
Radial velocity
Physics,Astronomy
1,304
50,960,270
https://en.wikipedia.org/wiki/Shiji%20Seit%C5%8D%20Nashi
Shiji Seitō Nashi (支持政党なし, "No Party to Support") is a Japanese political party. It was founded by Hidemitsu Sano in July 2013. See also None of the above References External links No Party to Support 2013 establishments in Japan E-democracy Political parties established in 2013 Political parties in Japan
Shiji Seitō Nashi
Technology
51
58,760,117
https://en.wikipedia.org/wiki/Studio%20Alchimia
Studio Alchimia was a post-radical avant-garde group founded in Milan in 1976 by Alessandro Guerriero and his sister Adriana with the stated mission of "materializing a non-existent thing into being." History Studio Alchimia was an interdisciplinary and multiform group whose activities included seminars, production of experimental video, clothing design, theatrical set design, product design, decorative arts, performance art, and architecture. Alchimia presented their first group of furniture in the exhibition "Bau.Haus uno" in 1978. "Bau.Haus uno" included work by Ettore Sottsass Jr., Alessandro Mendini, Andrea Branzi, Trix & Robert Haussmann, U.F.O., Michele De Lucchi and Paola Navone. Alchimia's 1980 exhibition "Bau.Haus due" included textile work by Daniela Puppa. Studio Alchimia's core membership included the designers Alessandro Guerriero, Alessandro Mendini, Bruno Gregori, Giorgio Gregori, Arturo Reboldi, Pier Carlo Bontempi and Carla Ceccariglia. The Studio was organized by Adriana Guerriero-Reali, Tina Corti and Donatella Biffi. The website Alchimia Milano maintains a list of all the people who worked at or with Alchimia. Exhibitions Studio Alchimia exhibited their work at the Milan Triennial, the Musée d'Art Moderne de la Ville de Paris, the Galleria d'Arte Moderna, Milan, and the 1980 Venice Biennale. In 2011 their work was featured in the "Postmodernism: Style and Subversion 1970-1990" exhibition at the Victoria and Albert Museum in London. Awards Studio Alchimia was awarded the Compasso d'Oro in the Design Studio category in 1981 for design research. References Further reading External links Alchimia Milano – Official Website for Alchimia Alchimia del verbo - Official Website for poesia and alchimia Official Website for Alessandro Guerriero Santa Alchimia. Production: Metamorphosi. YouTube, 56:17 min. Al Diavolo Alchimia Exhibition in Lecce, Italy, 23 June to 31 October 2017. Production: Metamorphosi. YouTube, 25:11 min. Italian designers Product design
Studio Alchimia
Engineering
481
58,623,011
https://en.wikipedia.org/wiki/Aspergillus%20filifer
Aspergillus filifer is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2008. It has been reported to produce shamixanthones and varitriol. References filifer Fungi described in 2008 Fungus species
Aspergillus filifer
Biology
67
3,968,794
https://en.wikipedia.org/wiki/Schmidt%20hammer
A Schmidt hammer, also known as a Swiss hammer, a rebound hammer or a concrete test hammer, is a device to measure the elastic properties or strength of concrete or rock, mainly surface hardness and penetration resistance. It was invented by Ernst Schmidt, a Swiss engineer. The hammer measures the rebound of a spring-loaded mass impacting against the surface of a sample. The test hammer hits the concrete at a defined energy. Its rebound is dependent on the hardness of the concrete and is measured by the test equipment. By reference to a conversion chart, the rebound value can be used to determine the concrete's compressive strength. When conducting the test, the hammer should be held at right angles to the surface, which in turn should be flat and smooth. The rebound reading will be affected by the orientation of the hammer: when used oriented upward (for example, on the underside of a suspended slab), gravity will increase the rebound distance of the mass, and vice versa for a test conducted on a floor slab. Schmidt hammer measurements are on an arbitrary scale ranging from 10 to 100. Schmidt hammers are available from manufacturers in several different energy ranges, including Type L (0.735 N·m impact energy), Type N (2.207 N·m) and Type M (29.43 N·m). The test is also sensitive to other factors: local variation in the sample (to minimize this, it is recommended to take a selection of readings and use an average or median value) and the water content of the sample (a saturated material will give different results from a dry one). Prior to testing, the Schmidt hammer should be calibrated using a calibration test anvil supplied by the manufacturer. Twelve readings should be taken, dropping the highest and lowest, and the average taken of the ten remaining. This method of testing is classed as indirect, as it does not give a direct measurement of the strength of the material. It simply gives an indication based on surface properties, and as such is suitable only for making comparisons between samples. This method for testing concrete is governed by ASTM C805. A European standard for testing concrete in structures is EN 12504-2. ASTM D5873 describes the procedure for testing of rock. References External links Screening Eagle Technologies Schmidt Hammer Savoie Maintenance Service, SCHMIDT Hammer reseller and repairer Concrete Geological tools Geomorphology Hardness instruments Nondestructive testing
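The reading-reduction procedure described above (take twelve readings, drop the highest and the lowest, and average the remaining ten) can be expressed as a short Python sketch; this is a hypothetical helper for illustration, not part of any testing standard:

def rebound_value(readings):
    # Twelve readings on the arbitrary 10-100 scale; discard the single
    # highest and single lowest, then average the remaining ten.
    if len(readings) != 12:
        raise ValueError("expected exactly twelve readings")
    trimmed = sorted(readings)[1:-1]  # drop lowest and highest
    return sum(trimmed) / len(trimmed)

# Example with invented readings:
print(rebound_value([34, 36, 35, 33, 37, 35, 36, 34, 40, 30, 35, 36]))  # 35.1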
Schmidt hammer
Materials_science,Technology,Engineering
502
18,223,985
https://en.wikipedia.org/wiki/Spiral%20separator
The term spiral separator can refer either to a device for separating slurry components by density (wet spiral separators), or to a device for sorting particles by shape (dry spiral separators). Wet spiral separators Spiral separators of the wet type, also called spiral concentrators, are devices to separate solid components in a slurry, based upon a combination of the solid particle density and the particle's hydrodynamic properties (e.g. drag). The device consists of a tower, around which is wound a sluice, with slots or channels placed in the base of the sluice to extract solid particles that have come out of suspension. As larger and heavier particles sink to the bottom of the sluice faster and experience more drag from the bottom, they travel slower, and so move towards the center of the spiral. Conversely, light particles stay towards the outside of the spiral, with the water, and quickly reach the bottom. At the bottom, a "cut" is made with a set of adjustable bars, channels, or slots, separating the low- and high-density parts. Efficiency Typical spiral concentrators will use a slurry of about 20–40% solids by weight, with a particle size somewhere between 0.75–1.5 mm (17–340 mesh), though somewhat larger particle sizes are sometimes used. Spiral separators are less efficient, however, at particle sizes of 0.074–0.1 mm. For efficient separation, the density difference between the heavy minerals and the light minerals in the feedstock should be at least 1 g/cm³; and because the separation is dependent upon size and density, spiral separators are most effective at purifying ore if its particles are of uniform size and shape. A spiral separator may process a few tons of ore per hour per flight, and multiple flights may be stacked in the same space as one to improve capacity. Many things can be done to improve the separation efficiency, including: changing the rate of material feed; changing the grain size of the material; changing the slurry mass percentage; adjusting the cutter bar positions; running the output of one spiral separator (often a third, intermediate cut) through a second; adding washwater inlets along the length of the spiral, to aid in separating light minerals; adding multiple outlets along the length, to improve the ability of the spiral to remove heavy contaminants; and adding ridges on the sluice at an angle to the direction of flow. Dry spiral separators Dry spiral separators, capable of distinguishing round particles from nonrounds, are used to sort the feed by shape. The device consists of a tower, around which is wound an inwardly inclined flight. A catchment funnel is placed around this inner flight. Round particles roll at a higher speed than other objects, and so are flung off the inner flight and into the collection funnel. Shapes which are not round enough are collected at the bottom of the flight. Separators of this type may be used for removing weed seeds from the intended harvest, or to remove deformed lead shot. See also Sieve Screw conveyor Cyclone (separator) Mineral processing Mechanical screening References Further reading External links Screw Conveyor separator Chemical equipment Separation processes
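The operating ranges quoted above for wet spiral concentrators (roughly 20–40% solids by weight, particle sizes of about 0.75–1.5 mm, and a density difference of at least 1 g/cm³) lend themselves to a simple pre-screening check. The following Python sketch is purely illustrative: the function and parameter names are invented, and the thresholds are only the rules of thumb stated in the text:

def wet_spiral_feasible(solids_pct, particle_mm, density_diff_g_cm3):
    # Compare a candidate slurry against the rule-of-thumb operating
    # ranges for wet spiral concentrators quoted above.
    checks = {
        "solids 20-40% by weight": 20.0 <= solids_pct <= 40.0,
        "particle size 0.75-1.5 mm": 0.75 <= particle_mm <= 1.5,
        "density difference >= 1 g/cm3": density_diff_g_cm3 >= 1.0,
    }
    return all(checks.values()), checks

ok, detail = wet_spiral_feasible(solids_pct=30.0, particle_mm=1.0, density_diff_g_cm3=1.6)
print(ok)  # True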
Spiral separator
Chemistry,Engineering
687
40,750,190
https://en.wikipedia.org/wiki/Meta-cold%20dark%20matter
Meta-cold dark matter, also known as mCDM, is a form of cold dark matter proposed to solve the cuspy halo problem. It consists of particles "that emerge relatively late in cosmic time (z ≲ 1000) and are born non-relativistic from the decays of cold particles". Notes Dark matter
Meta-cold dark matter
Physics,Astronomy
74
297,466
https://en.wikipedia.org/wiki/Spontaneous%20symmetry%20breaking
Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum, even though the entire Lagrangian retains that symmetry. Overview Spontaneous symmetry breaking cannot happen in quantum mechanics describing finite-dimensional systems, due to the Stone–von Neumann theorem (which states the uniqueness of the Heisenberg commutation relations in finite dimensions). Spontaneous symmetry breaking can therefore be observed only in infinite-dimensional theories, such as quantum field theories. By definition, spontaneous symmetry breaking requires the existence of physical laws which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. For example, if measurements of an observable at any two different positions have the same probability distribution, the observable has translational symmetry. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Conversely, in explicit symmetry breaking, the probability distributions of a pair of outcomes can be different. For example, in an electric field, the forces on a charged particle are different in different directions, so the rotational symmetry is explicitly broken by the electric field, which does not have this symmetry. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. For example, the density, compressibility, coefficient of thermal expansion, and specific heat will be expected to change when a liquid becomes a solid. Examples Sombrero potential Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the center axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterward, the ball has come to a rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not. In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian of a scalar field φ, which essentially dictates how a system behaves, can be split up into kinetic and potential terms, ℒ = ∂^μφ ∂_μφ − V(φ). It is in this potential term V(φ) that the symmetry breaking is triggered. An example of such a potential, due to Jeffrey Goldstone, is V(φ) = −10|φ|² + |φ|⁴. This potential has an infinite number of possible minima (vacuum states), given by φ = √5 e^(iθ) for any real θ between 0 and 2π. The system also has an unstable vacuum state corresponding to φ = 0. This state has a U(1) symmetry.
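A quick numerical check of these minima (a sketch in Python with NumPy; purely illustrative): minimizing the radial profile of V(φ) = −10|φ|² + |φ|⁴ recovers |φ| = √5 ≈ 2.236, and the value at the minimum is the same for every phase θ, which is exactly the degeneracy described above.

import numpy as np

# Radial profile of the sombrero potential V(phi) = -10*|phi|**2 + |phi|**4.
rho = np.linspace(0.0, 4.0, 400001)
V = -10.0 * rho**2 + rho**4

print(rho[np.argmin(V)], np.sqrt(5.0))  # both ~2.23607

# Phase degeneracy: the same minimum value V = -25 for every theta.
for theta in np.linspace(0.0, 2.0 * np.pi, 5):
    phi = np.sqrt(5.0) * np.exp(1j * theta)
    print(round(-10.0 * abs(phi)**2 + abs(phi)**4, 9))  # -25.0 each time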
However, once the system falls into a specific stable vacuum state (amounting to a choice of θ), this symmetry will appear to be lost, or "spontaneously broken". In fact, any other choice of θ would have exactly the same energy, and the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks the symmetry, implying the existence of a massless Nambu–Goldstone boson, the mode running around the circle at the minimum of this potential, and indicating there is some memory of the original symmetry in the Lagrangian. Other examples For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant, and there is no symmetry breaking. Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value, which points in a certain direction (in the idealized situation where we have full equilibrium; otherwise, translational symmetry gets broken as well). The residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations which do not and are thus spontaneously broken. The laws describing a solid are invariant under the full Euclidean group, but the solid itself spontaneously breaks this group down to a space group. The displacement and the orientation are the order parameters. General relativity has a Lorentz symmetry, but in FRW cosmological models, the mean 4-velocity field defined by averaging over the velocities of the galaxies (the galaxies act like gas particles at cosmological scales) acts as an order parameter breaking this symmetry. Similar comments can be made about the cosmic microwave background. For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification. In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetry. Take a thin cylindrical plastic rod and push both ends together. Before buckling, the system is symmetric under rotation, and so visibly cylindrically symmetric. But after buckling, it looks different, and asymmetric. Nevertheless, features of the cylindrical symmetry are still there: ignoring friction, it would take no force to freely spin the rod around, displacing the ground state in time, and amounting to an oscillation of vanishing frequency, unlike the radial oscillations in the direction of the buckle. This spinning mode is effectively the requisite Nambu–Goldstone boson. Consider a uniform layer of fluid over an infinite horizontal plane. This system has all the symmetries of the Euclidean plane. But now heat the bottom surface uniformly so that it becomes much hotter than the upper surface. When the temperature gradient becomes large enough, convection cells will form, breaking the Euclidean symmetry. Consider a bead on a circular hoop that is rotated about a vertical diameter. 
As the rotational velocity is increased gradually from rest, the bead will initially stay at its initial equilibrium point at the bottom of the hoop (intuitively stable, lowest gravitational potential). At a certain critical rotational velocity, this point will become unstable and the bead will jump to one of two other newly created equilibria, equidistant from the center. Initially, the system is symmetric with respect to the diameter, yet after passing the critical velocity, the bead ends up in one of the two new equilibrium points, thus breaking the symmetry. The two-balloon experiment is an example of spontaneous symmetry breaking when both balloons are initially inflated to the local maximum pressure. When some air flows from one balloon into the other, the pressure in both balloons will drop, making the system more stable in the asymmetric state. In particle physics In particle physics, the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. "Hidden" is a better term than "broken", because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking (SSB) because nothing (that we know of) breaks the symmetry in the equations. By the nature of spontaneous symmetry breaking, different portions of the early Universe would break symmetry in different directions, leading to topological defects, such as two-dimensional domain walls, one-dimensional cosmic strings, zero-dimensional monopoles, and/or textures, depending on the relevant homotopy group and the dynamics of the theory. For example, Higgs symmetry breaking may have created primordial cosmic strings as a byproduct. Hypothetical GUT symmetry-breaking generically produces monopoles, creating difficulties for GUT unless monopoles (along with any GUT domain walls) are expelled from our observable Universe through cosmic inflation. Chiral symmetry Chiral symmetry breaking is an example of spontaneous symmetry breaking affecting the chiral symmetry of the strong interactions in particle physics. It is a property of quantum chromodynamics, the quantum field theory describing these interactions, and is responsible for the bulk of the mass (over 99%) of the nucleons, and thus of all common matter, as it converts very light bound quarks into 100 times heavier constituents of baryons. The approximate Nambu–Goldstone bosons in this spontaneous symmetry breaking process are the pions, whose mass is an order of magnitude lighter than the mass of the nucleons. It served as the prototype and significant ingredient of the Higgs mechanism underlying the electroweak symmetry breaking. Higgs mechanism The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries, which is a redundancy in the description of the symmetry. 
The Higgs mechanism, the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics. The term "spontaneous symmetry breaking" is a misnomer here, as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. Rather, after gauge fixing, the global symmetry (or redundancy) can be broken in a manner formally resembling spontaneous symmetry breaking. One important consequence of the distinction between true symmetries and gauge symmetries is that the massless Nambu–Goldstone bosons resulting from spontaneous breaking of a gauge symmetry are absorbed in the description of the gauge vector field, providing massive vector field modes, like the plasma mode in a superconductor, or the Higgs mode observed in particle physics. In the standard model of particle physics, spontaneous symmetry breaking of the gauge symmetry associated with the electroweak force generates masses for several particles, and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV, all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge. In addition, fermions develop mass consistently. Without spontaneous symmetry breaking, the Standard Model of elementary particle interactions requires the existence of a number of particles. However, some particles (the W and Z bosons) would then be predicted to be massless, when, in reality, they are observed to have mass. To overcome this, spontaneous symmetry breaking is augmented by the Higgs mechanism to give these particles mass. It also suggests the presence of a new particle, the Higgs boson, detected in 2012. Superconductivity of metals is a condensed-matter analog of the Higgs phenomenon, in which a condensate of Cooper pairs of electrons spontaneously breaks the U(1) gauge symmetry associated with light and electromagnetism. Dynamical symmetry breaking Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., the Lagrangian). Dynamical breaking of a global symmetry is a spontaneous symmetry breaking which happens not at the (classical) tree level (i.e., at the level of the bare action), but due to quantum corrections (i.e., at the level of the effective action). Dynamical breaking of a gauge symmetry is subtler. In conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (as in the electroweak interactions). In dynamical gauge symmetry breaking, however, no unstable Higgs particle operates in the theory; instead, the bound states of the system itself provide the unstable fields that render the phase transition. For example, Bardeen, Hill, and Lindner published a paper that attempts to replace the conventional Higgs mechanism in the standard model by a DSB that is driven by a bound state of top–antitop quarks. (Such models, in which a composite particle plays the role of the Higgs boson, are often referred to as "composite Higgs models".)
Dynamical breaking of gauge symmetries is often due to creation of a fermionic condensate — e.g., the quark condensate, which is connected to the dynamical breaking of chiral symmetry in quantum chromodynamics. Conventional superconductivity is the paradigmatic example from the condensed matter side, where phonon-mediated attractions lead electrons to become bound in pairs and then condense, thereby breaking the electromagnetic gauge symmetry. In condensed matter physics Most phases of matter can be understood through the lens of spontaneous symmetry breaking. For example, crystals are periodic arrays of atoms that are not invariant under all translations (only under a small subset of translations by a lattice vector). Magnets have north and south poles that are oriented in a specific direction, breaking rotational symmetry. In addition to these examples, there are a whole host of other symmetry-breaking phases of matter — including nematic phases of liquid crystals, charge- and spin-density waves, superfluids, and many others. There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including topologically ordered phases of matter, such as fractional quantum Hall liquids, and spin liquids. These states do not break any symmetry, but are distinct phases of matter. Unlike the case of spontaneous symmetry breaking, there is not a general framework for describing such states. Continuous symmetry The ferromagnet is the canonical system that spontaneously breaks the continuous symmetry of the spins below the Curie temperature at h = 0, where h is the external magnetic field. Below the Curie temperature, the energy of the system is invariant under inversion of the magnetization, m(x) → −m(x). The symmetry is spontaneously broken: below the Curie temperature the expectation value ⟨m⟩ is nonzero, even though the Hamiltonian is invariant under the inversion transformation. Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization. Spontaneous breaking of a continuous symmetry is inevitably accompanied by gapless (meaning that these modes do not cost any energy to excite) Nambu–Goldstone modes associated with slow, long-wavelength fluctuations of the order parameter. For example, vibrational modes in a crystal, known as phonons, are associated with slow density fluctuations of the crystal's atoms. The associated Goldstone modes for magnets are oscillating waves of spin known as spin waves. For symmetry-breaking states whose order parameter is not a conserved quantity, Nambu–Goldstone modes are typically massless and propagate at a constant velocity. An important theorem, due to Mermin and Wagner, states that, at finite temperature, thermally activated fluctuations of Nambu–Goldstone modes destroy the long-range order and prevent spontaneous symmetry breaking in one- and two-dimensional systems. Similarly, quantum fluctuations of the order parameter prevent most types of continuous symmetry breaking in one-dimensional systems even at zero temperature. (An important exception is ferromagnets, whose order parameter, magnetization, is an exactly conserved quantity and does not have any quantum fluctuations.) Other long-range interacting systems, such as cylindrical curved surfaces interacting via the Coulomb potential or Yukawa potential, have been shown to break translational and rotational symmetries.
It was shown that, in the presence of a symmetric Hamiltonian and in the limit of infinite volume, such a system spontaneously adopts a chiral configuration — i.e., breaks mirror plane symmetry. Generalisation and technical usage For spontaneous symmetry breaking to occur, there must be a system in which there are several equally likely outcomes. The system as a whole is therefore symmetric with respect to these outcomes. However, if the system is sampled (i.e. if the system is actually used or interacted with in any way), a specific outcome must occur. Though the system as a whole is symmetric, it is never encountered with this symmetry, but only in one specific asymmetric state. Hence, the symmetry is said to be spontaneously broken in that theory. Nevertheless, the fact that each outcome is equally likely is a reflection of the underlying symmetry, which is thus often dubbed "hidden symmetry", and has crucial formal consequences. (See the article on the Goldstone boson.) When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. The theory must not dictate which member is distinct, only that one is. From this point on, the theory can be treated as if this element actually is distinct, with the proviso that any results found in this way must be resymmetrized, by taking the average over each of the elements of the group being the distinct one. The crucial concept in physics theories is the order parameter. If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value) which is not invariant under the symmetry in question, we say that the system is in the ordered phase, and the symmetry is spontaneously broken. This is because other subsystems interact with the order parameter, which specifies a "frame of reference" to be measured against. In that case, the vacuum state does not obey the initial symmetry (which would keep it invariant, in the linearly realized Wigner mode in which it would be a singlet), and instead changes under the (hidden) symmetry, now implemented in the (nonlinear) Nambu–Goldstone mode. Normally, in the absence of the Higgs mechanism, massless Goldstone bosons arise. The symmetry group can be discrete, such as the space group of a crystal, or continuous (e.g., a Lie group), such as the rotational symmetry of space. However, if the system contains only a single spatial dimension, then only discrete symmetries may be broken in a vacuum state of the full quantum theory, although a classical solution may break a continuous symmetry. Nobel Prize On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic-physics symmetry breaking. Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. This origin is ultimately reliant on the Higgs mechanism but, as so far understood, it is a "just so" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.
See also Autocatalytic reactions and order creation Catastrophe theory Chiral symmetry breaking CP-violation Fermi ball Gauge gravitation theory Goldstone boson Grand unified theory Higgs mechanism Higgs boson Higgs field (classical) Irreversibility Magnetic catalysis of chiral symmetry breaking Mermin–Wagner theorem Norton's dome Second-order phase transition Spontaneous absolute asymmetric synthesis in chemistry Symmetry breaking Tachyon condensation 1964 PRL symmetry breaking papers Notes Note that (as in fundamental Higgs driven spontaneous gauge symmetry breaking) the term "symmetry breaking" is a misnomer when applied to gauge symmetries. References External links For a pedagogic introduction to electroweak symmetry breaking with step by step derivations, not found in texts, of many key relations, see http://www.quantumfieldtheory.info/Electroweak_Sym_breaking.pdf Spontaneous symmetry breaking Physical Review Letters – 50th Anniversary Milestone Papers In CERN Courier, Steven Weinberg reflects on spontaneous symmetry breaking Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia History of Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles International Journal of Modern Physics A: The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles Guralnik, G S; Hagen, C R and Kibble, T W B (1967). Broken Symmetries and the Goldstone Theorem. Advances in Physics, vol. 2 Interscience Publishers, New York. pp. 567–708 Spontaneous Symmetry Breaking in Gauge Theories: a Historical Survey The Royal Society Publishing: Spontaneous symmetry breaking in gauge theories University of Cambridge, David Tong: Lectures on Quantum Field Theory for masters level students. Theoretical physics Quantum field theory Standard Model Quantum chromodynamics Symmetry Quantum phases
Spontaneous symmetry breaking
Physics,Chemistry,Materials_science,Mathematics
4,533
41,765,877
https://en.wikipedia.org/wiki/Open%20Game%20Engine%20Exchange
The Open Game Engine Exchange (OpenGEX) is a format that aids the application-agnostic transfer of complex scene data between 3D graphics applications, including game engines and 3D modelling apps. It uses the Open Data Description Language for data storage, a format for storing arbitrary data that maintains human readability. The OpenGEX file format is registered with the Internet Assigned Numbers Authority (IANA) as the model/vnd.opengex media type. The OpenGEX format is defined by the Open Game Engine Exchange Specification, which is available on the official website opengex.org. Export plugins that write the OpenGEX format are available for Autodesk Maya and 3D Studio Max, with an unofficial plugin available for Blender. Format At the most basic level, an OpenGEX file consists of a node hierarchy, a set of objects, a set of materials, and some additional information about global units and axis orientation. The various node, object, and material structures contain all of the details, such as geometric data and animation tracks, within a hierarchy of additional types of structures defined by OpenGEX. The following types of data can appear in an OpenGEX file: hierarchical scene organization (node trees); node and object transforms (4×4 matrices, translations, rotations, and scales); geometry objects, light objects, and camera objects; meshes composed of vertex attribute arrays and index arrays for multiple levels of detail; skinned meshes (skeleton, bind-pose transforms, bone influence weighting data); multiple morph targets for meshes and animated morph weights; keyframe animation with linear, Bézier, and TCB animation curves; and material colors and textures (diffuse, specular, normal, emission, opacity, transparency). Example A very simple example of a complete OpenGEX file describing a green cube is shown in the listing below. It begins with a group of Metric structures that define the units of measurement and the global up direction. Those are followed by a single GeometryNode structure that provides the name and transform for the cube. The geometric data for the cube is stored in the GeometryObject structure that is referenced by the geometry node. The geometry object structure contains a single mesh of triangle primitives that includes per-vertex positions, normals, and texture coordinates. Finally, the Material structure at the end of the file contains the green diffuse reflection color.
Metric (key = "distance") {float {0.01}}
Metric (key = "up") {string {"z"}}

GeometryNode $node1
{
    Name {string {"Cube"}}
    ObjectRef {ref {$geometry1}}
    MaterialRef {ref {$material1}}

    Transform
    {
        float[12]
        {
            {1.0, 0.0, 0.0,
             0.0, 1.0, 0.0,
             0.0, 0.0, 1.0,
             50.0, 50.0, 0.0}
        }
    }
}

GeometryObject $geometry1    // Cube
{
    Mesh (primitive = "triangles")
    {
        VertexArray (attrib = "position")
        {
            float[3]    // 24
            {
                {-50.0, -50.0, 0.0}, {-50.0, 50.0, 0.0}, {50.0, 50.0, 0.0}, {50.0, -50.0, 0.0},
                {-50.0, -50.0, 100.0}, {50.0, -50.0, 100.0}, {50.0, 50.0, 100.0}, {-50.0, 50.0, 100.0},
                {-50.0, -50.0, 0.0}, {50.0, -50.0, 0.0}, {50.0, -50.0, 100.0}, {-50.0, -50.0, 100.0},
                {50.0, -50.0, 0.0}, {50.0, 50.0, 0.0}, {50.0, 50.0, 100.0}, {50.0, -50.0, 100.0},
                {50.0, 50.0, 0.0}, {-50.0, 50.0, 0.0}, {-50.0, 50.0, 100.0}, {50.0, 50.0, 100.0},
                {-50.0, 50.0, 0.0}, {-50.0, -50.0, 0.0}, {-50.0, -50.0, 100.0}, {-50.0, 50.0, 100.0}
            }
        }

        VertexArray (attrib = "normal")
        {
            float[3]    // 24
            {
                {0.0, 0.0, -1.0}, {0.0, 0.0, -1.0}, {0.0, 0.0, -1.0}, {0.0, 0.0, -1.0},
                {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0},
                {0.0, -1.0, 0.0}, {0.0, -1.0, 0.0}, {0.0, -1.0, 0.0}, {0.0, -1.0, 0.0},
                {1.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {1.0, 0.0, 0.0},
                {0.0, 1.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 1.0, 0.0},
                {-1.0, 0.0, 0.0}, {-1.0, 0.0, 0.0}, {-1.0, 0.0, 0.0}, {-1.0, 0.0, 0.0}
            }
        }

        VertexArray (attrib = "texcoord")
        {
            float[2]    // 24
            {
                {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0}, {0.0, 0.0},
                {0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0},
                {0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0},
                {0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0},
                {0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0},
                {0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0}
            }
        }

        IndexArray
        {
            uint32[3]    // 12
            {
                {0, 1, 2}, {2, 3, 0},
                {4, 5, 6}, {6, 7, 4},
                {8, 9, 10}, {10, 11, 8},
                {12, 13, 14}, {14, 15, 12},
                {16, 17, 18}, {18, 19, 16},
                {20, 21, 22}, {22, 23, 20}
            }
        }
    }
}

Material $material1
{
    Name {string {"Green"}}
    Color (attrib = "diffuse") {float[3] {{0, 1, 0}}}
}

History The development of the OpenGEX format was funded by a crowd-sourcing campaign that ended on May 8, 2013. As the format was being designed, the Open Data Description Language was also created as a generic base language upon which OpenGEX was built. Support for the OpenGEX format was originally implemented in C4 Engine version 3.5. See also glTF - a Khronos Group file format for 3D Scenes and models. References External links 3D graphics file formats 3D graphics software Graphics standards Open formats
Open Game Engine Exchange
Technology
1,913
10,571,206
https://en.wikipedia.org/wiki/Stem%20mixing%20and%20mastering
Stem-mixing is a method of mixing audio material based on creating groups of audio tracks and processing them separately prior to combining them into a final master mix. Stems are also sometimes referred to as submixes, subgroups, or buses. The distinction between a stem and a separation is rather unclear. Some consider stem manipulation to be the same as separation mastering, although others consider stems to be sub-mixes to be used along with separation mastering. The distinction depends on how many separate input channels are available for mixing, and on how far along the material is toward being reduced to a final stereo mix. The technique originated in the 1960s, with the introduction of mixing boards equipped with the capability to assign individual inputs to sub-group faders and to work with each sub-group (stem mix) independently from the others. The approach is widely used in recording studios to control, process and manipulate entire groups of instruments such as drums, strings, or backup vocals, in order to streamline and simplify the mixing process. Additionally, as each stem-bus usually has its own inserts, sends and returns, the stem-mix (sub-mix) can be routed independently through its own signal processing chain, to achieve a different effect for each group of instruments. A similar method is also utilised with digital audio workstations (DAWs), where separate groups of audio tracks may be digitally processed and manipulated through discrete chains of plugins. Stem-mastering is a technique derived from stem mixing. Just as in stem-mixing, the individual audio tracks are grouped into stems that can be controlled, processed and manipulated independently of one another. Most mastering engineers require music producers to leave at least 3 dB of headroom on each individual track before starting the stem-mastering process; this leaves more space in the mix, allowing the mastered version to sound cleaner and louder. Even though stem-mastering is not commonly practiced by mastering studios, it does have its proponents. Stem In audio production, a stem is a group of audio sources mixed together, usually by one person, to be dealt with downstream as one unit. A single stem may be delivered in mono, stereo, or in multiple tracks for surround sound. In sound mixing for film, the preparation of stems is a common stratagem to facilitate the final mix. Dialogue, music and sound effects, called "D-M-E", are brought to the final mix as separate stems. Using stem mixing, the dialogue can easily be replaced by a foreign-language version, the effects can easily be adapted to different mono, stereo and surround systems, and the music can be changed to fit the desired emotional response. If the music and effects stems are sent to another production facility for foreign dialogue replacement, these non-dialogue stems are called "M&E". The dialogue stem is used by itself when editing various scenes together to construct a trailer of the film; after this some music and effects are mixed in to form a cohesive sequence. In music mixing for recordings and for live sound, stems are subgroups of similar sound sources. When a large project uses more than one person mixing, stems can facilitate the job of the final mix engineer. Such stems may consist of all of the string instruments, a full orchestra, just background vocals, only the percussion instruments, a single drum set, or any other grouping that may ease the task of the final mix.
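Conceptually, a stem mix is grouped summation with per-group processing before the final sum. The following toy sketch (Python with NumPy; the track data and gain values are invented for illustration, and the per-stem "processing" is reduced to a single gain stage) mixes tracks into stems and stems into a master:

import numpy as np

def mix_stem(tracks, gain=1.0):
    # A stem is the sum of its member tracks, processed as one unit;
    # here the per-stem processing is just a gain adjustment.
    return gain * np.sum(tracks, axis=0)

# Placeholder audio: mono tracks of 4 samples each.
drums  = np.array([[0.1, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 0.1]])
vocals = np.array([[0.3, 0.1, 0.0, 0.2]])

stems = {
    "drums":  mix_stem(drums, gain=0.8),
    "vocals": mix_stem(vocals, gain=1.0),
}

# The final mix is the sum of the independently processed stems.
master = np.sum(list(stems.values()), axis=0)
print(master)  # [0.38 0.34 0.24 0.28]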
Stems prepared in this fashion may be blended together later in time, as for a recording project or for consumer listening, or they may be mixed simultaneously, as in a live sound performance with multiple elements. For instance, when Barbra Streisand toured in 2006 and 2007, the audio production crew used three people to run three mixing consoles: one to mix strings, one to mix brass, reeds and percussion, and one under main engineer Bruce Jackson's control out in the audience, containing Streisand's microphone inputs and stems from the other two consoles. Stems may be supplied to a musician in the recording studio so that the musician can adjust a headphones monitor mix by varying the levels of other instruments and vocals relative to the musician's own input. Stems may also be delivered to the consumer so they can listen to a piece of music with a custom blend of the separate elements. See also List of musical works released in a stem format References Further reading US Patent 6772127, Aug 3, 2004 referring to independent processing of vocals and audio program during digital mastering. US Patent 5930375, May 16, 1996, Audio mixing console. The text of this patent refers to "grouped side chain processing". Audio engineering Audio mixing
Stem mixing and mastering
Engineering
953
51,331,174
https://en.wikipedia.org/wiki/EMBO%20Membership
Membership of the European Molecular Biology Organization (EMBO) is an award granted in recognition of "research excellence and the outstanding achievements made by a life scientist". To date, 88 EMBO Members and Associate Members have been awarded Nobel Prizes in Physiology or Medicine, Chemistry, or Physics. See the category Members of the European Molecular Biology Organization for examples of EMBO members. Nomination and election of new members Elections for membership are held annually, with candidates nominated and elected exclusively by existing EMBO members; membership cannot be applied for directly. Three types of membership exist: EMBO Member, for scientists living (or who have lived) in a European Molecular Biology Conference (EMBC) Member State; EMBO Associate Member, for scientists living outside of the EMBC Member States; and EMBO Young Investigator. See also List of biology awards References Biology awards European Molecular Biology Organization Molecular biology
EMBO Membership
Chemistry,Technology,Biology
184
7,656,339
https://en.wikipedia.org/wiki/Star%20diagonal
A star diagonal, erecting lens or diagonal mirror is an angled mirror or prism used in telescopes that allows viewing from a direction that is perpendicular to the usual eyepiece axis. It allows more convenient and comfortable viewing when the telescope is pointed at, or near, the zenith (i.e. directly overhead). Also, the resulting image is right side up, but is reversed from left to right. Types of diagonals Star diagonals are available in 0.965", 1.25", and 2" diameters. The 2" diagonals allow longer focal-length, low-power 2" barrel eyepieces for a wider field of view. Star diagonals come in all price ranges, from as low as a few dollars up to hundreds of dollars. Mirror (reflective) diagonals These diagonals (often called star diagonals) use a mirror set at a 45° angle inside the diagonal to turn the telescope's image at a 90° angle to the rear cell. Mirror diagonals produce an image in the eyepiece that is correctly oriented vertically but reversed left-to-right horizontally: the view in the eyepiece is flipped left-right. The major advantages of mirror diagonals are that they cost less than prisms to produce to a high degree of optical accuracy, and that they do not introduce any color errors into the image. The major disadvantage of mirror diagonals is that unless the reflective coating is properly applied they can scatter light, rendering lower image contrast compared to a 90° prism. They also deteriorate with age as the reflective surface oxidizes. Newer dielectric mirrors have largely solved the deterioration problem and, if properly made, scatter less light than conventional mirrors. With short focal-length instruments, a mirror diagonal is preferred over a prism. Prism diagonals A prism diagonal uses a simple 90°-angle prism, pentaprism, or an Amici roof prism rather than a mirror to bend the optical path. On telescopes with longer focal ratios, a well-made 90° prism diagonal is the optimum choice to deliver the highest image contrast short of using the telescope without a diagonal entirely. However, prisms seem to be falling out of favor, probably due to marketing forces that have been favoring short focal-length instruments, which tend to function better with a mirror diagonal. In some special cases, however, the color dispersion effects of a prism diagonal can be used to advantage to improve the performance of undercorrected refractor objectives (regardless of focal length) by shifting the spherical and color correction of the objective closer to the design optimum. The natural color dispersion properties (overcorrection) of the prism work to lessen or nullify the undercorrection of the objective lens. On the other hand, a well-made conventional 90° prism star diagonal can transmit as much light as a mirror, or more, and do so with higher image contrast, since there is no possibility of light scattering from a reflective metallic surface as in a mirror diagonal. Also, a prism will never degrade over time as a mirror will, since there is no reflective metal coating to degrade from oxidation. However, prism diagonals may introduce chromatic aberration when used with short focal-length scopes, although this is not a problem with the popular Schmidt-Cassegrain and Maksutov-Cassegrain telescopes, which have long focal lengths. Pentaprism A pentaprism provides the same inverted image orientation as viewing without a diagonal would.
A simple 90°-angle prism provides the same "flipped" or mirror-reversed image as a mirror diagonal. Pentaprism diagonals are extremely difficult to find. Amici prism An Amici prism is a type of roof prism which splits the image in two parts and thus allows an upright image without left-right mirroring. This means that what is seen in the eyepiece is the same as what is seen when looking at the sky, or at a star chart or lunar map. The disadvantage of typical "correct image" Amici roof prism diagonals is that because the light path bounces around through a piece of glass, the total amount of light transmitted is less, and the multiple reflections required can introduce optical aberrations. At higher magnifications (>100×), brighter objects can show a bright line through the object viewed. Therefore, most Amici roof prisms are more appropriate for low-power viewing or in spotting scopes for terrestrial rather than astronomical use. But at low power over a rich field, the view can easily be compared with star charts, as it is not a mirror image. They are available in two types: with a 90° angle (like an ordinary star diagonal) and with a 45° angle. Such prisms are often used in spotting scopes for terrestrial viewing, mostly with a 45° angle. Such telescopes rarely use magnifications over 60×. Alignment Even an expensive star diagonal will deliver poor performance if it is not in alignment with the optical axis of the telescope. A telescope in perfect collimation will be thrown out of collimation by a misaligned star diagonal, and often this misalignment will determine the image quality of the telescope to a larger extent than the surface accuracy of the prism or mirror. Since the mirror or prism of the star diagonal is located nearly at the focal plane of the instrument, surface accuracy of greater than 1/4 wave is more in the line of advertising than any real increase in optical performance. A 1/10 wave mirror or prism star diagonal that throws off the collimation of the telescope will perform worse than a 1/2 wave star diagonal that is in proper alignment. See also Herschel Wedge Amici prism List of telescope parts and construction References Optical components
Star diagonal
Materials_science,Technology,Engineering
1,187
14,660,345
https://en.wikipedia.org/wiki/Realistic%20conflict%20theory
Realistic conflict theory (RCT), also known as realistic group conflict theory (RGCT), is a social psychological model of intergroup conflict. The theory explains how intergroup hostility can arise as a result of conflicting goals and competition over limited resources, and it also offers an explanation for the feelings of prejudice and discrimination toward the outgroup that accompany the intergroup hostility. Groups may be in competition for a real or perceived scarcity of resources such as money, political power, military protection, or social status. Feelings of resentment can arise when the groups see the competition over resources as zero-sum, in which only one group is the winner (obtaining the needed or wanted resources) and the other loses (unable to obtain the limited resource because the "winning" group achieved it first). The length and severity of the conflict are based upon the perceived value and shortage of the given resource. According to RCT, positive relations can only be restored if superordinate goals are in place. Concept History The theory was officially named by Donald Campbell, but has been articulated by others since the middle of the 20th century. In the 1960s, this theory developed from Campbell's recognition of social psychologists' tendency to reduce all human behavior to hedonistic goals. He criticized psychologists like John Thibaut, Harold Kelley, and George Homans, who emphasized theories that place food, sex, and pain avoidance as central to all human processes. According to Campbell, hedonistic assumptions do not adequately explain intergroup relations. Campbell believed that these social exchange theorists oversimplified human behavior by likening interpersonal interaction to animal behavior. Similar to the ideas of Campbell, other researchers also began recognizing a problem in the psychological understanding of intergroup behavior. These researchers noted that prior to Campbell, social exchange theorists ignored the essence of social psychology and the importance of interchanges between groups. Contrary to prior theories, RCT takes into account the sources of conflict between groups, which include incompatible goals and competition over limited resources. Robbers Cave study The 1954 Robbers Cave experiment (or Robbers Cave study) by Muzafer Sherif and Carolyn Wood Sherif represents one of the most widely known demonstrations of RCT. The Sherifs' study was conducted over three weeks in a 200-acre summer camp in Robbers Cave State Park, Oklahoma, focusing on intergroup behavior. In this study, researchers posed as camp personnel, observing 22 eleven- and twelve-year-old boys who had never previously met and had comparable backgrounds (each subject was a white eleven- to twelve-year-old boy of average to slightly above average intelligence from a Protestant, middle-class, two-parent home). The experiments were conducted within the framework of regular camp activities and games. The experiment was divided into three stages. The first stage was "in-group formation", in which, upon arrival, the boys were housed together in one large bunkhouse. The boys quickly formed particular friendships. After a few days the boys were split randomly into two approximately equal groups. Each group was unaware of the other group's presence. The second stage was the "friction phase", wherein the groups were entered into competition with one another in various camp games. Valued prizes were awarded to the winners.
This caused both groups to develop negative attitudes and behaviors towards the outgroup. At this stage, 93% of the boys' friendships were within their in-group. The third and final stage was the "integration stage". During this stage, tensions between the groups were reduced through teamwork-driven tasks that required intergroup cooperation. The Sherifs made several conclusions based on the three-stage Robbers Cave experiment. From the study, they determined that because the groups were created to be approximately equal, individual differences are not necessary or responsible for intergroup conflict to occur. As seen in the study when the boys were competing in camp games for valued prizes, the Sherifs noted that hostile and aggressive attitudes toward an outgroup arise when groups compete for resources that only one group can attain. The Sherifs also established that contact with an outgroup is insufficient, by itself, to reduce negative attitudes. Finally, they concluded that friction between groups can be reduced, and positive intergroup relations maintained, only in the presence of superordinate goals that promote united, cooperative action. However, a further review of the Robbers Cave experiments, which were in fact a series of three separate experiments carried out by the Sherifs and colleagues, reveals additional considerations. In two earlier studies the boys ganged up on a common enemy, and in fact on occasion ganged up on the experimenters themselves, showing an awareness of being manipulated. In addition, Michael Billig argues that the experimenters themselves constitute a third group, and one that is arguably the most powerful of the three, and that they in fact become the outgroup in the aforementioned experiment. Lutfy Diab repeated the experiment with 18 boys from Beirut. The 'Blue Ghosts' and 'Red Genies' groups each contained 5 Christians and 4 Muslims. Fighting soon broke out, not between the Christians and Muslims but between the Red and Blue groups. Extensions and applications Implications for diversity and integration RCT offers an explanation for negative attitudes toward racial integration and efforts to promote diversity. This is illustrated in the data collected from the Michigan National Election Studies survey. According to the survey, most whites held negative attitudes toward school districts' attempts to integrate schools via school busing in the 1970s. In these surveys, whites expressed a general sense of threat from African Americans. It can be concluded that contempt towards racial integration was due to a perception of blacks as a danger to valued lifestyles, goals, and resources, rather than to symbolic racism or prejudiced attitudes formulated during childhood. RCT can also provide an explanation for why competition over limited resources in communities can present potentially harmful consequences in establishing successful organizational diversity. In the workplace, this is reflected in the finding that increased racial heterogeneity among employees is associated with job dissatisfaction among majority members. Since organizations are embedded in the communities to which their employees belong, the racial makeup of employees' communities affects attitudes toward diversity in the workplace. As racial heterogeneity increases in a white community, white employees are less accepting of workplace diversity.
RCT provides an explanation of this pattern because, in communities of mixed races, members of minority groups are seen as competing with the majority group for economic security, power, and prestige. RCT can help explain discrimination against different ethnic and racial groups. An example of this is shown in cross-cultural studies that determined that violence between different groups escalates in relation to shortages of resources. When a group has a notion that resources are limited and available for possession only by one group, this leads to attempts to remove the source of competition. Groups can attempt to remove their competition by increasing their group's capabilities (e.g., skill training), decreasing the abilities of the outgroup's competition (e.g., expressing negative attitudes or applying punitive tariffs), or by decreasing proximity to the outgroup (e.g., denying immigrant access). An extension to unequal groups Realistic conflict theory originally only described the results of competition between two groups of equal status. John Duckitt suggests that the theory be expanded to include competition between groups of unequal status. To demonstrate this, Duckitt created a scheme of types of realistic conflict with groups of unequal status and their resulting correlation with prejudice. Duckitt concluded that there are at least two types of conflict based on an ingroup's competition with an outgroup. The first is 'competition with an equal group', which is explained by realistic conflict theory: group-based threat leads ingroup members to feel hostile towards the outgroup, which can lead to conflict as the ingroup focuses on acquiring the threatened resource. The second type of conflict is 'domination of the outgroup by the ingroup'. This occurs when the ingroup and outgroup do not have equal status. If domination occurs, there are two responses the subordinate group may have. One is stable oppression, in which the subordinate group accepts the dominating group's attitudes on some focal issue and sometimes the dominant group's deeper values, to avoid further conflict. The second response that may occur is unstable oppression. This occurs when the subordinate group rejects the lower status forced upon them, and sees the dominating group as oppressive. The dominant group then may view the subordinates' challenge as either justified or unjustified. If it is seen as unjustified, the dominant group will likely respond to the subordinates' rebellion with hostility. If the subordinates' rebellion is viewed as justified, the subordinates are given the power to demand change. An example of this would be the eventual recognition of the civil rights movement in the 1960s in the United States. An extension to nations When group conflict extends to nations or tribes, Regality Theory argues that the collective danger leads citizens to develop strong feelings of national or tribal identity, to prefer a strong, hierarchical political system, to adopt strict discipline and punishment of deviants, and to express xenophobia and strict religious and sexual morality. See also Amity-enmity complex Discrimination Group conflict Group threat theory Intergroup relations Minimal group paradigm Prejudice Social psychology Stereotypes References Group processes Conflict (process) Psychological theories
Realistic conflict theory
Biology
1,900
965,606
https://en.wikipedia.org/wiki/Standpipe%20%28firefighting%29
A standpipe or riser is a type of rigid water piping which is built into multi-story buildings in a vertical position, or into bridges in a horizontal position, to which fire hoses can be connected, allowing manual application of water to the fire. Within the context of a building or bridge, a standpipe serves the same purpose as a fire hydrant. Dry standpipe When standpipes are fixed into buildings, the pipe is in place permanently with an intake usually located near a road or driveway, so that a fire engine can supply water to the system. The standpipe extends into the building to supply firefighting water to the interior of the structure via hose outlets, often located between each pair of floors in stairwells in high-rise buildings. Dry standpipes are not filled with water until needed for firefighting. Firefighters often bring hoses in with them and attach them to standpipe outlets located along the pipe throughout the structure. This type of standpipe may also be installed horizontally on bridges. Wet standpipe A "wet" standpipe is filled with water and is pressurized at all times. In contrast to dry standpipes, which can be used only by firefighters, wet standpipes can be used by building occupants. Wet standpipes generally already come with hoses so that building occupants may fight fires quickly. This type of standpipe may also be installed horizontally on bridges. Advantages Laying a firehose up a stairwell takes time, and this time is saved by having fixed hose outlets already in place. There is also a tendency for heavy wet hoses to slide downward when placed on an incline (such as the incline seen in a stairwell), whereas standpipes do not move. The use of standpipes keeps stairwells clear and is safer for exiting occupants. Standpipes run directly up and down rather than looping around the stairwell, greatly reducing the hose length needed and thus the loss of water pressure due to friction. Additionally, standpipes are rigid and do not kink, which can occur when a firehose is improperly laid on a stairwell. Standpipe systems also provide a level of redundancy, should the main water distribution system within a building fail or be otherwise compromised by a fire or explosion. Disadvantages Standpipes are not fail-safe systems and there have been many instances where fire operations have been compromised by standpipe systems which were damaged or otherwise not working properly. Firefighters must take precautions to flush the standpipe before use to clear out debris and ensure that water is available. See also Fire Equipment Manufacturers' Association Fire sprinkler References Essentials of Fire Fighting, Fourth Edition, copyright 1998 by the Board of Regents, Oklahoma State University Firefighting equipment Fire suppression Piping
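The pressure argument above can be made concrete with the standard fireground friction-loss approximation FL = C × Q² × L (FL in psi, Q in hundreds of gpm, L in hundreds of feet). The following is a minimal sketch; the hose coefficients are typical textbook values and the scenario figures are illustrative assumptions, not values from this article.

```python
# Hedged sketch: fireground friction-loss rule of thumb FL = C * Q^2 * L.
# C is a hose-diameter coefficient (typical textbook values assumed below),
# Q is flow in hundreds of gallons per minute, L is length in hundreds of feet.

HOSE_COEFF = {"1.75in": 15.5, "2.5in": 2.0}  # assumed typical coefficients

def friction_loss_psi(coeff: float, flow_gpm: float, length_ft: float) -> float:
    q = flow_gpm / 100.0
    l = length_ft / 100.0
    return coeff * q * q * l

# 150 gpm through 200 ft of 1.75" hose laid up a stairwell:
print(friction_loss_psi(HOSE_COEFF["1.75in"], 150, 200))  # ~69.8 psi lost
# versus a 50 ft attack line fed from a standpipe outlet near the fire floor:
print(friction_loss_psi(HOSE_COEFF["1.75in"], 150, 50))   # ~17.4 psi lost
```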
Standpipe (firefighting)
Chemistry,Engineering
561
20,185,928
https://en.wikipedia.org/wiki/Aging%20of%20wine
The aging of wine can potentially improve its quality. This distinguishes wine from most other consumable goods. While wine is perishable and capable of deteriorating, complex chemical reactions involving a wine's sugars, acids and phenolic compounds (such as tannins) can alter the aroma, color, mouthfeel and taste of the wine in a way that may be more pleasing to the taster. The ability of a wine to age is influenced by many factors including grape variety, vintage, viticultural practices, wine region and winemaking style. The condition that the wine is kept in after bottling can also influence how well a wine ages and may require significant time and financial investment. The quality of an aged wine varies significantly bottle-by-bottle, depending on the conditions under which it was stored, and the condition of the bottle and cork, and thus it is said that rather than good old vintages, there are good old bottles. There is a significant mystique around the aging of wine, as its chemistry was not understood for a long time, and old wines are often sold for extraordinary prices. However, the vast majority of wine is not aged, and even wine that is aged is rarely aged for long; it is estimated that 90% of wine is meant to be consumed within a year of production, and 99% of wine within 5 years. History The Ancient Greeks and Romans were aware of the potential of aged wines. In Greece, early examples of dried "straw wines" were noted for their ability to age due to their high sugar contents. These wines were stored in sealed earthenware amphorae and kept for many years. In Rome, the most sought-after wines – Falernian and Surrentine – were prized for their ability to age for decades. In the Book of Luke, it is noted that "old wine" was valued over "new wine". The Greek physician Galen wrote that the "taste" of aged wine was desirable and that this could be accomplished by heating or smoking the wine, though, in Galen's opinion, these artificially aged wines were not as healthy to consume as naturally aged wines. Following the Fall of the Roman Empire, appreciation for aged wine was virtually non-existent. Most of the wines produced in northern Europe were light-bodied, pale in color and low in alcohol. These wines did not have much aging potential and barely lasted a few months before they rapidly deteriorated into vinegar. The older a wine got, the cheaper its price became as merchants eagerly sought to rid themselves of aging wine. By the 16th century, sweeter and more alcoholic wines (like Malmsey and Sack) were being made in the Mediterranean and gaining attention for their aging ability. Similarly, Riesling from Germany, with its combination of acidity and sugar, was also demonstrating its ability to age. In the 17th century, two innovations occurred that radically changed the wine industry's view on aging. One was the development of the cork and bottle, which again allowed producers to package and store wine in a virtually air-tight environment. The second was the growing popularity of fortified wines such as Port, Madeira and Sherry. The added alcohol was found to act as a preservative, allowing wines to survive long sea voyages to England, the Americas and the East Indies. The English, in particular, were growing in their appreciation of aged wines like Port and Claret from Bordeaux. Demand for matured wines had a pronounced effect on the wine trade.
For producers, the cost and space of storing barrels or bottles of wine was prohibitive, so a merchant class evolved with warehouses and the finances to facilitate aging wines for a longer period of time. In regions like Bordeaux, Oporto and Burgundy, this situation dramatically increased the balance of power towards the merchant classes. Aging potential There is a widespread misconception that wine always improves with age, or that wine improves with extended aging, or that aging potential is an indicator of good wine. Some authorities state that more wine is consumed too old than too young. Aging changes wine, but does not categorically improve it or worsen it. Fruitiness deteriorates rapidly, decreasing markedly after only 6 months in the bottle. Due to the cost of storage, it is not economical to age cheap wines, but many varieties of wine do not benefit from aging, regardless of the quality. Experts vary on precise numbers, but typically state that only 5–10% of wine improves after 1 year, and only 1% improves after 5–10 years. In general, wines with a low pH (such as Pinot noir and Sangiovese) have a greater capability of aging. With red wines, a high level of flavor compounds, such as phenolics (most notably tannins), will increase the likelihood that a wine will be able to age. Wines with high levels of phenols include Cabernet Sauvignon, Nebbiolo and Syrah. The white wines with the longest aging potential tend to be those with a high amount of extract and acidity (such as Riesling). The acidity in white wines, acting as a preservative, has a role similar to that of tannins in red wines. The process of making white wines, which includes little to no skin contact, means that white wines have a significantly lower amount of phenolic compounds, though barrel fermentation and oak aging can impart some phenols. Similarly, the minimal skin contact with rosé wine limits its aging potential. After aging at the winery, most wood-aged ports, sherries, vins doux naturels, vins de liqueur, basic-level ice wines, and sparkling wines are bottled when the producer feels that they are ready to be consumed. These wines are ready to drink upon release and will not benefit much from aging. Vintage ports and other bottle-aged ports and sherries will benefit from some additional aging. Champagne and other sparkling wines are infrequently aged, and frequently have no vintage year (no vintage, NV), but vintage champagne may be aged. Aged champagne has traditionally been a peculiarly British affectation, and thus has been referred to as "the English taste", though this term also refers to a level of champagne sweetness. In principle champagne has aging potential, due to the acidity, and aged champagne has increased in popularity in the United States since the 1996 vintage. A few French winemakers have advocated aging champagne, most notably René Collard (1921–2009). In 2009, a 184-year-old bottle of Perrier-Jouët was opened and tasted, still drinkable, with notes of "truffles and caramel", according to the experts.
Little to no aging potential
A guideline provided by Master of Wine Jancis Robinson:
German QBAs (Qualitätswein bestimmter Anbaugebiete)
Asti and Moscato Spumante
Rosé and blush wines like White Zinfandel
Branded wines like Yellow Tail, Mouton Cadet, etc.
European table wine
American jug & box wine
Inexpensive varietals (with the possible exception of Cabernet Sauvignon)
The majority of Vin de pays
All Nouveau wines
Vermouth
Basic Sherry
Tawny Ports
Kit wines made from mostly concentrated grape juice
Good aging potential
Master of Wine Jancis Robinson provides the following general guidelines on aging wines. Note that vintage, wine region and winemaking style can influence a wine's aging potential, so Robinson's guidelines are general estimates for the most common examples of these wines.
Botrytized wines (5–25 yrs)
Chardonnay (2–6 yrs)
Riesling (2–30 yrs)
Hungarian Furmint (3–25 yrs)
Loire Valley Chenin blanc (4–30 yrs)
Hunter Valley Sémillon (6–15 yrs)
Cabernet Sauvignon (4–20 yrs)
Merlot (2–10 yrs)
Nebbiolo (4–20 yrs)
Pinot noir (2–8 yrs)
Sangiovese (2–8 yrs)
Syrah (4–16 yrs)
Zinfandel (2–6 yrs)
Classified Bordeaux (8–25 yrs)
Grand Cru Burgundy (8–25 yrs)
Aglianico from Taurasi (4–15 yrs)
Baga from Bairrada (4–8 yrs)
Hungarian Kadarka (3–7 yrs)
Bulgarian Melnik (3–7 yrs)
Croatian Plavac Mali (4–8 yrs)
Georgian Saperavi (3–10 yrs)
Madiran Tannat (4–12 yrs)
Spanish Tempranillo (2–8 yrs)
Greek Xynomavro (4–10 yrs)
Vintage Ports (20–50 yrs)
Bosnian-Herzegovinian Trnjak (5–10 yrs)
Bosnian-Herzegovinian Blatina (4–8 yrs)
Factors and influences Wine constituents The ratio of sugars, acids and phenolics to water is a key determinant of how well a wine can age. The less water in the grapes prior to harvest, the more likely the resulting wine will have some aging potential. Grape variety, climate, vintage and viticultural practice come into play here. Grape varieties with thicker skins, from a dry growing season where little irrigation was used and yields were kept low, will have less water and a higher ratio of sugar, acids and phenolics. The process of making Eisweins, where water is removed from the grape during pressing as frozen ice crystals, has a similar effect of decreasing the amount of water and increasing aging potential. In winemaking, the duration of maceration or skin contact will influence how many phenolic compounds are leached from skins into the wine. Pigmented tannins, anthocyanins, colloids, tannin-polysaccharides and tannin-proteins not only influence a wine's resulting color but also act as preservatives. During fermentation, adjustments to a wine's acid levels can be made, with lower-pH wines having more aging potential. Exposure to oak either during fermentation or after (during barrel aging) will introduce more phenolic compounds to the wines. Prior to bottling, excessive fining or filtering of the wine could strip it of some phenolic solids and may lessen its ability to age. Storage factors The storage condition of the bottled wine will influence a wine's aging. Vibrations and heat fluctuations can hasten a wine's deterioration and cause adverse effects. In general, a wine has a greater potential to develop complexity and a more aromatic bouquet if it is allowed to age slowly in a relatively cool environment. The lower the temperature, the more slowly a wine develops. On average, the rate of chemical reactions in wine doubles with each 18 °F (10 °C) increase in temperature. Wine expert Karen MacNeil recommends keeping wine intended for aging in a cool area with a constant temperature around 55 °F (13 °C). Wine can be stored at temperatures as high as 69 °F (20 °C) without long-term negative effect.
Professor Cornelius Ough of the University of California, Davis believes that wine could be exposed to temperatures as high as 120 °F (49 °C) for a few hours and not be damaged. However, most experts believe that extreme temperature fluctuations (such as repeatedly transferring a wine from a warm room to a cool refrigerator) would be detrimental to the wine. The ultra-violet rays of direct sunlight should also be avoided because of the free radicals that can develop in the wine and result in premature oxidation. Wines packaged in large-format bottles, such as magnums and 3-liter Jeroboams, seem to age more slowly than wines packaged in regular 750 ml bottles or half bottles. This may be because of the smaller proportion of oxygen exposed to the wine during the bottling process. The advent of alternative wine closures to cork, such as screw caps and synthetic corks, has opened up recent discussions on the aging potential of wines sealed with these alternative closures. Currently there are no conclusive results and the topic is the subject of ongoing research. Bottling factors Bottle shock One of the short-term aging needs of wine is a period where the wine is considered "sick" due to the trauma and volatility of the bottling experience. During bottling, the wine is exposed to some oxygen, which causes a domino effect of chemical reactions with various components of the wine. The time it takes for the wine to settle down and have the oxygen fully dissolve and integrate with the wine is considered its period of "bottle shock". During this time the wine could taste drastically different from how it did prior to bottling or how it will taste after the wine has settled. While many modern bottling lines try to treat the wine as gently as possible and utilize inert gases to minimize the amount of oxygen exposure, all wine goes through some period of bottle shock. The length of this period will vary with each individual wine. Cork taint The transfer of off-flavours from the cork used to bottle a wine during prolonged aging can be detrimental to the quality of the bottle. The formation of cork taint is a complex process which may result from a wide range of factors ranging from the growing conditions of the cork oak, to the processing of the cork into stoppers, to the molds growing on the cork itself. Dumb phase During the course of aging, a wine may slip into a "dumb phase" where its aromas and flavors are very muted. In Bordeaux this phase is called the age ingrat or "difficult age" and is likened to a teenager going through adolescence. The cause or length of time that this "dumb phase" will last is not yet fully understood and seems to vary from bottle to bottle. Effects on wine As red wine ages, the harsh tannins of its youth gradually give way to a softer mouthfeel. An inky dark color will gradually lose its depth, begin to appear orange at the edges, and eventually turn brown. These changes occur due to the complex chemical reactions of the phenolic compounds of the wine. In processes that begin during fermentation and continue after bottling, these compounds bind together and aggregate. Eventually these particles reach a certain size where they are too large to stay suspended in the solution and precipitate out. The presence of visible sediment in a bottle will usually indicate a mature wine. The resulting wine, with this loss of tannins and pigment, will have a paler color and taste softer and less astringent.
The sediment, while harmless, can have an unpleasant taste and is often separated from the wine by decanting. During the aging process, the perception of a wine's acidity may change even though the total measurable amount of acidity is more or less constant throughout a wine's life. This is due to the esterification of the acids, which combine with alcohols in a complex array to form esters. In addition to making a wine taste less acidic, these esters introduce a range of possible aromas. Eventually the wine may age to a point where other components of the wine (such as tannins and fruit) are less noticeable themselves, which will then bring back a heightened perception of the wine's acidity. Other chemical processes that occur during aging include the hydrolysis of flavor precursors, which detach themselves from glucose molecules and introduce new flavor notes in the older wine, and the oxidation of aldehydes. The interaction of certain phenolics develops what are known as tertiary aromas, which are different from the primary aromas that are derived from the grape and during fermentation. As a wine starts to mature, its bouquet will become more developed and multi-layered. While a taster may be able to pick out a few fruit notes in a young wine, a more complex wine will have several distinct fruit, floral, earthy, mineral and oak-derived notes. The lingering finish of a wine will lengthen. Eventually the wine will reach a point of maturity, when it is said to be at its "peak". This is the point when the wine has the maximum amount of complexity, the most pleasing mouthfeel and softened tannins, and has not yet started to decay. When this point will occur is not yet predictable and can vary from bottle to bottle. If a wine is aged for too long, it will start to descend into decrepitude where the fruit tastes hollow and weak while the wine's acidity becomes dominant. The natural esterification that takes place in wines and other alcoholic beverages during the aging process is an example of acid-catalysed esterification. Over time, the acidity of the acetic acid and tannins in an aging wine will catalytically protonate other organic acids (including acetic acid itself), encouraging ethanol to react as a nucleophile. As a result, ethyl acetate – the ester of ethanol and acetic acid – is the most abundant ester in wines. Other combinations of organic alcohols (such as phenol-containing compounds) and organic acids lead to a variety of different esters in wines, contributing to their different flavours, smells and tastes. Compared to sulfuric acid conditions, the acid conditions in a wine are mild, so the yield is low (often in tenths or hundredths of a percentage point by volume) and it takes years for esters to accumulate. Coates' Law of Maturity Coates' Law of Maturity is a principle used in wine tasting relating to the aging ability of wine. Developed by the British Master of Wine Clive Coates, the principle states that a wine will remain at its peak (or optimal) drinking quality for a duration of time that is equal to the time of maturation required to reach its optimal quality. During the aging of a wine certain flavors, aromas and textures appear and fade. Rather than developing and fading in unison, these traits each operate on a unique path and timeline.
The principle allows for the subjectivity of individual tastes: positive traits that appeal to one particular taster will persist along the principle's guideline, while for another taster those same traits might not be positive and the guideline would therefore not apply. Wine expert Tom Stevenson has noted that there is logic in Coates' principle and that he has yet to encounter an anomaly or wine that debunks it. Example An example of the principle in practice would be a wine that someone acquires when it is 9 years of age, but finds dull. A year later the drinker finds this wine very pleasing in texture, aroma and mouthfeel. Under Coates' Law of Maturity, the wine will continue to be drunk at an optimal maturation for that drinker until it has reached 20 years of age, at which time those positive traits that the drinker perceives will start to fade. Artificial aging There is a long history of using artificial means to try to accelerate the natural aging process. In Ancient Rome a smoke chamber known as a fumarium was used to enhance the flavor of wine through artificial aging. Amphorae were placed in the chamber, which was built on top of a heated hearth, in order to impart a smoky flavor in the wine that also seemed to sharpen the acidity. The wine would sometimes come out of the fumarium with a paler color, just like aged wine. Modern winemaking techniques like micro-oxygenation can have the side effect of artificially aging the wine. In the production of Madeira and rancio wines, the wines are deliberately exposed to excessive temperatures to accelerate the maturation of the wine. Other techniques used to artificially age wine (with inconclusive results on their effectiveness) include shaking the wine and exposing it to radiation, magnetism or ultrasonic waves. More recently, experiments with artificial aging through high-voltage electricity have produced better results than the other techniques, as assessed by a panel of wine tasters. Some artificial wine-aging gadgets include the "Clef du Vin", a metallic object that is dipped into wine and purportedly ages the wine one year for every second of dipping. The product has received mixed reviews from wine commentators. Several wineries have begun aging finished wine bottles undersea; ocean aging is thought to accelerate natural aging reactions as a function of depth (pressure). See also Ullage Barrel-aged beer References Further reading Suriano, Matthew, "A Fresh Reading for 'Aged Wine' in the Samaria Ostraca," Palestine Exploration Quarterly, 139,1 (2007), 27–33. External links Wine Wine chemistry
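The storage-temperature rule of thumb quoted earlier (reaction rates roughly double per 18 °F / 10 °C) lends itself to a quick calculation. A minimal sketch, assuming the doubling rule holds as an exponential across the range and taking MacNeil's 55 °F (13 °C) cellar as the reference point:

```python
# Hedged sketch of the "rates double per 10 °C" rule of thumb cited above.
# The exponential form and the 13 °C reference are assumptions for illustration.

def relative_aging_rate(temp_c: float, ref_c: float = 13.0) -> float:
    """Estimated aging speed relative to cellar temperature ref_c."""
    return 2.0 ** ((temp_c - ref_c) / 10.0)

for t in (13, 20, 30):
    print(f"{t} °C: ~{relative_aging_rate(t):.1f}x the cellar aging rate")
# 20 °C -> ~1.6x, 30 °C -> ~3.2x: under this rule, a wine kept in a warm room
# may mature (and decline) roughly three times as fast as one properly cellared.
```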
Aging of wine
Chemistry
4,272
21,019,434
https://en.wikipedia.org/wiki/Nutlin
Nutlins are cis-imidazoline analogs which inhibit the interaction between MDM2 and the tumor suppressor p53, and which were discovered by Vassilev et al. by screening a chemical library. Nutlin-1, nutlin-2, and nutlin-3 were all identified in the same screen; however, nutlin-3 is the compound most commonly used in anti-cancer studies. Nutlin small molecules occupy the p53-binding pocket of MDM2 and effectively disrupt the p53–MDM2 interaction, leading to activation of the p53 pathway in p53 wild-type cells. Inhibiting the interaction between MDM2 and p53 stabilizes p53, and is thought to selectively induce a growth-inhibiting state called senescence in cancer cells. These compounds are therefore thought to work best on tumors that contain normal or "wild-type" p53. Nutlin-3 has been shown to affect the production of p53 within minutes. The more potent of the two enantiomers, nutlin-3a ((–)-nutlin-3), can be synthesized in a highly enantioselective fashion. Several derivatives of nutlin, such as RG7112 and RG7388 (idasanutlin), have been developed and progressed into human studies. An imidazoline core bearing methoxyphenyl substituents also stabilizes p53. References Imidazolines
Nutlin
Chemistry
303
40,596,183
https://en.wikipedia.org/wiki/Tannosome
Tannosomes are organelles found in the plant cells of vascular plants. Formation and functions Tannosomes are formed when the chloroplast membrane forms pockets filled with tannin. Slowly, the pockets break off as tiny vacuoles that carry tannins to the large vacuole filled with acidic fluid. Tannins are then released into the vacuole and stored inside as tannin accretions. Tannosomes are responsible for synthesizing and producing condensed tannins and polyphenols. They condense tannins in chlorophyllous organs, providing defenses against herbivores and pathogens and protection against UV radiation. Discovery Tannosomes were discovered in September 2013 by French and Hungarian researchers. The discovery of tannosomes shed light on how plants produce the tannins that shape the flavour of wine, tea, chocolate, and other foods. See also Chloroplast Leucoplast Plastid References Organelles Plant cells Plant physiology Condensed tannins
Tannosome
Biology
215
3,199,778
https://en.wikipedia.org/wiki/Blind%20deconvolution
In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions about the input in order to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on the input and impulse response. Most of the algorithms to solve this problem are based on the assumption that both the input and the impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption. In image processing In image processing, blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single or a set of "blurred" images in the presence of a poorly determined or unknown point spread function (PSF). Regular linear and non-linear deconvolution techniques utilize a known PSF. For blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. Researchers have been studying blind deconvolution methods for several decades, and have approached the problem from different directions. Most of the work on blind deconvolution started in the early 1970s. Blind deconvolution is used in astronomical imaging and medical imaging. Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but not necessary. Examples of non-iterative techniques include SeDDaRA, the cepstrum transform and APEX. The cepstrum transform and APEX methods assume that the PSF has a specific shape, and one must estimate the width of the shape. For SeDDaRA, the information about the scene is provided in the form of a reference image. The algorithm estimates the PSF by comparing the spatial frequency information in the blurred image to that of the target image. Examples Any blurred image can be given as input to a blind deconvolution algorithm, which can deblur the image, but the essential conditions discussed above must not be violated. In the first example (picture of shapes), the recovered image was very close to the original image because L > K + N. In the second example (picture of a girl), L < K + N, so the essential condition is violated and the recovered image differs greatly from the original. In signal processing Seismic data In the case of deconvolution of seismic data, the original unknown signal is made of spikes, and hence it is possible to characterize it with sparsity constraints or regularizations such as l1/l2 norm ratios, as suggested by W. C. Gray in 1978. Audio deconvolution Audio deconvolution (often referred to as dereverberation) is the reduction of reverberation in audio mixtures. It is part of audio processing of recordings in ill-posed cases such as the cocktail party effect. One possibility is to use ICA. In general Suppose we have a signal transmitted through a channel. The channel can usually be modeled as a linear shift-invariant system, so the receptor receives a convolution of the original signal with the impulse response of the channel.
If we want to reverse the effect of the channel, to obtain the original signal, we must process the received signal by a second linear system, inverting the response of the channel. This system is called an equalizer. If we are given the original signal, we can use a supervised technique, such as finding a Wiener filter, but without it, we can still explore what we do know about it to attempt its recovery. For example, we can filter the received signal to obtain the desired spectral power density. This is what happens, for example, when the original signal is known to have no autocorrelation, and we "whiten" the received signal. Whitening usually leaves some phase distortion in the results. Most blind deconvolution techniques use higher-order statistics of the signals, and permit the correction of such phase distortions. We can optimize the equalizer to obtain a signal with a PSF approximating what we know about the original PSF. High-order statistics Blind deconvolution algorithms often make use of high-order statistics, with moments higher than two. This can be implicit or explicit. See also Channel model Inverse problem Regularization (mathematics) Blind equalization Maximum a posteriori estimation Maximum likelihood External links ImageJ plugin for deconvolution References Signal processing
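To make the iterative approach concrete, here is a minimal 1-D sketch of alternating Richardson-Lucy-style updates for the scene and the PSF, in the spirit of the iterative blind methods mentioned above. It uses circular convolution for simplicity; the signal, PSF, and iteration counts are illustrative assumptions, not a reference implementation of any specific published algorithm.

```python
import numpy as np

def cconv(a, b):
    # circular convolution via FFT (a simplifying assumption)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # circular correlation (convolution with the mirrored kernel)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

def blind_rl(g, n_iter=200, eps=1e-12):
    """Alternate Richardson-Lucy-style updates of scene f and PSF h."""
    n = len(g)
    f = np.full(n, g.mean())   # flat initial scene estimate
    h = np.full(n, 1.0 / n)    # flat initial PSF estimate
    for _ in range(n_iter):
        f *= ccorr(g / (cconv(f, h) + eps), h)   # update scene, PSF held fixed
        h *= ccorr(g / (cconv(f, h) + eps), f)   # update PSF, scene held fixed
        h = np.clip(h, 0.0, None)
        h /= h.sum()                             # keep the PSF normalized
    return f, h

# Illustrative test: a spiky "seismic-like" signal blurred by an unknown kernel.
true_f = np.zeros(64); true_f[[10, 30, 45]] = [1.0, 2.0, 1.5]
true_h = np.zeros(64); true_h[:5] = [0.1, 0.2, 0.4, 0.2, 0.1]
observed = cconv(true_f, true_h)
f_est, h_est = blind_rl(observed)
```

Note the inherent shift and scale ambiguity of blind deconvolution: the recovered scene and PSF are only determined up to a translation and a constant factor, which is one reason the problem is non-convex and hard.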
Blind deconvolution
Technology,Engineering
1,000
7,248,022
https://en.wikipedia.org/wiki/French%20aircraft%20carrier%20Verdun
Verdun was an aircraft carrier under development in France in the 1950s which was cancelled before the design was completed. History With the Clemenceau-class carriers soon to enter service, the French Navy launched an effort to build a larger carrier specifically with the nuclear strike role in mind. Construction of the carrier was considered in 1958, but due to cost the program was cancelled in 1961. For more than 30 years, France would rely on the Clemenceau class to provide fixed-wing aviation. These two ships were modified in the 1980s to accommodate AN-52 nuclear bombs, taking over part of the role of the cancelled Verdun. France finally built a new carrier, the Charles de Gaulle, at the end of the 1990s. See also List of aircraft carriers of France References Proposed aircraft carriers Verdun Verdun
French aircraft carrier Verdun
Engineering
166
25,885,554
https://en.wikipedia.org/wiki/Kepler%20Input%20Catalog
The Kepler Input Catalog (or KIC) is a publicly searchable database of roughly 13.2 million targets used for the Kepler Spectral Classification Program (SCP) and the Kepler space telescope. Overview The Kepler SCP targets were observed by the 2MASS project as well as in Sloan filters, such as the griz filters. The catalog alone is not used for finding Kepler targets, because only a portion (about 1/3 of the catalog) can be observed by the spacecraft. The full catalog includes targets down to magnitude 21, giving 13.2 million targets, but of these only about 4.5 to 6.5 million fall on Kepler's sensors. KIC is one of the few comprehensive star catalogs for a spacecraft's field of view. The KIC was created because no catalog of sufficient depth and information existed for target selection at that time. The catalog includes "mass, radius, effective temperature, log (g), metallicity, and reddening extinction". An example of a KIC catalog entry is KIC #10227020. Having had transit signals detected for this star, it became a Kepler Object of Interest, with the designation KOI-730. The planets around the star are confirmed, so the star has the Kepler catalog designation Kepler-223. Not all Kepler Input Catalog stars with confirmed planets get a Kepler Object of Interest designation. The reason is that sometimes transit signals are detected by observations that were not made by the Kepler team. An example of one of these objects is Kepler-78b. Notable objects KIC 8462852 is a binary star whose primary shows a mysterious transit profile. The origin of this profile is uncertain, with proposed explanations ranging from an uneven dust ring to a Dyson swarm or similar alien megastructure. KIC 9832227 is a contact binary and an eclipsing binary with a period of about 11 hours. KIC 11026764 is a G-type subgiant star whose asteroseismology has been studied extensively by Kepler. It shows weak variability with a period of about 1100 seconds. is an eclipsing binary system consisting of two red giants. The primary component of the system has a radius of , a mass of , and a temperature of , while the secondary component has a radius of , a mass of , and the same temperature. Both stars orbit each other at a distance of , completing one orbit every 171 days. KIC 11145123 is one of the more interesting non-KOI objects in the list. An A-type main-sequence star with unusually slow rotation for its high mass, it is currently believed to be the roundest natural object known. See also Kepler object of interest (KOI) Hubble Guide Star Catalog Tabby's Star References External links Kepler Input Catalog (SAO) Kepler space telescope Astronomical catalogues of stars
Kepler Input Catalog
Astronomy
583
48,480,782
https://en.wikipedia.org/wiki/Kepler-24c
Kepler-24c is an exoplanet orbiting the star Kepler-24, located in the constellation Lyra. It was discovered by the Kepler telescope in January 2012. It orbits its parent star at a distance of only 0.106 astronomical units, completing an orbit once every 12.3335 days. References Transiting exoplanets Kepler-24 Exoplanets discovered by the Kepler space telescope Exoplanets discovered in 2012 Lyra
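The quoted orbit can be sanity-checked with Kepler's third law, P² = a³/M (P in years, a in AU, M in solar masses). A minimal sketch; the host-star mass used here is an assumed, roughly solar value, since the article does not state it:

```python
import math

a_au = 0.106     # semi-major axis from the article
m_star = 1.03    # assumed host-star mass in solar masses (hypothetical value)

# Kepler's third law in solar units: P [yr] = sqrt(a^3 / M)
period_days = 365.25 * math.sqrt(a_au ** 3 / m_star)
print(f"{period_days:.1f} days")  # ~12.4 days, close to the quoted 12.3335 days
```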
Kepler-24c
Astronomy
96
29,779,240
https://en.wikipedia.org/wiki/Tego%20film
Tego film is an adhesive sheet used in the manufacture of waterproof plywood. It is applied dry and cured by heat, which allows for high-quality laminates that are free from internal voids and warping. Tego film plywood products were used in aircraft manufacture in Germany during World War II, and the loss of the plant during a 1943 bombing raid was a serious blow to several aircraft projects. Tego film was an invention of the Essen, Germany, firm of Th. Goldschmidt AG (later Evonik Industries). Development and use for plywood Tego film was developed as a glue for waterproof plywood. It comprised a paper sheet impregnated with a resole phenolic resin. When heated, assembled between wood veneers and then compressed, it formed a strong and waterproof laminated plywood. Most plywood at this time used other adhesives, such as casein. These adhesives were generally applied as aqueous solutions, which caused warping of thin veneers and made achieving a solid laminate without risk of voids difficult. As Tego film was used dry, it gave a high-integrity result, solid and without risk of hidden weaknesses. This became an important factor when it was later applied to aircraft construction. This adhesive was manufactured by Th. Goldschmidt AG Essen at their Wuppertal subsidiary. By 1932, it was being exported worldwide. Use for aircraft The de Havilland Albatross airliner of 1936 had a fuselage of wooden sandwich construction; wafers of birch plywood were spaced apart by a balsa sheet and glued by a casein adhesive. This same construction, but with Aerolite, a urea-formaldehyde adhesive, achieved fame with its wartime use in the DH.98 Mosquito fast bomber. As well as being a construction of light weight and high performance, it also avoided the use of aluminium, a strategic material during wartime, and could use the skills of woodworkers, rather than those of specialised aircraft metalworkers. When Germany attempted to emulate this aircraft with the Ta 154 Moskito, it used Tego film. Flight testing and development of the first Tego film-bonded prototypes from summer 1943 to 1944 was highly successful, but RAF bombing of Wuppertal in February 1943 (incidentally one of the first Oboe-equipped Pathfinder squadron bombing raids) had already destroyed the only Th. Goldschmidt AG factory producing Tego film. RAF squadron records show a second raid of 719 aircraft on Wuppertal on the night of 29/30 May 1943, with the primary objective being the Goldschmidt Tego film plant. For the production aircraft, an ersatz cold adhesive was used, produced by Dynamit AG of Leverkusen. During a test flight on 28 June 1944, one of the two aircraft broke up in flight. Investigation showed that the glue left an acidic residue after curing, which in turn damaged the structure of the timber. Mass production of the aircraft never took place after this. Another case of an aircraft design whose existence was threatened by the acidic replacement adhesive was the largely wood-structured Heinkel He 162 Spatz, the winner of Nazi Germany's last significant aircraft design competition, for a Volksjäger, or "people's fighter": a single-engined, jet-powered fighter-interceptor developed as part of the Jägernotprogramm in the concluding months of the war. Modern use Today, four factories produce Tego film: Surfactor in Germany, AICA in Japan, Kotkamills Imprex in Finland, and Dongwha in South Korea. Dongwha has bought Kotkamills Imprex. The bonding material is exported to plywood manufacturers all over the world.
See also Aerolite (adhesive) Redux (adhesive) Duramold References Woodworking adhesives Aerospace materials Plywood Materials
Tego film
Physics,Engineering
815
41,066,878
https://en.wikipedia.org/wiki/Niche%20picking
Niche picking is a psychological theory that people choose environments that complement their heredity. For example, extroverts may deliberately engage with others like themselves. Niche picking is a component of gene-environment correlation. Scarr and McCartney's model In 1983, psychology professors Sandra Scarr and Kathleen McCartney proposed that genes affect the environments individuals choose to interact with, and that phenotypes influence individuals’ exchanges with people, places, and situations. The model states that genotypes can determine an individual's response to a certain environment, and that these genotype-environment pairs can affect human development. Scarr and McCartney, influenced by Robert Plomin's findings, recognized three types of gene-environment correlations. As humans develop, they enter each of these stages in succession, and each is more influential than the last. Passive During infancy, individuals' environments are provided by their parents. The rearing environment reflects the parents' genes, so it is genetically suitable for the child. Evocative Environments respond to individuals based on the genes they express (phenotype). Infants and adolescents evoke social and physical responses from their environments through this interaction. Experiences, and therefore development, are more influenced by evocation than by the passive environment. However, the influence of evocation declines over time. Active Individuals selectively attend to aspects of their environment that correlate to their specific genotypes and autonomously choose environments to interact with. Their selections are based on motivational, personality, and intellectual aspects of their genotype. Therefore, environmental interactions are person specific and can vary greatly. Since these environments are chosen rather than encountered, they have a greater effect on experience and development. Role Scarr and McCartney defined niche picking as a mechanism used to select environments suitable for one's genotype. Therefore, an individual’s temperament often affects the type of niche selected, since environment reflects one’s general disposition. An individual's niche can change over time, as explained in Emilie Snell-Rood's theory of behavioral plasticity and evolution. Snell-Rood argues that one element of developmental behavioral plasticity is the change in a gene’s expressed phenotype as a result of a change in environment. Expressed behaviors reflect the environment one welcomes, and these behaviors change as a result of that environment. If an individual has encountered an environment before, their behavioral change can be attributed to learning, allowing the production of different responses. With respect to niche picking, this suggests that individuals' process of selecting environments evolves, as does their method of response and level of responsiveness. Examples The genotype-environment model states that as siblings and fraternal twins age, their phenotypes grow apart. This is due to their respective mastery of the passive, evocative, and active interactions. When the siblings are infants, the environments their parents provide are similar. But as they age and begin to evoke responses from their environments, the social and physical elements they encounter start to vary. The personal characteristics that encourage environmental responses, such as appearance, personality, and intellect, are not the same between siblings and fraternal twins because of gene variations. 
Once siblings can actively interact with their environment and select environments they like, differences between their niches become clear. This process is evident in families where one child is outgoing and lively while the other is timid and cautious. According to Frank Sulloway, a social researcher, most characteristic differences between siblings result from personality variation and non-shared environments, both of which are influenced by:
parental investment
the tendency for siblings to differentiate themselves from each other, often by assuming opposite dispositions
birth order, personality, and gender roles.
Together, these elements give siblings different evocative and active environmental experiences that reflect their individual niches. In identical twins, this process is different. When siblings are the same age and have the same appearance, some people respond to them identically, despite their different personalities. Twins encounter the same social and physical influences from their environments, whether they have been reared separately or together. Often, this causes them to develop similar niches, though it does not guarantee that they will. Contemporary applications Scarr and McCartney's model provides a framework for examining the role that children's genotypes play in determining environmental interactions. Two major topics associated with this research are public policies to promote children's education and the heritability of political beliefs. Implications for policy makers In a 1996 study, Scarr examined the implications of genotype-environment interaction for public policy, specifically in education. She advised policy makers to be cautious when using programs such as Head Start to encourage intellectual development in children, arguing that genotype-environment interactions gave all children (except those raised in particularly abusive or neglectful homes) "good-enough opportunities" to develop without the aid of such programs. Policy makers might expect to see a jump in children's intellectual abilities from programs such as Head Start, which are designed to introduce children to a school setting, create a stable environment, and further their cognitive talents. Scarr, however, suggests that they cannot fully recreate what intelligent parents and a nurturing environment provide. She instead advocates a varied and stimulating environment that lets children use various types of niche expression. Scarr conducted an experiment in 1997 on the impact of out-of-home day care on children. From this and previous studies, she concluded that the quality of day care had only small, temporary effects on intellectual development. She noted that for children from good homes, genotype-environment interaction usually provides most of the intellectual development they need. Parental and environmental support allows these children to explore the niches most suited to their intellectual desires and abilities. These findings suggest that children from low-income families can benefit from programs that offer the same kind of support. Rather than a narrow environment that focuses on assimilating children into educational institutions, a stimulating environment would provide the most benefit to children who do not get adequate levels of genotype-environment interaction at home. References Developmental psychology Psychological theories
Niche picking
Biology
1,231
4,558,361
https://en.wikipedia.org/wiki/Screening%20%28economics%29
Screening in economics refers to a strategy of combating adverse selection – one of the potential decision-making complications in cases of asymmetric information – by the agent(s) with less information. For the purposes of screening, asymmetric information cases assume two economic agents attempting to engage in some sort of transaction. There often exists a long-term relationship between the two agents, though that qualifier is not necessary. Fundamentally, the strategy involved in screening comprises the "screener" (the agent with less information) attempting to gain insight into private information that the other economic agent possesses and which is initially unknown to the screener before the transaction takes place. In gathering such information, the information asymmetry between the two agents is reduced, meaning that the screening agent can then make more informed decisions when partaking in the transaction. Industries that utilise screening are able to separate useful information from false information in order to get a clearer picture of the informed party. This is important when addressing problems such as adverse selection and moral hazard. Moreover, screening allows for efficiency, since it enhances the flow of information between agents, and asymmetric information typically causes inefficiency. Screening is applied in a number of industries and markets. The exact type of information intended to be revealed by the screener ranges widely; the actual screening process implemented depends on the nature of the transaction taking place. Often it is closely connected with the future relationship between the two agents. Both economic agents can benefit from screening; for example, in job markets, when employers screen future employees through the job interview, they are able to identify the areas in which the employee needs further training. This benefits both parties, as it allows the employer to get the most out of the hire and the individual to broaden their skill set. The concept of screening was first developed by Michael Spence (1973). It should be distinguished from signalling – a strategy of combating adverse selection undertaken by the agent(s) with more information. Examples Labour market Screening techniques are employed within the labour market during the hiring and recruitment stage of a job application process. In brief, the hiring party (the agent with less information) attempts to reveal more about the characteristics of potential job candidates (the agents with more information) so as to make the most optimal choice in recruiting a worker for the role. Screening techniques include: Application review – the hiring party initially screens applicants by reviewing their application submission and any responses received, including an evaluation of their resume and cover letter, to reveal education, experience and fit for the role Aptitude testing and assessment – the hiring party may require applicants to undertake a range of testing exercises (either online or in-person) to reveal academic or practical abilities Interviews – candidates are often required to undertake an interview with a representative(s) from the hiring party to reveal a range of factors such as personality traits, verbal communication ability and confidence level Insurance market The process of screening customers is highly applicable in the market for insurance.
In general, parties providing insurance perform such activities to reveal the overall risk level of a customer, and as such, the likelihood that they will file a claim. When in possession of this information, the insuring party can ensure a suitable form of cover (i.e. commensurate with the customer's risk level) is provided. In particular, Michael Rothschild and Joseph Stiglitz conducted research on the insurance market and how parties can improve their position when presented with asymmetric information. Rothschild and Stiglitz found that the uninformed party in a market can initiate action to extract information from the informed party through screening, and thereby better position itself in the market. Insurance companies (the uninformed party) lacked information on the risk level of consumers (the informed party). Through screening, insurance companies were able to gain information on the risk level of their consumers; this was done by offering policyholders incentives to disclose such information. This allowed insurance companies to create a range of risk classes to which their consumers were allocated. Moreover, it allowed insurance companies to create policy contracts with higher deductibles in exchange for lower premiums. Screening techniques include: Background check – the party providing insurance obtains information about the customer such as their criminal history, credit rating and previous employment to reveal past behaviors Provision of demographic information – the party providing insurance obtains information about the customer such as their age, gender and ethnicity to reveal their type. For example, a young male has a higher risk of being in a car accident than a middle-aged woman Other information gathered by insurance parties during a screening process is usually specific to the type of insurance the customer is seeking. For example, car insurance will require provision of accident history, health insurance will require provision of health condition and previous illnesses, and so on. Moral hazard: Moral hazard takes place when one party engages in actions that harm the other party. Moral hazard is a particular risk for insurance companies, as a party may engage in risky behaviour precisely because it has insurance coverage and will benefit from being compensated by the insurance company. In this case the insurance company is the uninformed party; however, through screening processes such as examining historic behaviour, insurance companies are able to identify such individuals and offer them a different insurance plan. Product market Businesses apply screening techniques when generating and adapting a new product idea. Once businesses have developed product ideas, screening processes are used in order to determine how well the product will do in the market. In this scenario, businesses are the uninformed party whilst consumers are the informed party; in order to understand what consumers are looking for in products, businesses deploy screening techniques to get a detailed picture. Screening techniques include: Research and development – businesses take feedback from consumers based on prior products or products similar to the one currently being developed to find what areas to improve on, as well as how to create a point of difference to establish an innovative product that yields high returns.
Moreover, this allows businesses to identify consumer needs, the profitability of the idea and where the product fits in the market. Test marketing – the party providing the product obtains information from a group of individuals that represents the product market, in order to understand how well the product will do in the market as well as how much individuals value the product. This screening process allows businesses to further understand how to market the product to appeal to individuals, and to gain information on the product market. Product launch – launching a product is itself a screening process, as the launch marks the beginning of the product life cycle and allows businesses to gain further information on how the product will do in the market. Based on how the product performs, as well as the feedback provided by consumers, businesses are able to learn which areas of the product need to be improved. Other techniques Second-degree price discrimination is also an example of screening, whereby a seller offers a menu of options and the buyer's choice reveals their private information. Specifically, such a strategy attempts to reveal more information about a buyer's willingness to pay. For example, an airline offering economy, premium economy, business and first class tickets reveals information regarding the amount the customer is willing to spend on their airfare. With such information, firms can capture a greater portion of total market surplus. Incorrect Screening One downfall of deploying screening techniques is that the information gathered may be incorrect, which can lead to inefficiency. For example, an unproductive employee may perform well in screening exams such as aptitude testing. However, as the employer is the uninformed party, they will not be able to notice this until the individual has been employed, and the time and effort invested in the employee is wasted. Hence, it is important for industries to understand the biases involved when utilising screening techniques. Incorrect Screening in the Insurance Market Typical screening processes in the insurance market involve looking at historic data and demographic information; however, these screening processes may lead to incorrect conclusions. For example, a young male would typically be classed as high risk; this may not reflect reality, as he could be a safe driver. Therefore, insurance companies need to ensure that further information is gathered before assigning individuals to a risk category. Contract theory In contract theory, the terms "screening models" and "adverse selection models" are often used interchangeably. An agent has private information about his type (e.g., his costs or his valuation of a good) before the principal makes a contract offer. The principal will then offer a menu of contracts in order to separate the different types. Typically, the best type will trade the same amount as in the first-best benchmark solution (which would be attained under complete information), a property known as "no distortion at the top". All other types typically trade less than in the first-best solution (i.e., there is a "downward distortion" of the trade level). Optimal auction design (more generally known as Bayesian mechanism design) can be seen as a multi-agent version of the basic screening model. Contract-theoretic screening models were pioneered by Roger Myerson and Eric Maskin.
They have been extended in various directions. For example, it has been shown that, in the context of patent licensing, optimal screening contracts may actually yield too much trade compared to the first-best solution. Applications of screening models include regulation, public procurement, and monopolistic price discrimination. Contract-theoretic screening models have been successfully tested in laboratory experiments and using field data. See also Adverse selection Information asymmetry Joseph E. Stiglitz References Asymmetric information Education economics
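The "no distortion at the top" and "downward distortion" properties described above can be illustrated with the textbook two-type screening problem. The sketch below uses assumed functional forms (buyer utility θq − t, seller cost q²/2, two types θ_L < θ_H); under these assumptions the closed-form menu gives the high type the efficient quantity while distorting the low type's quantity downward to limit the high type's information rent.

```python
# Hedged sketch of a two-type screening menu (second-degree price discrimination).
# Assumed model: buyer utility theta*q - t, seller cost q^2 / 2, type theta_H
# occurring with probability p_H. These functional forms are illustrative only.

def screening_menu(theta_L: float, theta_H: float, p_H: float):
    p_L = 1.0 - p_H
    q_H = theta_H  # "no distortion at the top": efficient quantity for the high type
    # Low type's quantity is distorted downward (possibly to exclusion):
    q_L = max(0.0, theta_L - (p_H / p_L) * (theta_H - theta_L))
    t_L = theta_L * q_L                               # low type's participation constraint binds
    t_H = theta_H * q_H - (theta_H - theta_L) * q_L   # high type keeps an information rent
    return (q_L, t_L), (q_H, t_H)

low, high = screening_menu(theta_L=1.0, theta_H=2.0, p_H=0.3)
print("low-type contract (q, t): ", low)   # q below the first-best q = theta_L = 1
print("high-type contract (q, t):", high)  # q = 2, the efficient (first-best) level
```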
Screening (economics)
Physics
1,971
27,487,812
https://en.wikipedia.org/wiki/Semilinear%20response
Semi-linear response theory (SLRT) is an extension of linear response theory (LRT) for mesoscopic circumstances: LRT applies if the driven transitions are much weaker/slower than the environmental relaxation/dephasing effect, while SLRT assumes the opposite conditions. SLRT uses a resistor network analogy (see illustration) in order to calculate the rate of energy absorption: the driving induces transitions between energy levels, and connected sequences of transitions are essential in order to have a non-vanishing result, as in the theory of percolation. Applications The original motivation for introducing SLRT was the study of mesoscopic conductance. The term SLRT was coined in a later work, where it was applied to the calculation of energy absorption by metallic grains. Later the theory was applied to analysing the rate of heating of atoms in vibrating traps. Definition of semilinear response Consider a system that is driven by a source $f(t)$ that has a power spectrum $\tilde{S}(\omega)$. The latter is defined as the Fourier transform of the autocorrelation function of $f(t)$. In linear response theory (LRT) the driving source induces a steady state which is only slightly different from the equilibrium state. In such circumstances the response (the absorbed power $\dot{Q}$) is a linear functional of the power spectrum: $\dot{Q} = \int_{-\infty}^{\infty} \mu(\omega)\,\tilde{S}(\omega)\,d\omega$. In the traditional LRT context $\dot{Q}$ represents the rate of heating, and $\mu(\omega)$ can be defined as the absorption coefficient. Whenever such a relation applies, the response satisfies both [A] homogeneity, $\dot{Q}[\lambda \tilde{S}] = \lambda\,\dot{Q}[\tilde{S}]$, and [B] additivity, $\dot{Q}[\tilde{S}_1+\tilde{S}_2] = \dot{Q}[\tilde{S}_1]+\dot{Q}[\tilde{S}_2]$. If the driving is very strong the response becomes non-linear, meaning that both properties [A] and [B] do not hold. But there is a class of systems whose response becomes semi-linear, i.e. the first property [A] still holds, but not [B]. Resistor network modeling SLRT applies whenever the driving is strong enough such that relaxation to the steady state is slow compared with the driven dynamics. Yet one assumes that the system can be modeled as a resistor network, mathematically expressed as $G = \langle\langle g \rangle\rangle$. The notation $\langle\langle \cdots \rangle\rangle$ stands for the usual electrical engineering calculation of the two-terminal conductance of a given resistor network. For example, parallel connections imply $\langle\langle g \rangle\rangle = g_1 + g_2$, while serial connections imply $\langle\langle g \rangle\rangle = \left(g_1^{-1}+g_2^{-1}\right)^{-1}$. The resistor network calculation is manifestly semi-linear because it satisfies $\langle\langle \lambda g \rangle\rangle = \lambda \langle\langle g \rangle\rangle$, but in general $\langle\langle g_{(1)}+g_{(2)} \rangle\rangle \neq \langle\langle g_{(1)} \rangle\rangle + \langle\langle g_{(2)} \rangle\rangle$. Fermi golden rule picture In the quantum mechanical calculation of energy absorption, the couplings $g$ represent Fermi-golden-rule transition rates between energy levels. If only neighboring levels are coupled, serial addition implies $G = \left[\sum_n g_n^{-1}\right]^{-1}$, which is manifestly semi-linear. Results for sparse networks, which are encountered in the analysis of weakly chaotic driven systems, are more interesting and can be obtained using a generalized variable range hopping (VRH) scheme. References Quantum mechanics Statistical mechanics Non-equilibrium thermodynamics
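The semilinear character of the network calculation can be demonstrated numerically. Below is a minimal sketch for a chain of serially coupled levels (the case discussed above), checking that the harmonic ("serial") addition is homogeneous but not additive; the rate values are arbitrary illustrations.

```python
# Hedged sketch: serial ("resistor network") addition of transition rates.
# Homogeneity G[c*g] = c*G[g] holds, but additivity does not - hence "semilinear".

def serial(rates):
    # A vanishing rate anywhere blocks the whole chain, as in percolation.
    if any(g == 0.0 for g in rates):
        return 0.0
    return 1.0 / sum(1.0 / g for g in rates)

g1 = [1.0, 2.0, 4.0]
g2 = [4.0, 2.0, 1.0]

print(serial([2.0 * g for g in g1]) == 2.0 * serial(g1))   # True: homogeneous
lhs = serial([a + b for a, b in zip(g1, g2)])              # G[g1 + g2]
rhs = serial(g1) + serial(g2)                              # G[g1] + G[g2]
print(abs(lhs - rhs) > 1e-9)                               # True: not additive
```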
Semilinear response
Physics,Mathematics
533
7,085,075
https://en.wikipedia.org/wiki/Oswald%20Veblen%20Prize%20in%20Geometry
The Oswald Veblen Prize in Geometry is an award granted by the American Mathematical Society for notable research in geometry or topology. It was funded in 1961 in memory of Oswald Veblen and first issued in 1964. The Veblen Prize is now worth US$5000, and is awarded every three years. The first seven prize winners were awarded for works in topology. James Harris Simons and William Thurston were the first ones to receive it for works in geometry (for some distinctions, see geometry and topology). As of 2022, there have been thirty-seven prize recipients. List of recipients 1964 Christos Papakyriakopoulos, for: "On Solid Tori", Proceedings of the London Mathematical Society "On Dehn's lemma and the asphericity of knots", Annals of Mathematics 1964 Raoul Bott, for: "The space of loops on a Lie group", Michigan Math. J. "The stable homotopy of the classical groups", Annals of Mathematics 1966 Stephen Smale 1966 Morton Brown and Barry Mazur 1971 Robion Kirby, for: "Stable homeomorphisms and the annulus conjecture", Proc. Amer. Math. Soc 1971 Dennis Sullivan 1976 William Thurston 1976 James Harris Simons 1981 Mikhail Gromov for: "Manifolds of negative curvature." Journal of Differential Geometry 13 (1978), no. 2, 223–230. "Almost flat manifolds." Journal of Differential Geometry 13 (1978), no. 2, 231–241. "Curvature, diameter and Betti numbers." Comment. Math. Helv. 56 (1981), no. 2, 179–195. "Groups of polynomial growth and expanding maps." Inst. Hautes Études Sci. Publ. Math. 53 (1981), 53–73. "Volume and bounded cohomology." Inst. Hautes Études Sci. Publ. Math. 56 (1982), 5–99 1981 Shing-Tung Yau for: "On the regularity of the solution of the n-dimensional Minkowski problem." Comm. Pure Appl. Math. 29 (1976), no. 5, 495–516. (with Shiu-Yuen Cheng) "On the regularity of the Monge-Ampère equation" . Comm. Pure Appl. Math. 30 (1977), no. 1, 41–68. (with Shiu-Yuen Cheng) "Calabi's conjecture and some new results in algebraic geometry." Proc. Natl. Acad. Sci. U.S.A. 74 (1977), no. 5, 1798–1799. "On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I." Comm. Pure Appl. Math. 31 (1978), no. 3, 339–411. "On the proof of the positive mass conjecture in general relativity." Comm. Math. Phys. 65 (1979), no. 1, 45–76. (with Richard Schoen) "Topology of three-dimensional manifolds and the embedding problems in minimal surface theory." Ann. of Math. (2) 112 (1980), no. 3, 441–484. (with William Meeks) 1986 Michael Freedman for: The topology of four-dimensional manifolds. Journal of Differential Geometry 17 (1982), no. 3, 357–453. 1991 Andrew Casson for: his work on the topology of low dimensional manifolds and specifically for the discovery of an integer valued invariant of homology three spheres whose reduction mod(2) is the invariant of Rohlin. 1991 Clifford Taubes for: Self-dual Yang-Mills connections on non-self-dual 4-manifolds. Journal of Differential Geometry 17 (1982), no. 1, 139–170. Gauge theory on asymptotically periodic 4-manifolds. J. Differential Geom. 25 (1987), no. 3, 363–430. Casson's invariant and gauge theory. J. Differential Geom. 31 (1990), no. 2, 547–599. 1996 Richard S. Hamilton for: The formation of singularities in the Ricci flow. Surveys in differential geometry, Vol. II (Cambridge, MA, 1993), 7–136, Int. Press, Cambridge, MA, 1995. Four-manifolds with positive isotropic curvature. Comm. Anal. Geom. 5 (1997), no. 1, 1–92. 1996 Gang Tian for: On Calabi's conjecture for complex surfaces with positive first Chern class. Invent. Math. 
101 (1990), no. 1, 101–172. Compactness theorems for Kähler-Einstein manifolds of dimension 3 and up. J. Differential Geom. 35 (1992), no. 3, 535–558. A mathematical theory of quantum cohomology. J. Differential Geom. 42 (1995), no. 2, 259–367. (with Yongbin Ruan) Kähler-Einstein metrics with positive scalar curvature. Invent. Math. 130 (1997), no. 1, 1–37. 2001 Jeff Cheeger for: Families index for manifolds with boundary, superconnections, and cones. I. Families of manifolds with boundary and Dirac operators. J. Funct. Anal. 89 (1990), no. 2, 313–363. (with Jean-Michel Bismut) Families index for manifolds with boundary, superconnections and cones. II. The Chern character. J. Funct. Anal. 90 (1990), no. 2, 306–354. (with Jean-Michel Bismut) Lower bounds on Ricci curvature and the almost rigidity of warped products. Ann. of Math. (2) 144 (1996), no. 1, 189–237. (with Tobias Colding) On the structure of spaces with Ricci curvature bounded below. I. J. Differential Geom. 46 (1997), no. 3, 406–480. (with Tobias Colding) 2001 Yakov Eliashberg for: Combinatorial methods in symplectic geometry. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), 531–539, Amer. Math. Soc., Providence, RI, 1987. Classification of overtwisted contact structures on 3-manifolds. Invent. Math. 98 (1989), no. 3, 623–637. 2001 Michael J. Hopkins for: Nilpotence and stable homotopy theory. I. Ann. of Math. (2) 128 (1988), no. 2, 207–241. (with Ethan Devinatz and Jeffrey Smith) The rigid analytic period mapping, Lubin-Tate space, and stable homotopy theory. Bull. Amer. Math. Soc. (N.S.) 30 (1994), no. 1, 76–86. (with Benedict Gross) Equivariant vector bundles on the Lubin-Tate moduli space. Topology and representation theory (Evanston, IL, 1992), 23–88, Contemp. Math., 158, Amer. Math. Soc., Providence, RI, 1994. (with Benedict Gross) Elliptic spectra, the Witten genus and the theorem of the cube. Invent. Math. 146 (2001), no. 3, 595–687. (with Matthew Ando and Neil Strickland) Nilpotence and stable homotopy theory. II. Ann. of Math. (2) 148 (1998), no. 1, 1–49. (with Jeffrey Smith) 2004 David Gabai 2007 Peter Kronheimer and Tomasz Mrowka for: The genus of embedded surfaces in the projective plane. Math. Res. Lett. 1 (1994), no. 6, 797–808. Embedded surfaces and the structure of Donaldson's polynomial invariants. J. Differential Geom. 41 (1995), no. 3, 573–734. Witten's conjecture and property P. Geom. Topol. 8 (2004), 295–310. 2007 Peter Ozsváth and Zoltán Szabó for: Holomorphic disks and topological invariants for closed three-manifolds. Ann. of Math. (2) 159 (2004), no. 3, 1027–1158. Holomorphic disks and three-manifold invariants: properties and applications. Ann. of Math. (2) 159 (2004), no. 3, 1159–1245. Holomorphic disks and genus bounds. Geom. Topol. 8 (2004), 311–334. 2010 Tobias Colding and William Minicozzi II for: The space of embedded minimal surfaces of fixed genus in a 3-manifold. I. Estimates off the axis for disks. Ann. of Math. (2) 160 (2004), no. 1, 27–68. The space of embedded minimal surfaces of fixed genus in a 3-manifold. II. Multi-valued graphs in disks. Ann. of Math. (2) 160 (2004), no. 1, 69–92. The space of embedded minimal surfaces of fixed genus in a 3-manifold. III. Planar domains. Ann. of Math. (2) 160 (2004), no. 2, 523–572. The space of embedded minimal surfaces of fixed genus in a 3-manifold. IV. Locally simply connected. Ann. of Math. (2) 160 (2004), no. 2, 573–615. The Calabi-Yau conjectures for embedded surfaces. Ann. of Math. 
(2) 167 (2008), no. 1, 211–243. 2010 Paul Seidel for: A long exact sequence for symplectic Floer cohomology. Topology 42 (2003), no. 5, 1003–1063. The symplectic topology of Ramanujam's surface. Comment. Math. Helv. 80 (2005), no. 4, 859–881. (with Ivan Smith) Fukaya categories and Picard-Lefschetz theory. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2008. viii+326 pp. Exact Lagrangian submanifolds in simply-connected cotangent bundles. Invent. Math. 172 (2008), no. 1, 1–27. (with Kenji Fukaya and Ivan Smith) 2013 Ian Agol for: Lower bounds on volumes of hyperbolic Haken 3-manifolds. With an appendix by Nathan Dunfield. J. Amer. Math. Soc. 20 (2007), no. 4, 1053–1077. (with Peter Storm and William Thurston) Criteria for virtual fibering. J. Topol. 1 (2008), no. 2, 269–284. Residual finiteness, QCERF and fillings of hyperbolic groups. Geom. Topol. 13 (2009), no. 2, 1043–1073. (with Daniel Groves and Jason Fox Manning) 2013 Daniel Wise for: Subgroup separability of graphs of free groups with cyclic edge groups. Q. J. Math. 51 (2000), no. 1, 107–129. The residual finiteness of negatively curved polygons of finite groups. Invent. Math. 149 (2002), no. 3, 579–617. Special cube complexes. Geom. Funct. Anal. 17 (2008), no. 5, 1551–1620. (with Frédéric Haglund) A combination theorem for special cube complexes. Ann. of Math. (2) 176 (2012), no. 3, 1427–1482. (with Frédéric Haglund) 2016 Fernando Codá Marques and André Neves for: Min-max theory and the Willmore conjecture. Ann. of Math. (2) 179 (2014), no. 2, 683–782. Min-max theory and the energy of links. J. Amer. Math. Soc. 29 (2016), no. 2, 561–578. (with Ian Agol) Existence of infinitely many minimal hypersurfaces in positive Ricci curvature. Invent. Math. 209 (2017), no. 2, 577–616. 2019 Xiuxiong Chen, Simon Donaldson and Song Sun for: Kähler-Einstein metrics on Fano manifolds. I: Approximation of metrics with cone singularities. J. Amer. Math. Soc. 28 (2015), no. 1, 183–197. Kähler-Einstein metrics on Fano manifolds. II: Limits with cone angle less than 2π. J. Amer. Math. Soc. 28 (2015), no. 1, 199–234. Kähler-Einstein metrics on Fano manifolds. III: Limits as cone angle approaches 2π and completion of the main proof. J. Amer. Math. Soc. 28 (2015), no. 1, 235–278. 2022 Michael A. Hill, Michael J. Hopkins, and Douglas Ravenel for: On the nonexistence of elements of Kervaire invariant one. Ann. of Math. (2) 184 (2016), no. 1, 1–262. 2025 Soheyla Feyzbakhsh and Richard Thomas for: Curve counting and S-duality, Épijournal de Géométrie Algébrique - arXiv:2007.03037 Rank r DT theory from rank 0, Duke Mathematical Journal - arXiv:2103.02915 Rank r DT theory from rank 1, Journal of the American Mathematical Society - arXiv:2108.02828 See also List of mathematics awards References External links Veblen prize home page Awards of the American Mathematical Society Awards established in 1964 Triennial events Geometry Topology 1964 establishments in the United States
Oswald Veblen Prize in Geometry
Physics,Mathematics
2,981
68,531,530
https://en.wikipedia.org/wiki/Ocean%20optics
Ocean optics is the study of how light interacts with water and the materials in water. Although research often focuses on the sea, the field broadly includes rivers, lakes, inland waters, coastal waters, and large ocean basins. How light acts in water is critical to how ecosystems function underwater. Knowledge of ocean optics is needed in aquatic remote sensing research in order to understand what information can be extracted from the color of the water as it appears from satellite sensors in space. The color of the water as seen by satellites is known as ocean color. While ocean color is a key theme of ocean optics, optics is a broader term that also includes the development of underwater sensors using optical methods to study much more than just color, including ocean chemistry, particle size, imaging of microscopic plants and animals, and more. Key terminology Optically deep Where waters are "optically deep," the bottom does not reflect incoming sunlight, and the seafloor cannot be seen by humans or satellites. The vast majority of the world's oceans by area are optically deep. Optically deep water can still be relatively shallow water in terms of total physical depth, as long as the water is very turbid, such as in estuaries. Optically shallow Where waters are "optically shallow," the bottom reflects light and often can be seen by humans and satellites. Here, ocean optics can also be used to study what is under the water. Based on what color they appear to sensors, researchers can map habitat types, including macroalgae, corals, seagrass beds, and more. Mapping shallow-water environments requires knowledge of ocean optics because the color of the water must be accounted for when looking at the color of the seabed environment below. Inherent optical properties (IOPs) Inherent optical properties (IOPs) depend on what is in the water. These properties stay the same no matter what the incoming light is doing (daytime or nighttime, low sun angle or high sun angle). Absorption Water with large amounts of dissolved substances, such as lakes with large amounts of colored dissolved organic matter (CDOM), experiences high light absorption. Phytoplankton and other particles also absorb light. Scattering Areas with sea ice, estuaries with large amounts of suspended sediments, and lakes with large amounts of glacial flour are examples of water bodies with high light scattering. All particles scatter light to some extent, including plankton, minerals, and detritus. Particle size affects how much scattering happens at different colors; for example, very small particles scatter blue light far more strongly than longer wavelengths (scattering increases roughly as the inverse fourth power of wavelength), which is why the ocean and the sky are generally blue (Rayleigh scattering). Without scattering, light would not "go" anywhere (outside of a direct beam from the sun or other source) and we would not be able to see the world around us. Attenuation Attenuation in water, also called beam attenuation or the beam attenuation coefficient, is the sum of all absorption and scattering. Attenuation of a light beam in one specific direction can be measured with an instrument called a transmissometer. Apparent optical properties (AOPs) Apparent optical properties (AOPs) depend on what is in the water (IOPs) and what is going on with the incoming light from the Sun.
AOPs depend most strongly on IOPs and only depend somewhat on the incoming light, known as the "light field." Characteristics of the light field that can affect AOP measurements include the angle at which light hits the water surface (high in the sky vs. low in the sky, and from which compass direction) and the weather and sky conditions (clouds, atmospheric haze, fog, or sea state, i.e. the roughness of the water surface). Remote sensing reflectance (Rrs) Remote sensing reflectance (Rrs) is a measure of light radiating out from beneath the ocean surface at all colors, normalized by incoming sunlight at all colors. Because Rrs is a ratio, it is slightly less sensitive to what is going on with the light field (such as the angle of the sun or atmospheric haziness). Rrs is measured using two paired spectroradiometers that simultaneously measure light coming in from the sky and light coming up from the water below at many wavelengths. Since it is a measurement of a light-to-light ratio, the energy units cancel out, and Rrs has units of per steradian (sr−1) due to the angular nature of the measurement (upwelling light is measured at a specific angle, and incoming light is measured on a flat plane from a hemispherical area above the water surface). Light attenuation coefficient (Kd) Kd is the diffuse (or downwelling) coefficient of light attenuation, also called simply light attenuation, the vertical extinction coefficient, or the extinction coefficient. Kd describes the rate of decrease of light with depth in water, in units of per meter (m−1). The "d" stands for downwelling light, which is light coming from above the sensor over a hemisphere (like the top half of a basketball). Scientists sometimes use Kd to describe the decrease in the total visible light available for plants in terms of photosynthetically active radiation (PAR), called "Kd(PAR)." In other cases, Kd can describe the decrease in light with depth over a spectrum of colors or wavelengths, usually written as "Kd(λ)." At one color (one wavelength), Kd can describe the decrease with depth of that single color, such as the decrease in blue light at the wavelength 490 nm, written as "Kd(490)." In general, Kd is calculated using Beer's Law and a series of light measurements collected from just under the water surface down through the water at many depths (a worked sketch is given below). Closure "Closure" refers to how optical oceanographers measure the consistency of models and measurements. Models refer to anything that is not explicitly measured in the water, including satellite-derived variables that are estimated using empirical relationships (for example, satellite-derived chlorophyll-a concentration is estimated from the ratios between green and blue remote sensing reflectance using an empirical relationship). Closure includes measurement closure, model closure, model-data closure, and scale closure. Where model-data closure experiments show misalignment between data and models, the misalignment may be due to measurement error, issues with the model, both, or some other external factor. Focus areas Ocean optics has been applied to study topics like primary production, phytoplankton, zooplankton, shallow-water habitats like seagrass beds and coral reefs, marine biogeochemistry, heating of the upper ocean, and carbon export to deep waters by way of the ocean biological pump. The portion of the electromagnetic spectrum usually involved in ocean optics is ultraviolet through infrared, about 300 nm to less than 2000 nm in wavelength.
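As a worked illustration of the Kd calculation sketched above, the following Python snippet fits Beer's Law to a synthetic profile of downwelling irradiance. The depths, irradiance values and the choice of the 490 nm band are invented for the example; this is a sketch of the method, not code from any of the instruments or archives mentioned in this article.

```python
# Estimating Kd(490) from a depth profile of downwelling irradiance,
# following Beer's Law: Ed(z) = Ed(0-) * exp(-Kd * z).
# The profile values below are synthetic, made up for illustration.
import numpy as np

depths = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 20.0])    # m below the surface
ed     = np.array([95.0, 86.0, 64.0, 37.0, 22.0, 13.0]) # irradiance at 490 nm

# Taking logs linearizes Beer's Law: ln Ed(z) = ln Ed(0-) - Kd * z,
# so Kd is minus the slope of a straight-line fit.
slope, intercept = np.polyfit(depths, np.log(ed), 1)
kd_490 = -slope                 # per meter (m^-1)
ed_surface = np.exp(intercept)  # extrapolated irradiance just below the surface

print(f"Kd(490) ~ {kd_490:.3f} m^-1, Ed(0-) ~ {ed_surface:.1f}")
```

Because the logarithm turns the exponential decay into a straight line, Kd(490) falls out as minus the fitted slope; with these made-up values the estimate is roughly 0.10 m−1, a plausible magnitude for coastal water.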
Common optical sensors used in oceanography The most widely used optical oceanographic sensors are PAR sensors, chlorophyll-a fluorescence sensors (fluorometers), and transmissometers. These three instruments are frequently mounted on CTD (conductivity-temperature-depth) rosette samplers. These instruments have been used for many years on CTD-rosettes in global repeat oceanographic surveys like the CLIVAR GO-SHIP campaign. Particle size in the ocean Optical instruments are often used to measure the size spectrum of particles in the ocean. For example, phytoplankton organisms can range in size from a few microns (micrometers, μm) to hundreds of microns. The size of particles is often measured to estimate how quickly particles will sink, and therefore how efficiently plants can sequester carbon in the ocean's biological pump. Imaging of ocean particles and organisms Scientists study individual tiny objects such as plankton and detritus particles using flow cytometry and in situ camera systems. Flow cytometers measure sizes and take photographs of individual particles flowing through a tube system; one such instrument is the Imaging FlowCytoBot (IFCB). In situ camera systems are deployed over the side of a research vessel, alone or attached to other equipment, and they capture photographs of the water itself to image the particles present in the water; one such instrument is the Underwater Vision Profiler (UVP). Other imaging technologies used in the ocean include holography and particle imaging velocimetry (PIV), which uses 3D video footage to track the movement of underwater particles. Research in support of satellite remote sensing Ocean optics research done "in situ" (from research vessels, small boats, or on docks and piers) supports research that uses satellite data. In situ optical measurements provide a way to: 1) calibrate satellite sensors when they are just beginning to collect data, 2) develop algorithms to derive products or variables like chlorophyll-a concentration, and 3) validate data products derived from satellites. Using satellite data, researchers estimate things like particle size, carbon, water quality, water clarity, and bottom type based on the color profile as seen by satellite; all of these estimations (i.e., models) must be validated by comparing them to optical measurements made in situ. In situ data are often available from publicly accessible data libraries like the SeaBASS data archive. Major contributing scientists Oceanographers, physicists, and other scientists who have made major contributions to the field of ocean optics include (incomplete list): David Antoine, Marcel Babin, Paula Bontempi, Emmanuel Boss, Annick Bricaud, Kendall Carder, Ivona Cetinic, Edward Fry, Heidi Dierssen, David Doxaran, Gene Carl Feldman, Howard Gordon, Chuanmin Hu, Nils Gunnar Jerlov, George Kattawar, John Kirk, ZhongPing Lee, Hubert Loisel, Stephane Maritorena, Michael Mishchenko, Curtis Mobley, Bruce Monger, Andre Morel, Michael Morris, Norm Nelson, Mary Jane Perry, Rudolph Preisendorfer, Louis Prieur, Chandrasekhara Raman, Collin Roesler, Rüdiger Röttgers, David Siegel, Raymond Smith, Heidi Sosik, Dariusz Stramski, Michael Twardowski, Talbot Waterman, Jeremy Werdell, Ken Voss, Charles Yentsch, and Ronald Zaneveld. Education While ocean optics is an interdisciplinary field of study that applies to a wide range of topics, it is not often taught as a course in graduate programs for marine science and oceanography.
Two summer-term courses have been developed for graduate students from many different institutions. First, there is a summer lecture series operated by the International Ocean Colour Coordinating Group (IOCCG), which usually takes place in France. Second, there is an ongoing course in the United States, called the "Optical Oceanography Class" or "Ocean Optics Class," held first in Washington State and later in Maine, which has been running continuously since 1985. For independent learning, Curt Mobley, Collin Roesler, and Emmanuel Boss wrote the Ocean Optics Web Book as an open-access online guide. See also Related fields and topics: Atmospheric optics Color of water Electromagnetic spectrum History of optics Oceanography Ocean color Optical depth Spectral color Transparency and translucency Visible spectrum Water clarity Water remote sensing Water quality Inherent and apparent optical properties and in-water methods: Absorption (electromagnetic radiation) Argo (oceanography) Attenuation coefficient Beer-Lambert Law Marine optical buoy Scattering Secchi disk Remote sensing and radiometric methods: Albedo Atmospheric correction NASA Earth Science Spectralon References Further reading Ocean Optics Web Book Oceanography Applied and interdisciplinary physics Scattering, absorption and radiative transfer (optics) Optics Marine biology Aquatic ecology Biological oceanography Water Earth sciences Earth observation in-situ sensors
Ocean optics
Physics,Chemistry,Biology,Environmental_science
2,443
69,377,164
https://en.wikipedia.org/wiki/Nordic%20Rheology%20Society
The Nordic Rheology Society (NRS) is a professional organization that promotes and propagates rheology in the Nordic countries and beyond. The NRS provides a forum for academic and industrial researchers to discuss their ideas and to present their research. History The predecessor of the NRS, the Swedish Society of Rheology, was founded in 1956 as a part of the Swedish National Committee for Mechanics. Erik Forslind was elected as the first president, Hilding Faxén as vice-president and Josef Kubát as secretary. The Swedish Society of Rheology became a full member of the International Committee on Rheology (ICR) in 1969, and it organized the VIIth International Congress on Rheology in Gothenburg in 1976. The name Swedish Society of Rheology was changed to Nordic Rheology Society in 1992 with the aim of increasing Nordic cooperation. The first president of the NRS was Carl Klason. Since 1992, the NRS has annually organized the Nordic Rheology Conference. In addition, the NRS has hosted the Annual European Rheology Conference (AERC) in 2010 (Gothenburg), 2017 (Copenhagen) and 2021 (online). During the COVID-19 pandemic, the NRS pioneered the use of avatar-based virtual event platforms in scientific conferences. Conferences and publications The annual scientific meeting of the NRS, the Nordic Rheology Conference (NRC), rotates among the Nordic countries. It typically features scientific presentations from various fields of rheology, a technical exhibition and a rheology short course, as well as a social program. The Annual Transactions of the Nordic Rheology Society is the official publication of the NRS, and it features papers presented at NRCs. Furthermore, the NRS occasionally organizes local rheology seminars in the Nordic countries. Awards The NRS presents two awards to outstanding rheologists who are active in the Nordic countries: The Carl Klason Rheology Award The Young Rheologists Rheology Award References External links NRS website Rheology Scientific organizations established in 1992 1992 establishments in Sweden Scientific organizations based in Sweden
Nordic Rheology Society
Chemistry
435
23,893,565
https://en.wikipedia.org/wiki/Electronic%20Journal%20of%20Human%20Sexuality
The Electronic Journal of Human Sexuality was a peer-reviewed academic journal that published articles, dissertations, theses, posters, and other academic materials about all aspects of human sexuality. It was published by David S. Hall for the Institute for Advanced Study of Human Sexuality from 1998 until 2014, with 17 volumes in total. The founder and original editor-in-chief of the journal was David S. Hall (Institute for Advanced Study of Human Sexuality), who was later succeeded as editor by Peter B. Anderson (Walden University). Abstracting and indexing The journal was abstracted and indexed in CINAHL and Scopus. References External links Sexology journals Institute for Advanced Study of Human Sexuality Academic journals established in 1998 Publications disestablished in 2014 English-language journals Continuous journals
Electronic Journal of Human Sexuality
Biology
160
21,889,230
https://en.wikipedia.org/wiki/Chris%20Pascoe
Christopher Paul Pascoe (born 26 April 1966) is an English writer of humorous books and a magazine columnist. Career Books His first two books, A Cat Called Birmingham (Hodder & Stoughton, 2005) and You Can Take the Cat Out of Slough (Hodder & Stoughton, 2007), tell the story of a disaster-prone cat named Birmingham. A Cat Called Birmingham has since been translated into French and Chinese. In France, the book is titled Monsieur Chatastrophe. The book caused controversy in Birmingham, where it was seen as a slur on the city by a London-based writer. You Can Take the Cat Out of Slough has also been released in France (October 2009), titled Le Journal de Monsieur Chatastrophe. A Cat Called Birmingham and You Can Take the Cat Out of Slough have featured in Kindle's Top-Ten Cat books, and A Cat Called Birmingham is now in its tenth U.K. edition. You Can Take the Cat Out of Slough was re-released in paperback in 2015. In 2009, Pascoe signed with Anova Books, and Death, Destruction and a Packet of Peanuts, a humorous factual/historical tour of the English Civil War battlefields and their pubs, was released on Anova's Portico imprint in July 2010. Confessions of a Cat Sitter, based on the long-running Your Cat magazine series, was released in January 2016. The World's Daftest Rabbit, a collection of his My Weekly magazine columns, was released by My Weekly in September 2017, and The World's Craziest Cats in September 2018. Magazine writing Pascoe is now a writer with various U.K. and U.S. magazines, and is a columnist for the U.K. national magazines My Weekly and Your Cat. See also List of English writers List of humorists References External links Pascoe's official website Profile Miacis 1966 births Living people 20th-century English male writers 20th-century English journalists 21st-century English male writers 21st-century English journalists Animal writers British magazine writers English humorists English male journalists
Chris Pascoe
Biology
440
370,753
https://en.wikipedia.org/wiki/Stereoscope
A stereoscope is a device for viewing a stereoscopic pair of separate images, depicting left-eye and right-eye views of the same scene, as a single three-dimensional image. A typical stereoscope provides each eye with a lens that makes the image seen through it appear larger and more distant and usually also shifts its apparent horizontal position, so that for a person with normal binocular depth perception the edges of the two images seemingly fuse into one "stereo window". In current practice, the images are prepared so that the scene appears to be beyond this virtual window, through which objects are sometimes allowed to protrude, but this was not always the custom. A divider or other view-limiting feature is usually provided to prevent each eye from being distracted by also seeing the image intended for the other eye. Most people can, with practice and some effort, view stereoscopic image pairs in 3D without the aid of a stereoscope, but the physiological depth cues resulting from the unnatural combination of eye convergence and focus required will be unlike those experienced when actually viewing the scene in reality, making an accurate simulation of the natural viewing experience impossible and tending to cause eye strain and fatigue. Although more recent devices such as Realist-format 3D slide viewers, the View-Master, or virtual reality headsets are also stereoscopes, the word is now most commonly associated with viewers designed for the standard-format stereo cards that enjoyed several waves of popularity from the 1850s to the 1930s as a home entertainment medium. Devices such as polarized, anaglyph and shutter glasses, which are used to view two actually superimposed or intermingled images rather than two physically separate images, are not categorized as stereoscopes. History Wheatstone stereoscope The earliest stereoscopes, "both with reflecting mirrors and with refracting prisms", were invented by Sir Charles Wheatstone and constructed for him by optician R. Murray in 1832. Herbert Mayo briefly described Wheatstone's discovery in his book Outlines of Human Physiology (1833) and claimed that Wheatstone was about to publish an essay about it. It was only one of many projects of Wheatstone's, and he first presented his findings on 21 June 1838 to the Royal Society of London. In this presentation he used a pair of mirrors at 45-degree angles to the user's eyes, each reflecting a picture located off to the side. It demonstrated the importance of binocular depth perception by showing that when two pictures simulating left-eye and right-eye views of the same object are presented so that each eye sees only the image designed for it, but apparently in the same location, the brain will fuse the two and accept them as a view of one solid three-dimensional object. Wheatstone's stereoscope was introduced in the year before the first practical photographic processes became available, so initially drawings were used. The mirror type of stereoscope has the advantage that the two pictures can be very large if desired. Brewster stereoscope Contrary to a common assertion, David Brewster did not invent the stereoscope, as he himself was often at pains to make clear. A rival of Wheatstone, Brewster credited the invention of the device to a Mr.
Elliot, a "Teacher of Mathematics" from Edinburgh, who, according to Brewster, conceived of the idea as early as 1823 and, in 1839, constructed "a simple stereoscope without lenses or mirrors", consisting of a wooden box long, wide, and high, which was used to view drawn landscape transparencies, since photography had yet to become widespread. Brewster's personal contribution was the suggestion to use lenses for uniting the dissimilar pictures in 1849; and accordingly the lenticular stereoscope (lens-based) may fairly be said to be his invention. This allowed a reduction in size, creating hand-held devices, which became known as Brewster Stereoscopes, much admired by Queen Victoria when they were demonstrated at the Great Exhibition of 1851. Brewster was unable to find in Britain an instrument maker capable of working with his design, so he took it to France, where the stereoscope was improved by Jules Duboscq who made stereoscopes and stereoscopic daguerreotypes, and a famous picture of Queen Victoria that was displayed at The Great Exhibition. Almost overnight a 3D industry developed and 250,000 stereoscopes were produced and a great number of stereoviews, stereo cards, stereo pairs, or stereographs were sold in a short time. Stereographers were sent throughout the world to capture views for the new medium and feed the demand for 3D images. Cards were printed with these views often with explanatory text when the cards were looked at through the double-lensed viewer, sometimes also called a stereopticon, a common misnomer. Holmes stereoscope In 1861 Oliver Wendell Holmes created and deliberately did not patent a handheld, streamlined, much more economical viewer than had been available before. The stereoscope, which dates from the 1850s, consisted of two prismatic lenses and a wooden stand to hold the stereo card. This type of stereoscope remained in production for a century and there are still companies making them in limited production currently. Multiple view stereoscope Multiple view stereoscopes allow viewing multiple stereoscopic images in sequence by turning a knob, crank, or pushing down a lever. The first design was patented by Antoine Claudet in 1855, but the design of Alexander Beckers from 1857 formed the basis for many revolving stereoscopes that were manufactured from the 1860s. The images are placed in holders that are attached to a rotating belt. The belt can usually hold 50 paper card or glass stereoviews, but there are also large floor standing models for 100 or 200 views. A more advanced multiple view stereoscope is only intended for glass slides and was especially popular in France, as the printing of stereo images on glass was a French specialty and popular until the 1930s. Most devices were manufactured in France, but also in Germany by ICA and Ernemann. The glass slides are placed in a bakelite or wooden tray. Turning a crank (or pushing down a lever) will lift a slide from the tray and brings it into viewing position. Turning further will place the slide back in the tray and moves the tray over a rail to select the next slide. The most sophisticated and well known design was the Taxiphote by Jules Richard, patented in 1899. Modern use In the mid-20th century the View-Master stereoscope (patented 1939), with its rotating cardboard disks containing image pairs, was popular first for 'virtual tourism' and then as a toy. In 2010, Hasbro started producing a stereoscope designed to hold an iPhone or iPod Touch, called the My3D. 
In 2014, Google released the template for a papercraft stereoscope called Google Cardboard. Apps on a mobile phone substitute for stereo cards; these apps can also sense rotation, expanding the stereoscope's capacity into that of a full-fledged virtual reality device. The underlying technology is otherwise unchanged from earlier stereoscopes. Several fine arts photographers and graphic artists have produced, and continue to produce, original artwork to be viewed using stereoscopes. Principles A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view. The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. This recreates the way in which, in natural vision, each eye sees the object from a slightly different angle, since the eyes are separated by several inches, which is what gives humans natural depth perception. Each picture is focused by a separate lens, and by showing each eye a photograph taken from a viewpoint several inches from the other and focused on the same point, the stereoscope recreates the natural effect of seeing things in three dimensions. A moving image extension of the stereoscope has a large vertically mounted drum containing a wheel upon which are mounted a series of stereographic cards which form a moving picture. The cards are restrained by a gate, and when sufficient force is available to bend the card, it slips past the gate and into view, obscuring the preceding picture. These coin-operated devices were found in arcades in the late 19th and early 20th centuries and were operated by the viewer using a hand crank. These devices can still be seen and operated in some museums specializing in arcade equipment. The stereoscope offers several advantages: Using positive curvature (magnifying) lenses, the focus point of the image is changed from its short distance (about 30 to 40 cm) to a virtual distance at infinity. This allows the focus of the eyes to be consistent with the parallel lines of sight, greatly reducing eye strain (a thin-lens sketch of this principle appears at the end of this entry). The card image is magnified, offering a wider field of view and the ability to examine the detail of the photograph. The viewer provides a partition between the images, avoiding a potential distraction to the user. A stereo transparency viewer is a type of stereoscope that offers similar advantages, e.g. the View-Master. Disadvantages of stereo cards, slides or any other hard copy or print are that the two images are likely to receive differing wear, scratches and other decay. This results in stereo artifacts when the images are viewed. These artifacts compete in the mind, resulting in a distraction from the 3D effect, eye strain and headaches. See also Stereo slide viewer Precursors of film Les Diableries View-Master Kinematoscope Haploscope Zograscope Virtual reality headset References External links University of Washington Libraries Digital Collections Stereocard Collection New York Public Library Robert N.
Dennis Collection of Stereoscopic Views 20th Century Stereo Viewer Reference site Volkan Yuksel's Extraordinary Cross Eyed 3D Stereo Pair Collection from Planet Earth American University in Cairo Rare Books and Special Collections Digital Library Underwood & Underwood Egypt Stereoviews Collection Panama Canal Stereographs Rees Stereograph Collection from the Digital Library of Georgia Stereographic Views of Louisville and Beyond, 1850s-1930 from the University of Louisville Archives & Special Collections Audiovisual introductions in 1838 Optical devices Optical illusions Optical toys Photography equipment 3D imaging English inventions Stereoscopy
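The lens principle from the Principles section above (a card at or just inside the lens's focal plane produces a distant virtual image, relaxing the eye) can be sketched with the thin-lens equation. This is a hedged illustration: the focal length, the card distances and the standard 25 cm near-point convention are assumptions chosen for the example, not specifications of any actual stereoscope.

```python
# Thin-lens sketch: 1/f = 1/d_o + 1/d_i (real-is-positive convention).
# A negative image distance means a virtual image on the viewer's side.
def virtual_image_distance_cm(f, d_o):
    if abs(d_o - f) < 1e-9:
        return float("inf")  # object exactly at the focal plane -> image at infinity
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 13.0                        # lens focal length in cm (illustrative)
for d_o in (13.0, 12.5, 12.0):  # card at, then just inside, the focal plane
    d_i = virtual_image_distance_cm(f, d_o)
    where = "infinity" if d_i == float("inf") else f"{abs(d_i):.0f} cm away (virtual)"
    print(f"card at {d_o} cm -> image at {where}")

# Relaxed-eye angular magnification for an image at infinity: M = 25 cm / f.
print(f"angular magnification ~ {25.0 / f:.1f}x")
```

Placing the card exactly at the focal plane sends the virtual image to infinity; moving it slightly closer to the lens pulls the image in to a comfortable few metres, and the relaxed-eye magnification is roughly 25 cm divided by the focal length.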
Stereoscope
Physics,Materials_science,Engineering
2,088
302,771
https://en.wikipedia.org/wiki/Art%20glass
Art glass is a subset of glass art, the latter covering the whole range of art made from glass. Art glass normally refers only to pieces made since the mid-19th century, and typically to those made purely as sculpture or decorative art, with no main utilitarian function such as serving as a drinking vessel, though of course stained glass keeps the weather out, and bowls may still be useful. The term is most used of American glass, where the style is "the logical outcome of the American demand for novelty during the 19th century and was characterized by elaborate form and exotic finish", but not always the highest quality of execution. There was a great interest in complex colour effects and painted enamelled glass. For art historians the "art glass" phase replaced the "Brilliant Period" of High-Victorian heavy decoration, and was in turn replaced around 1900 by Art Nouveau glass, but the term may still be used for marketing purposes to refer to contemporary products. In fact the "Brilliant Period" style, which relied on deeply cut glass, continued to be made until about 1915, and sometimes thereafter. Glass is sometimes combined with other materials. Techniques include kiln-formed glass (glass placed into a kiln so that it moulds into a shape), glassblowing, sandblasting, copper-foil glasswork, and painted and engraved glass. In general the term is restricted to relatively modern pieces made by people who see themselves as artists, who have chosen to work in the medium of glass and who both design and make their own pieces as fine art, rather than by traditional glassworker craftsmen, who often produce pieces designed by others, though their pieces may certainly be works of art. Studio glass is another term often used for modern glass made for artistic purposes. Art glass has grown in popularity in recent years, with many artists becoming famous for their work; as a result, more colleges are offering courses in glass work. During the early 20th century art glass was generally made by teams of factory workers, taking glass from furnaces containing a thousand or more pounds of molten glass. This form of art glass, of which Tiffany and Steuben in the U.S., Gallé in France, Hoya Crystal in Japan, Royal Leerdam Crystal in the Netherlands and Orrefors and Kosta Boda in Sweden are perhaps the best known, grew out of the factory system in which all glass objects were hand or mould blown by teams. Most antique art glass was made in factories, particularly in the UK, the United States, and Bohemia, where items were made to a standard, or "pattern". This would seem contrary to the idea that art glass is distinctive and shows individual skill. However, the importance of decoration, in the Victorian era in particular, meant that much of the artistry lay with the decorator. Any assumption today that factory-made items were necessarily made by machine is incorrect. Up to about 1940, most of the processes involved in making decorative art glass were performed by hand. Factory differentiation and distinctiveness Manufacturers got around the problem of an inherent similarity in their products in various ways. First, they would frequently change designs according to demand. This was especially so in the export-dependent factories of Bohemia, where salesmen would report sales trends back to the factory during each trip. Second, the decoration for mid- and lower-market items, often done by contracted "piece" workers, was often a variation on a theme.
Such was the skill of these subcontractors that a reasonable standard of quality and a high rate of output were generally maintained. Finally, a high degree of differentiation could be gained from the multiplication of shapes, colours, and decorative designs, yielding many different combinations. Concurrently, from the same factories came distinctive, artistic items produced in more limited quantities for the upper-market consumer. These were decorated in-house, where decorators could work closely with designers and management in order to produce a piece that was profitable. Usable art glass Many items that are now considered art glass were originally intended for use. Often that use has ceased to be relevant, but even if not, in the Victorian era and for some decades beyond useful items were often decorated to such a high degree that they are now appreciated for their artistic or design merits. Some art glass retains its original purpose but has come to be appreciated more for its art than for its use. Collectors of antique perfume bottles, for example, tend to display their items empty. As items of packaging, these bottles would originally have been used and thus would not ordinarily have been considered art glass. However, because of fashion trends, then as now, producers supplied goods in beautiful packaging. Lalique's Art Nouveau glass and Art Deco designs by Josef Hoffmann, among others, have come to be considered art glass due to their stylish and highly original decorative designs. Moulded art glass There has been a growing recognition that moulded, mass-produced glass with little or no decoration but high artistic and fabrication quality, such as that produced by Lalique, should be considered art glass. Decorating techniques Colour: Various colours inter-mixed or otherwise incorporated Texture: Frosting, satinizing, glue-chip, overshot and sandblasting Surfaces: Overlays, cameo, cut-back, cutting and engraving Refined glassware Up-market refined glassware, usually lead crystal, is highly decorated and is revered for its high quality of workmanship, the purity of the metal (the molten glass mixture), and the decorative techniques used, most often cutting and gilding. Both techniques continue to be used in the decoration of many pieces made from lead crystal, and nowadays these pieces are regarded as art glass. Cut glass Cut glass is most often produced by hand, but automation is now becoming more common. Some designs show artistic flair, but most tend to be regular, geometric, and repetitious. Occasionally, the design can be considered a "pattern" to be replicated as exactly as possible, with the main purpose being to accentuate the refractive qualities, or "sparkle", of the crystal. Art cut A clear exception could be made for the highly distinctive cut crystal designs which were produced in limited quantities by designers of note. Examples are the designs of Keith Murray for Stevens & Williams and those of Clyne Farquharson for John Walsh Walsh. A relatively new term is coming into use for this genre: "Art Cut". See also Caneworking Lead glass (Crystal) Murrine Studio glass American Fancy Glass history Moss agate glass Glass art Vitreography (art form) Notes References Osborne, Harold (ed), The Oxford Companion to the Decorative Arts, 1975, OUP. Glass production Glass art
Art glass
Materials_science,Engineering
1,351