Dataset columns (schema summary):
  id: int64 (values 580 to 79M)
  url: string (lengths 31 to 175)
  text: string (lengths 9 to 245k)
  source: string (lengths 1 to 109)
  categories: string (160 classes)
  token_count: int64 (values 3 to 51.8k)
41,331,583
https://en.wikipedia.org/wiki/Erbium%20tetraboride
Erbium tetraboride is a boride of the lanthanide metal erbium. It is hard and has a high melting point. Industrial applications of erbium boride include use in semiconductors, the blades of gas turbines, and the nozzles of rocket engines. References Borides Erbium compounds
Erbium tetraboride
Chemistry
70
41,060,071
https://en.wikipedia.org/wiki/Pandora%20%28fungus%29
Pandora is a genus of fungi within the order Entomophthorales; this placement has been supported by molecular phylogenetic analysis (Gryganskyi et al. 2012). It was initially established by Polish mycologist Andrzej Batko (1933–1997) as a subgenus of Zoophthora, and American mycologist Richard A. Humber later raised it to genus level. The genus name Pandora is derived from the Latin word pando, which means “to become curved” or “to sag”, and the generic suffix “-ra”, describing the conidia, which often show weakly outlined bilateral symmetry. They are slightly flattened on one (abdominal) side, more convex on the opposite (dorsal) side, and on the lateral sides somewhat curved towards the abdominal side and slightly asymmetrical. The genus has a cosmopolitan distribution. It is best known by its representative Pandora neoaphidis, which acts as an obligate pathogen of various species of aphids. This widespread species is often found to be the most common fungal insect pathogen in local aphid communities (e.g. in surveys from Argentina, Slovakia, and China). It has therefore been studied for biological control, including use against the green peach aphid, Myzus persicae (Homoptera: Aphididae), a pest of spinach (Spinacia oleracea) in Arkansas, United States. Up to 95 aphid species worldwide have been found to be infected by the fungus, in countries such as France (Rabasse et al. 1983), Mexico (Remaudiere and Hennebert, 1980), Portugal and Spain (Humber, 1986), and Japan (Kobayashi et al., 1984). Panicum miliaceum (broomcorn millet) was trialled in 2003 as a laboratory production substrate for the fungus. However, difficulties with mass production of infectious spores in vitro, and with formulation and storage as an easily applicable commercial product, had as of 2012 prevented its direct use as a biological control agent. There is limited evidence that the ladybird Harmonia axyridis, which is invasive in America and Europe, has an advantage over native ladybird species because it feeds more on Pandora-infested aphid cadavers. Pandora formicae is a rare example of an entomophthoralean fungus that has adapted to exclusively infect social insects, such as the wood ant Formica polyctena. The proportion of dead ant bodies with resting spores increased from late summer throughout autumn, which suggests that these fungal spores are the main overwintering fungal structures. Pandora sp. nov. inedit. (ARSEF13372) is a recently isolated species with high potential for use in psyllid pest control; experiments on its biomass production are being conducted to assess its usefulness. Species As accepted by Species Fungorum; Pandora aleurodis Pandora bibionis Pandora blunckii Pandora borea Pandora brahminae Pandora bullata Pandora dacnusae Pandora delphacis Pandora dipterigena Pandora echinospora Pandora formicae Pandora gloeospora Pandora guangdongensis Pandora heteropterae Pandora kondoiensis Pandora lipae Pandora longissima Pandora minutispora Pandora muscivora Pandora myrmecophaga Pandora neoaphidis Pandora nouryi Pandora phalangicida Pandora philonthi Pandora phyllobii Pandora poloniae-majoris Pandora psocopterae Pandora sciarae Pandora shaanxiensis Pandora terrestris Pandora uroleuconii Former species; P. americana = Furia americana, Entomophthoraceae P. athaliae = Zoophthora athaliae, Entomophthoraceae P. calliphorae = Entomophthora calliphorae, Entomophthoraceae P. chironomi = Erynia chironomi, Entomophthoraceae P. cicadellis = Erynia cicadellis, Entomophthoraceae P. suturalis = Zoophthora suturalis, Entomophthoraceae References Entomophthorales Zygomycota genera
Pandora (fungus)
Biology
891
22,387,613
https://en.wikipedia.org/wiki/List%20of%20highly%20toxic%20gases
Many gases have toxic properties, which are often assessed using the LC50 (median lethal concentration) measure. In the United States, many of these gases have been assigned an NFPA 704 health rating of 4 (may be fatal) or 3 (may cause serious or permanent injury), and/or exposure limits (TLV, TWA/PEL, STEL, or REL) determined by the ACGIH professional association. Some, but by no means all, toxic gases are detectable by odor, which can serve as a warning. Among the best known toxic gases are carbon monoxide, chlorine, nitrogen dioxide and phosgene. Definitions Toxic: a chemical that has a median lethal concentration (LC50) in air of more than 200 parts per million (ppm) but not more than 2,000 parts per million by volume of gas or vapor, or more than 2 milligrams per liter but not more than 20 milligrams per liter of mist, fume or dust, when administered by continuous inhalation for 1 hour (or less if death occurs within 1 hour) to albino rats weighing between 200 and 300 grams each. Highly Toxic: a gas that has a LC50 in air of 200 ppm or less. NFPA 704: Materials that, under emergency conditions, can cause serious or permanent injury are given a Health Hazard rating of 3. Their acute inhalation toxicity corresponds to those vapors or gases having LC50 values greater than 1,000 ppm but less than or equal to 3,000 ppm. Materials that, under emergency conditions, can be lethal are given a Health Hazard rating of 4. Their acute inhalation toxicity corresponds to those vapors or gases having LC50 values less than or equal to 1,000 ppm. List See also List of Schedule 1 substances (CWC) EPA list of extremely hazardous substances List of gases Notes References External links OSHA Limits for Air Contaminants OSHA Permissible Exposure Limits California Department of Industrial Relations Permissible Exposure Limits for Chemicals Chemical safety Highly toxic gases G G
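The cut-offs in the definitions above can be restated directly in code. The following Python sketch (function names and the example LC50 value are illustrative, not taken from any regulation or library) applies the 1-hour inhalation LC50 thresholds and the NFPA 704 acute-inhalation criteria exactly as stated:

    def toxicity_class(lc50_ppm):
        # "Highly Toxic": LC50 of 200 ppm or less.
        # "Toxic": more than 200 ppm but not more than 2,000 ppm.
        if lc50_ppm <= 200:
            return "Highly Toxic"
        if lc50_ppm <= 2000:
            return "Toxic"
        return "not classified by these definitions"

    def nfpa704_health_rating(lc50_ppm):
        # Acute inhalation criterion only: 4 if LC50 <= 1,000 ppm,
        # 3 if 1,000 ppm < LC50 <= 3,000 ppm.
        if lc50_ppm <= 1000:
            return 4
        if lc50_ppm <= 3000:
            return 3
        return None

    # Hypothetical gas with a 1-hour LC50 of 150 ppm:
    print(toxicity_class(150), nfpa704_health_rating(150))  # Highly Toxic 4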
List of highly toxic gases
Chemistry,Environmental_science
433
43,667,672
https://en.wikipedia.org/wiki/Ewald%E2%80%93Oseen%20extinction%20theorem
In optics, the Ewald–Oseen extinction theorem, sometimes referred to as just the extinction theorem, is a theorem that underlies the common understanding of scattering (as well as refraction, reflection, and diffraction). It is named after Paul Peter Ewald and Carl Wilhelm Oseen, who proved the theorem in crystalline and isotropic media, respectively, in 1916 and 1915. Originally, the theorem applied to scattering by isotropic dielectric objects in free space; its scope was later greatly extended to encompass a wide variety of bianisotropic media. Overview An important part of optical physics theory is starting with microscopic physics—the behavior of atoms and electrons—and using it to derive the familiar, macroscopic laws of optics. In particular, there is a derivation of how the refractive index works and where it comes from, starting from microscopic physics. The Ewald–Oseen extinction theorem is one part of that derivation (as is the Lorentz–Lorenz equation, etc.). When light traveling in vacuum enters a transparent medium like glass, the light slows down, as described by the index of refraction. Although this fact is famous and familiar, it is actually quite strange and surprising when you think about it microscopically. After all, according to the superposition principle, the light in the glass is a superposition of: The original light wave, and The light waves emitted by oscillating electrons in the glass. (Light is an oscillating electromagnetic field that pushes electrons back and forth, emitting dipole radiation.) Individually, each of these waves travels at the speed of light in vacuum, not at the (slower) speed of light in glass. Yet when the waves are added up, they surprisingly create only a wave that travels at the slower speed. The Ewald–Oseen extinction theorem says that the light emitted by the atoms has a component traveling at the speed of light in vacuum, which exactly cancels out ("extinguishes") the original light wave. Additionally, the light emitted by the atoms has a component which looks like a wave traveling at the slower speed of light in glass. Altogether, the only wave in the glass is the slow wave, consistent with what we expect from basic optics. A more complete description can be found in Classical Optics and its Applications, by Masud Mansuripur. A proof of the classical theorem can be found in Principles of Optics, by Born and Wolf, and a proof of its extension has been presented by Akhlesh Lakhtakia. Derivation from Maxwell's equations Introduction When an electromagnetic wave enters a dielectric medium, it excites (resonates) the material's electrons, whether they are free or bound, setting them into a vibratory state with the same frequency as the wave. These electrons will in turn radiate their own electromagnetic fields as a result of their oscillation (EM fields of oscillating charges). Due to the linearity of Maxwell's equations, one expects the total field at any point in space to be the sum of the original field and the field produced by the oscillating electrons. This result, however, seems at odds with the single wave one actually observes in the dielectric, which moves at a speed of c/n, where n is the medium's index of refraction. The Ewald–Oseen extinction theorem seeks to resolve this disconnect by demonstrating how the superposition of these two waves reproduces the familiar result of a wave that moves at a speed of c/n. Derivation The following is a derivation based on a work by Ballenegger and Weber.
Let's consider a simplified situation in which a monochromatic electromagnetic wave is normally incident on a medium filling half the space in the region z>0 as shown in Figure 1. The electric field at a point in space is the sum of the electric fields due to all the various sources. In our case, we separate the fields in two categories based on their generating sources. We denote the incident field and the sum of the fields generated by the oscillating electrons in the medium The total field at any point z in space is then given by the superposition of the two contributions, To match what we already observe, has this form. However, we already know that inside the medium, z>0, we will only observe what we call the transmitted E-field which travels through the material at speed c/n. Therefore in this formalism, This to say that the radiated field cancels out the incident field and creates a transmitted field traveling within the medium at speed c/n. Using the same logic, outside the medium the radiated field produces the effect of a reflected field traveling at speed c in the opposite direction to the incident field. assume that the wavelength is much larger than the average separation of atoms so that the medium can be considered continuous. We use the usual macroscopic E and B fields and take the medium to be nonmagnetic and neutral so that Maxwell's equations read both the total electric and magnetic fields the set of Maxwell equations inside the dielectric where includes the true and polarization current induced in the material by the outside electric field. We assume a linear relationship between the current and the electric field, hence The set of Maxwell equations outside the dielectric has no current density term The two sets of Maxwell equations are coupled since the vacuum electric field appears in the current density term. For a monochromatic wave at normal incidence, the vacuum electric field has the form with . Now to solve for , we take the curl of the third equation in the first set of Maxwell equation and combine it with the fourth. We simplify the double curl in a couple of steps using Einstein summation. Hence we obtain, Then substituting by , using the fact that we obtain, Realizing that all the fields have the same time dependence , the time derivatives are straightforward and we obtain the following inhomogeneous wave equation with particular solution For the complete solution, we add to the particular solution the general solution of the homogeneous equation which is a superposition of plane waves traveling in arbitrary directions where is found from the homogeneous equation to be Note that we have taken the solution as a coherent superposition of plane waves. Because of symmetry, we expect the fields to be the same in a plane perpendicular to the axis. Hence where is a displacement perpendicular to . Since there are no boundaries in the region , we expect a wave traveling to the right. The solution to the homogeneous equation becomes, Adding this to the particular solution, we get the radiated wave inside the medium () The total field at any position is the sum of the incident and radiated fields at that position. Adding the two components inside the medium, we get the total field This wave travels inside the dielectric at speed We can simplify the above to a familiar form of the index of refraction of a linear isotropic dielectric. To do so, we remember that in a linear dielectric an applied electric field induces a polarization proportional to the electric field . 
When the electric field changes, the induced charges move and produces a current density given by . Since the time dependence of the electric field is , we get which implies that the conductivity Then substituting the conductivity in the equation of , gives which is a more familiar form. For the region , one imposes the condition of a wave traveling to the left. By setting the conductivity in this region , we obtain the reflected wave traveling at the speed of light. Note that the coefficients nomenclature, and , are only adopted to match what we already expect. Hertz vector approach The following is a derivation based on a work by Wangsness and a similar derivation found in chapter 20 of Zangwill's text, Modern Electrodynamics. The setup is as follows, let the infinite half-space be vacuum and the infinite half-space be a uniform, isotropic, dielectric material with electric susceptibility, The inhomogeneous electromagnetic wave equation for the electric field can be written in terms of the electric Hertz Potential, , in the Lorenz gauge as The electric field in terms of the Hertz vectors is given as but the magnetic Hertz vector is 0 since the material is assumed to be non-magnetizable and there is no external magnetic field. Therefore the electric field simplifies to In order to calculate the electric field we must first solve the inhomogeneous wave equation for . To do this, split in the homogeneous and particular solutions Linearity then allows us to write The homogeneous solution, , is the initial plane wave traveling with wave vector in the positive direction We do not need to explicitly find since we are only interested in finding the field. The particular solution, and therefore, , is found using a time dependent Green's function method on the inhomogeneous wave equation for which produces the retarded integral Since the initial electric field is polarizing the material, the polarization vector must have the same space and time dependence More detail about this assumption is discussed by Wangsness. Plugging this into the integral and expressing in terms of Cartesian coordinates produces First, consider only the integration over and and convert this to cylindrical coordinates and call Then using the substitution and so the limits become and Then introduce a convergence factor with into the integrand since it does not change the value of the integral, Then implies , hence . Therefore, Now, plugging this result back into the z-integral yields Notice that is now only a function of and not , which was expected for the given symmetry. This integration must be split into two due to the absolute value inside the integrand. The regions are and . Again, a convergence factor must be introduced to evaluate both integrals and the result is Instead of plugging directly into the expression for the electric field, several simplifications can be made. Begin with the curl of the curl vector identity, therefore, Notice that because has no dependence and is always perpendicular to . Also, notice that the second and third terms are equivalent to the inhomogeneous wave equation, therefore, Therefore, the total field is which becomes, Now focus on the field inside the dielectric. Using the fact that is complex, we may immediately write recall also that inside the dielectric we have . 
Then by coefficient matching we find, and The first relation quickly yields the wave vector in the dielectric in terms of the incident wave as Using this result and the definition of in the second expression yields the polarization vector in terms of the incident electric field as Both of these results can be substituted into the expression for the electric field to obtain the final expression This is exactly the result as expected. There is only one wave inside the medium and it has wave speed reduced by n. The expected reflection and transmission coefficients are also recovered. Extinction lengths and tests of special relativity The characteristic "extinction length" of a medium is the distance after which the original wave can be said to have been completely replaced. For visible light, traveling in air at sea level, this distance is approximately 1 mm. In interstellar space, the extinction length for light is 2 light years. At very high frequencies, the electrons in the medium can't "follow" the original wave into oscillation, which lets that wave travel much further: for 0.5 MeV gamma rays, the length is 19 cm of air and 0.3 mm of Lucite, and for 4.4 GeV, 1.7 m in air, and 1.4 mm in carbon. Special relativity predicts that the speed of light in vacuum is independent of the velocity of the source emitting it. This widely believed prediction has been occasionally tested using astronomical observations. For example, in a binary star system, the two stars are moving in opposite directions, and one might test the prediction by analyzing their light. (See, for instance, the De Sitter double star experiment.) Unfortunately, the extinction length of light in space nullifies the results of any such experiments using visible light, especially when taking account of the thick cloud of stationary gas surrounding such stars. However, experiments using X-rays emitted by binary pulsars, with much longer extinction length, have been successful. References Eponymous theorems of physics Scattering, absorption and radiative transfer (optics)
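Since the rendered equations of the derivations above were lost in extraction, the following LaTeX fragment restates the content of the theorem for the normal-incidence, half-space geometry in generic textbook notation (E_0, k_0, t, n and chi are assumed symbols, not necessarily those used by the cited authors):

    E_{\mathrm{inc}}(z,t) = E_0\,e^{i(k_0 z - \omega t)}, \qquad k_0 = \omega/c,
    E_{\mathrm{rad}}(z,t) = -E_0\,e^{i(k_0 z - \omega t)} + t\,E_0\,e^{i(n k_0 z - \omega t)} \qquad (z > 0),
    E_{\mathrm{tot}} = E_{\mathrm{inc}} + E_{\mathrm{rad}} = t\,E_0\,e^{i(n k_0 z - \omega t)}, \qquad n^2 = 1 + \chi .

The vacuum-speed part of the radiated field cancels the incident wave, leaving only the transmitted wave travelling at c/n, which is exactly the statement made in prose in the Overview.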
Ewald–Oseen extinction theorem
Physics,Chemistry
2,521
903,032
https://en.wikipedia.org/wiki/Modular%20exponentiation
Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both Diffie–Hellman key exchange and RSA public/private keys. Modular exponentiation is the remainder when an integer (the base) is raised to the power (the exponent), and divided by a positive integer (the modulus); that is, . From the definition of division, it follows that . For example, given , and , dividing by leaves a remainder of . Modular exponentiation can be performed with a negative exponent by finding the modular multiplicative inverse of modulo using the extended Euclidean algorithm. That is: , where and . Modular exponentiation is efficient to compute, even for very large integers. On the other hand, computing the modular discrete logarithm – that is, finding the exponent when given , , and – is believed to be difficult. This one-way function behavior makes modular exponentiation a candidate for use in cryptographic algorithms. Direct method The most direct method of calculating a modular exponent is to calculate directly, then to take this number modulo . Consider trying to compute , given , , and : One could use a calculator to compute 413; this comes out to 67,108,864. Taking this value modulo 497, the answer is determined to be 445. Note that is only one digit in length and that is only two digits in length, but the value is 8 digits in length. In strong cryptography, is often at least 1024 bits. Consider and , both of which are perfectly reasonable values. In this example, is 77 digits in length and is 2 digits in length, but the value is 1,304 decimal digits in length. Such calculations are possible on modern computers, but the sheer magnitude of such numbers causes the speed of calculations to slow considerably. As and increase even further to provide better security, the value becomes unwieldy. The time required to perform the exponentiation depends on the operating environment and the processor. The method described above requires multiplications to complete. Memory-efficient method Keeping the numbers smaller requires additional modular reduction operations, but the reduced size makes each operation faster, saving time (as well as memory) overall. This algorithm makes use of the identity The modified algorithm is: Inputs An integer (base), integer (exponent), and a positive integer (modulus) Outputs The modular exponent where Initialise and loop variable While do Increment by 1 Calculate Output Note that at the end of every iteration through the loop, the equation holds true. The algorithm ends when the loop has been executed times. At that point contains the result of . In summary, this algorithm increases by one until it is equal to . At every step multiplying the result from the previous iteration, , by and performing a modulo operation on the resulting product, thereby keeping the resulting a small integer. The example , , and is presented again. The algorithm performs the iteration thirteen times: The final answer for is therefore 445, as in the direct method. Like the first method, this requires multiplications to complete. However, since the numbers used in these calculations are much smaller than the numbers used in the first algorithm's calculations, the computation time decreases by a factor of at least in this method. 
In pseudocode, this method can be performed the following way:

    function modular_pow(base, exponent, modulus) is
        if modulus = 1 then
            return 0
        c := 1
        for e_prime = 0 to exponent-1 do
            c := (c * base) mod modulus
        return c

Right-to-left binary method A third method drastically reduces the number of operations to perform modular exponentiation, while keeping the same memory footprint as in the previous method. It is a combination of the previous method and a more general principle called exponentiation by squaring (also known as binary exponentiation). First, it is required that the exponent be converted to binary notation. That is, can be written as: In such notation, the length of is bits. can take the value 0 or 1 for any such that . By definition, . The value can then be written as: The solution is therefore: Pseudocode The following is an example in pseudocode based on Applied Cryptography by Bruce Schneier. The inputs base, exponent, and modulus correspond to , , and in the equations given above.

    function modular_pow(base, exponent, modulus) is
        if modulus = 1 then
            return 0
        Assert :: (modulus - 1) * (modulus - 1) does not overflow base
        result := 1
        base := base mod modulus
        while exponent > 0 do
            if (exponent mod 2 == 1) then
                result := (result * base) mod modulus
            exponent := exponent >> 1
            base := (base * base) mod modulus
        return result

Note that upon entering the loop for the first time, the code variable base is equivalent to . However, the repeated squaring in the third line of code ensures that at the completion of every loop, the variable base is equivalent to , where is the number of times the loop has been iterated. (This makes the next working bit of the binary exponent exponent_i, where the least-significant bit is exponent_0.) The first line of code simply carries out the multiplication in . If is zero, no code executes since this effectively multiplies the running total by one. If instead is one, the variable base (containing the value of the original base) is simply multiplied in. In this example, the base is raised to the exponent . The exponent is 1101 in binary. There are four binary digits, so the loop executes four times, with values , and . First, initialize the result to 1 and preserve the value of in the variable : .
Step 1) bit 1 is 1, so set ; set .
Step 2) bit 2 is 0, so do not reset ; set .
Step 3) bit 3 is 1, so set ; set .
Step 4) bit 4 is 1, so set ; This is the last step so we don't need to square . We are done: is now .
Here is the above calculation, where we compute to the power , performed modulo 497.
Initialize: and .
Step 1) bit 1 is 1, so set ; set .
Step 2) bit 2 is 0, so do not reset ; set .
Step 3) bit 3 is 1, so set ; set .
Step 4) bit 4 is 1, so set ; We are done: is now , the same result obtained in the previous algorithms.
The running time of this algorithm is O(log exponent). When working with large values of exponent, this offers a substantial speed benefit over the previous two algorithms, whose time is O(exponent). For example, if the exponent were 2^20 = 1,048,576, this algorithm would have 20 steps instead of 1,048,576 steps. Implementation in Lua

    function modPow(b, e, m)
        if m == 1 then
            return 0
        end
        local r = 1
        b = b % m
        while e > 0 do
            if e % 2 == 1 then
                r = (r*b) % m
            end
            b = (b*b) % m
            e = e >> 1 -- use 'e = math.floor(e / 2)' on Lua 5.2 or older
        end
        return r
    end

Left-to-right binary method We can also use the bits of the exponent in left to right order. In practice, we would usually want the result modulo some modulus .
In that case, we would reduce each multiplication result before proceeding. For simplicity, the modulus calculation is omitted here. This example shows how to compute using left to right binary exponentiation. The exponent is 1101 in binary; there are 4 bits, so there are 4 iterations. Initialize the result to 1: .
Step 1) ; bit 1 = 1, so compute ;
Step 2) ; bit 2 = 1, so compute ;
Step 3) ; bit 3 = 0, so we are done with this step;
Step 4) ; bit 4 = 1, so compute .
Minimum multiplications In The Art of Computer Programming, Vol. 2, Seminumerical Algorithms, page 463, Donald Knuth notes that contrary to some assertions, this method does not always give the minimum possible number of multiplications. The smallest counterexample is for a power of 15, when the binary method needs six multiplications. Instead, form x^3 in two multiplications, then x^6 by squaring x^3, then x^12 by squaring x^6, and finally x^15 by multiplying x^12 and x^3, thereby achieving the desired result with only five multiplications. However, many pages follow describing how such sequences might be contrived in general. Generalizations Matrices The -th term of any constant-recursive sequence (such as Fibonacci numbers or Perrin numbers) where each term is a linear function of previous terms can be computed efficiently modulo by computing , where is the corresponding companion matrix. The above methods adapt easily to this application. This can be used for primality testing of large numbers , for example. Pseudocode A recursive algorithm for ModExp(A, b, c) = , where is a square matrix.

    function Matrix_ModExp(Matrix A, int b, int c) is
        if b == 0 then
            return I  // The identity matrix
        if (b mod 2 == 1) then
            return (A * Matrix_ModExp(A, b - 1, c)) mod c
        Matrix D := Matrix_ModExp(A, b / 2, c)
        return (D * D) mod c

Finite cyclic groups Diffie–Hellman key exchange uses exponentiation in finite cyclic groups. The above methods for modular matrix exponentiation clearly extend to this context. The modular matrix multiplication is simply replaced everywhere by the group multiplication . Reversible and quantum modular exponentiation In quantum computing, modular exponentiation appears as the bottleneck of Shor's algorithm, where it must be computed by a circuit consisting of reversible gates, which can be further broken down into quantum gates appropriate for a specific physical device. Furthermore, in Shor's algorithm it is possible to know the base and the modulus of exponentiation at every call, which enables various circuit optimizations. Software implementations Because modular exponentiation is an important operation in computer science, and there are efficient algorithms (see above) that are much faster than simply exponentiating and then taking the remainder, many programming languages and arbitrary-precision integer libraries have a dedicated function to perform modular exponentiation:
Python's built-in pow() (exponentiation) function takes an optional third argument, the modulus
.NET Framework's BigInteger class has a ModPow() method to perform modular exponentiation
Java's java.math.BigInteger class has a modPow() method to perform modular exponentiation
MATLAB's powermod function from the Symbolic Math Toolbox
Wolfram Language has the PowerMod function
Perl's Math::BigInt module has a bmodpow() method to perform modular exponentiation
Raku has a built-in routine expmod.
Go's big.Int type contains an Exp() (exponentiation) method whose third parameter, if non-nil, is the modulus
PHP's BC Math library has a bcpowmod() function to perform modular exponentiation
The GNU Multiple Precision Arithmetic Library (GMP) library contains a mpz_powm() function to perform modular exponentiation
Custom Function @PowerMod() for FileMaker Pro (with 1024-bit RSA encryption example)
Ruby's openssl package has the OpenSSL::BN#mod_exp method to perform modular exponentiation.
The HP Prime Calculator has the CAS.powmod() function to perform modular exponentiation. For a^b mod c, a can be no larger than 1 EE 12. This is the maximum precision of most HP calculators, including the Prime.
See also
Montgomery reduction, for calculating the remainder when the modulus is very large
Kochanski multiplication, serializable method for calculating the remainder when the modulus is very large
Barrett reduction, algorithm for calculating the remainder when the modulus is very large
References External links Paul Garrett, Fast Modular Exponentiation Java Applet Cryptographic algorithms Number theoretic algorithms Modular arithmetic Articles with example pseudocode
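As a concrete counterpart to the pseudocode above, here is a short Python version of the right-to-left binary (square-and-multiply) method; note that Python's built-in three-argument pow(base, exponent, modulus), mentioned in the list above, already performs the same computation:

    def modular_pow(base, exponent, modulus):
        # Right-to-left binary method described above.
        if modulus == 1:
            return 0
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                  # current bit of the exponent is 1
                result = (result * base) % modulus
            exponent >>= 1                    # move to the next bit
            base = (base * base) % modulus    # base is now b^(2^i) mod m
        return result

    # The worked example from the article: 4^13 mod 497
    print(modular_pow(4, 13, 497))   # 445
    print(pow(4, 13, 497))           # built-in equivalent, also 445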
Modular exponentiation
Mathematics
2,653
15,204,685
https://en.wikipedia.org/wiki/Lithium%20cobalt%20oxide
Lithium cobalt oxide, sometimes called lithium cobaltate or lithium cobaltite, is a chemical compound with formula . The cobalt atoms are formally in the +3 oxidation state, hence the IUPAC name lithium cobalt(III) oxide. Lithium cobalt oxide is a dark blue or bluish-gray crystalline solid, and is commonly used in the positive electrodes of lithium-ion batteries. Structure The structure of has been studied with numerous techniques including x-ray diffraction, electron microscopy, neutron powder diffraction, and EXAFS. The solid consists of layers of monovalent lithium cations () that lie between extended anionic sheets of cobalt and oxygen atoms, arranged as edge-sharing octahedra, with two faces parallel to the sheet plane. The cobalt atoms are formally in the trivalent oxidation state () and are sandwiched between two layers of oxygen atoms (). In each layer (cobalt, oxygen, or lithium), the atoms are arranged in a regular triangular lattice. The lattices are offset so that the lithium atoms are farthest from the cobalt atoms, and the structure repeats in the direction perpendicular to the planes every three cobalt (or lithium) layers. The point group symmetry is in Hermann-Mauguin notation, signifying a unit cell with threefold improper rotational symmetry and a mirror plane. The threefold rotational axis (which is normal to the layers) is termed improper because the triangles of oxygen (being on opposite sides of each octahedron) are anti-aligned. Preparation Fully reduced lithium cobalt oxide can be prepared by heating a stoichiometric mixture of lithium carbonate and cobalt(II,III) oxide or metallic cobalt at 600–800 °C, then annealing the product at 900 °C for many hours, all under an oxygen atmosphere. Nanometer-size particles more suitable for cathode use can also be obtained by calcination of hydrated cobalt oxalate β-·2, in the form of rod-like crystals about 8 μm long and 0.4 μm wide, with lithium hydroxide , up to 750–900 °C. A third method uses lithium acetate, cobalt acetate, and citric acid in equal molar amounts, in water solution. Heating at 80 °C turns the mixture into a viscous transparent gel. The dried gel is then ground and heated gradually to 550 °C. Use in rechargeable batteries The usefulness of lithium cobalt oxide as an intercalation electrode was discovered in 1980 by an Oxford University research group led by John B. Goodenough and Tokyo University's Koichi Mizushima. The compound is now used as the cathode in some rechargeable lithium-ion batteries, with particle sizes ranging from nanometers to micrometers. During charging, the cobalt is partially oxidized to the +4 state, with some lithium ions moving to the electrolyte, resulting in a range of compounds with 0 < x < 1. Batteries produced with cathodes have very stable capacities, but have lower capacities and power than those with cathodes based on (especially nickel-rich) nickel-cobalt-aluminum (NCA) or nickel-cobalt-manganese (NCM) oxides. Issues with thermal stability are better for cathodes than other nickel-rich chemistries although not significantly. This makes batteries susceptible to thermal runaway in cases of abuse such as high temperature operation (>130 °C) or overcharging. At elevated temperatures, decomposition generates oxygen, which then reacts with the organic electrolyte of the cell, this reaction is often seen in Lithium-Ion batteries where the battery becomes highly volatile and must be recycled in a safe manner. 
The decomposition of LiCoO2 is a safety concern due to the magnitude of this highly exothermic reaction, which can spread to adjacent cells or ignite nearby combustible material. In general, this is seen for many lithium-ion battery cathodes. The delithiation process is usually by chemical means, although a novel physical process has been developed based on ion sputtering and annealing cycles, leaving the material properties intact. See also List of battery types Sodium cobalt oxide References External links Imaging the Structure of Lithium Cobalt Oxide at Atomic Level from the Lawrence Berkeley National Laboratory Cobalt(III) compounds Lithium compounds Oxides
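A compact way of writing the charging behaviour described above (partial oxidation of cobalt and release of lithium ions to the electrolyte) is the standard cathode half-reaction; this is generic textbook notation rather than an equation taken from this article:

    \mathrm{LiCoO_2 \;\rightleftharpoons\; Li_{1-x}CoO_2 + x\,Li^+ + x\,e^-} \qquad (0 < x < 1)

The forward direction corresponds to charging (delithiation), the reverse to discharge.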
Lithium cobalt oxide
Chemistry
881
49,662,809
https://en.wikipedia.org/wiki/Whole%20genome%20bisulfite%20sequencing
Whole genome bisulfite sequencing is a next-generation sequencing technology used to determine the DNA methylation status of single cytosines by treating the DNA with sodium bisulfite before high-throughput DNA sequencing. The DNA methylation status at various genes can reveal information regarding gene regulation and transcriptional activities. This technique was developed in 2009 along with reduced representation bisulfite sequencing after bisulfite sequencing became the gold standard for DNA methylation analysis. Whole genome bisulfite sequencing measures single-cytosine methylation levels genome-wide and directly estimates the ratio of molecules methylated rather than enrichment levels. Currently, this technique has recognized and tested approximately 95% of all cytosines in known genomes. With the improvement of library preparation methods and next-generation sequencing technology over the past decade, whole genome bisulfite sequencing has become an increasingly widespread and informative method for analyzing DNA methylation in epigenomic-wide studies. History Prior to the development of whole genome bisulfite sequencing, genome methylation analysis relied heavily on early non-specific and differential methods such as paper chromatography, high-performance liquid chromatography, and thin-layer chromatography to analyze methylation profiles. These methods were limited by the inability to amplify methylated DNA via polymerase chain reaction in vitro due to loss of methylation status. As a result, much of these early methods relied on detecting and analyzing naturally-manifested methylated cytosines in vivo rather than chemically methylated cytosines. In 1970, a breakthrough occurred when it was discovered that treating DNA with sodium bisulfite deaminated cytosine residues into uracil. In the following decade, this discovery led to the revelation that unmethylated cytosine reacted much faster to sodium bisulfite treatment than did 5-methylcytosine. This difference in reaction rates created the possibility of identifying chemical changes in DNA as an easily detectable genetic marker. Whole genome bisulfite sequencing was derived as a combination of this bisulfite treatment and next-generation sequencing technology, such as shotgun sequencing. The whole genome sequencing technique was first applied to the DNA methylation mapping at single nucleotide resolution to Arabidopsis thaliana in 2008, and shortly after in 2009, the first single-base-resolution DNA methylation map of the entire human genome was created using whole genome bisulfite sequencing. Since its development, many various protocols of whole genome bisulfite sequencing have been developed aiming to improve the efficiency and efficacy of its single-base mapping. As the costs of next-generation sequencing have decreased, whole genome bisulfite sequencing has become more widely used in clinical and experimental research. Currently, multiple public datasets of genomic data have been established, and this technique has recognized and tested approximately 95% of all cytosines in known genomes. Method The following steps are derived from one potential workflow of conventional whole genome bisulfite sequencing: target DNA extraction, bisulfite conversion, library amplification, and bioinformatics analysis. However, various sequencing systems and analysis tools often adapt the technical parameters and order of the following step processes in order to optimize assay coverage and efficacy. 
DNA extraction Library preparation protocols undergo DNA fragmentation, end repair, dA-tailing, and adapter ligation prior to bisulfite treatment and library amplification. Standard fragmentation under high-throughput technology such as Illumina Genome Analyser and Solexa requires nebulization to generate fragments that range from 0-1200 base pairs. After fragmentation, end repair enzymes and complementary adapters are then applied to the DNA in an end-prep polymerase chain reaction and adapter ligation reaction, respectively. Size selection occurs before the DNA is treated with sodium bisulfite. Conventional methods of eukaryotic DNA preparation during sequencing use a wide variety of DNA input amount, varying from as little as 10 ng for novel NGS library alternatives, such as the tagmentation approach, to as much as 500-1000 ng of DNA as sample input. Bisulfite conversion The adapter-ligated DNA sample is treated with sodium bisulfite, a chemical compound that converts unmethylated cytosines into uracil, at low pH and high temperatures. The chemical reaction is depicted in Figure 1, where sulfonation occurs at the carbon-6 position of cytosine to produce the intermediate cytosine sulfonate. This intermediate then undergoes irreversible hydrolytic deamination to create uracil sulfonate. Under alkaline conditions, uracil sulfonate desulfonates to generate uracil. This enables methylation detection by distinguishing the methylated cytosines (5-methylcytosine), which resist bisulfite treatment, from uracil. During amplification by polymerase chain reaction, the uracils are converted into thymines. Methylated cytosines are then recognized as cytosines. Their locations are then identified by comparison of the bisulfite-treated and original DNA sequence. Following bisulfite treatment, purification of the sample is required to remove unwanted products including bisulfite salts. Library amplification In order to amplify the epigenome library, bisulfite-treated DNA is primed to generate DNA with a specific tagging sequence. The 3' end of this sequence is then tagged again, creating DNA fragments with markers on either end. These fragments are amplified in a final polymerase chain reaction reaction, after which the library is prepped for sequencing-by-synthesis. This is demonstrated in Figure 2, in which high-throughput sequencing system developed by biotechnology company, Illumina, perform comprehensive assays based on sequencing-by-synthesis of base pairs. Bioinformatics analysis Following library amplification, a series of analyses can be performed on the expanded library to determine various methylation characteristics or map a genome-wide methylation profile. One such study aligns the new reads against the reference genome in order to directly compare locations of methylated cytosines and C-T mismatches. This requires software such as SOAP for side-by-side comparison of the genomes. Another potential sequencing analysis is methylated cytosine calling, which computes methylated cytosine ratios by mapping probabilities based on read quality. This helps determine methylated cytosine locations across the genome. Finally, global trends of methylome can be analyzed by calculating the distribution ratios of CG, CHGG, and CHH in methylated cytosines across the genome. These ratios can reflect features of whole genome methylation maps of certain species. 
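A minimal Python sketch of the conversion-and-calling logic described in this Method section: unmethylated cytosine is read as thymine after bisulfite treatment and PCR, methylated cytosine remains cytosine, and the methylation level at a reference C position is estimated from the fraction of reads that still show C there. The function names and toy reads are illustrative only, not part of any real pipeline (no alignment, strand handling, or quality filtering):

    def bisulfite_convert(seq, methylated_positions):
        # Unmethylated C -> U (sequenced as T after PCR); methylated C is protected.
        return "".join(
            base if base != "C" or i in methylated_positions else "T"
            for i, base in enumerate(seq)
        )

    def methylation_ratio(reference, reads, position):
        # Fraction of reads retaining C at a reference C position.
        assert reference[position] == "C"
        calls = [read[position] for read in reads]
        return calls.count("C") / len(calls)

    reference = "ACGTCCGA"
    reads = [bisulfite_convert(reference, meth) for meth in ({4}, {4}, set(), {1, 4})]
    print(reads)                                   # ['ATGTCTGA', 'ATGTCTGA', 'ATGTTTGA', 'ACGTCTGA']
    print(methylation_ratio(reference, reads, 4))  # 0.75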
Applications Due to its ability to screen methylation status at single-nucleotide resolution across a given genome, whole genome bisulfite sequencing has become increasingly promising in aiding fundamental epigenomics research, novel hypotheses on DNA methylation, and investigations of future large-scale epidemiological studies. This whole genome approach is also capable of sensitive cytosine-methylation detection under specific sequences across an entire genome, which increases its potential to identify specific DNA methylation sites and their relation to certain gene expressions. DNA Methylation The whole genome bisulfite sequencing technique is capable of sensitive cytosine-methylation detection under specific sequences across an entire genome, which increases its potential to identify specific DNA methylation sites and their relation to certain gene expressions. The use of whole genome bisulfite sequencing to create the first human DNA methylome in 2009 also helped identify a significant ratio of non-CG methylation. As a result, multiple single-base resolution methylomes of the human genome continue to be produced in order to identify the role of intragenic DNA methylation in gene expression and regulation. Future studies aim to use whole genome bisulfite sequencing in order to investigate the role DNA methylation has in multifarious cellular processes such as cellular differentiation, embryogenesis, X-inactivation, genomic imprinting, and tumorigenesis. Single-nucleotide maps have already been sequenced for two human cell lines, H1 human embryonic stem cells and IMR90 fetal lung fibroblasts, in order to study patterns of non-CG methylation in human cells. Developmental biology Whole genome bisulfite sequencing has also been applied to developmental biology studies in which non-CG methylation was discovered prevalent in pluripotent stem cells and oocytes. This technique helped researchers discover that non-CG methylation accumulated during oocyte growth and covered over half of all methylation in mouse germinal vesicle oocytes. Similarly, in plants, whole genome bisulfite sequencing was used to examine CG, CHH, and CHG methylation. It was then discovered that the plant germline conserved CG and CHG methylation while mammals lost CHH methylation in microspores and sperm cells. Other fields The unlimited resources provided by the approach of an entire genome have spurred many novel hypotheses on how whole genome bisulfite sequencing could be used in other various fields including disease diagnosis and forensic science. Studies have shown that whole genome bisulfite sequencing could detect abnormal methylation, or more specifically hyper-methylated suppressor genes, that are often seen in cancers including leukemia. Additionally, whole genome bisulfite sequencing has been applied to blood spot samples in forensic investigations to generate high-quality DNA methylation analyses on dried stains. Limitations Technical concerns The widespread use of whole genome bisulfite sequencing has been primarily limited by its excessive cost, complex data output, and minimal required coverage. Due to the high amount and subsequent cost of DNA input, many studies using whole genome bisulfite sequencing assays occur with few or no biological replicates. For human samples, the US National Institutes of Health (NIH) Roadmap Epigenomics Project recommends a minimum of 30x coverage sequencing to achieve accurate results and approximately 80 million aligned, high quality reads. 
Consequently, large-scale studies for genome-wide methylation profiling remain less cost-effective, often requiring the entire genome to be re-sequenced multiple times for every experiment. Current studies are being conducted to reduce the conventional minimum coverage requirements while maintaining mapping accuracy. Finally, the technique is also limited by the complexity of the data and the lack of sufficiently advanced analytical tools for downstream computational requirements. The current bioinformatics requirements for accurate data interpretation are ahead of existing technology, which stalls the accessibility of sequencing results to the general public. Biases and over-representation of DNA methylation Additionally, there are biological limitations concerning various steps in the standard protocol, particularly in the library preparation method. One of the biggest concerns is the potential of bias in the base composition of sequences and over-representation of methylated DNA data following bioinformatics analyses. Bias can arise from multiple unintended effects of bisulfite conversion including DNA degradation. This degradation can cause uneven sequence coverage by misrepresenting genomic sequences and overestimating 5-methylcytosine values. Additionally, the bisulfite conversion process only distinguishes unmethylated cytosine from 5-methylcytosine. As a result, specificity between 5-methylcytosine and 5-hydroxymethylcytosine is limited. Another potential source of bias arises from polymerase chain reaction amplification of the library, which affects sequences with highly skewed base compositions due to high rates of polymerase sequence errors in high AT-content, bisulfite-converted DNA. See also Reduced representation bisulfite sequencing DNA methylation Shotgun sequencing ChIP-sequencing References DNA sequencing
Whole genome bisulfite sequencing
Chemistry,Biology
2,383
71,479,109
https://en.wikipedia.org/wiki/Sergey%20Bagayev%20%28scientist%29
Sergey Nikolayevich Bagayev (; 9 September 1941 – 15 August 2024) was a Russian scientist, a specialist in the field of quantum electronics and laser physics, director of the Institute of Laser Physics (1992–2016). His h-index was 16. Biography Sergey Bagayev was born in Novosibirsk on 9 September 1941. In 1964, he graduated from the Faculty of Physics of the Novosibirsk Electrotechnical Institute (NETI). In 1991, the scientist, together with Veniamin Chebotayev, participated in the creation of the Institute of Laser Physics, and in 1992, he became its director. He headed departments and taught at Novosibirsk State University, Novosibirsk State Technical University and Moscow Institute of Physics and Technology. Bagayev died on 15 August 2024, at the age of 82. Scientific activity Bagayev discovered new qualitative features of the absorption of laser radiation by a gas at low pressure. The physicist was a member of the editorial boards of Russian and international journals: Quantum Electronics, Laser Physics, Applied Physics B: Lasers and Optics, Optical Review, Opto-Electronics Letters). Awards In 1998, the scientist received the Order of Friendship of Peoples and the State Prize. In 2004, he was made a Chevalier of the Legion of Honor for his outstanding contribution to scientific cooperation between Russia and France. In 2006, Bagayev was awarded the Order "For Merit to the Fatherland" of the IV degree. References 1941 births 2024 deaths Russian physicists Quantum physicists Laser researchers Scientists from Novosibirsk Novosibirsk State Technical University alumni Academic staff of Novosibirsk State Technical University Academic staff of Novosibirsk State University Recipients of the Order of Friendship Knights of the Legion of Honour
Sergey Bagayev (scientist)
Physics
366
41,262,804
https://en.wikipedia.org/wiki/HD%204732
HD 4732 is a red giant star of magnitude 5.9 located in the constellation Cetus. It is 189 light years from the Solar System. HD 4732 is located in the celestial Southern Hemisphere, although it can be observed from most regions of the Earth. Near Antarctica the star is circumpolar, while it is always below the horizon near the Arctic. Its magnitude of 5.9 places it at the limit of visibility to the naked eye, so observing this star with the naked eye is possible only under a clear sky with no Moon. The best time to observe this star in the evening sky falls in the months between September and February, and from both hemispheres the period of visibility remains approximately the same, thanks to the star's position not far from the celestial equator. The star is a red giant with an absolute magnitude of 2.14, and its radial velocity indicates that the star is moving away from the Solar System. Planetary system In November 2012, a two-planet system was announced orbiting this star, based on radial velocity measurements from the Okayama Astrophysical Observatory and the Australian Astronomical Observatory. The planetary system has two giant planets with identical minimum masses of 2.4 times that of Jupiter and orbital periods of 360 days and 2732 days. The maximum mass of the planets cannot exceed 28 times that of Jupiter based on dynamical stability analysis for the system, if the planets are coplanar and prograde. The planetary system of HD 4732 was found to be stable in 2019. See also Okayama Planet Search Program References External links Star Simbad data from the archive Data on the star system from the site Vizier Cetus K-type giants 004732 0228 Planetary systems with two confirmed planets 003834 Durchmusterung objects J00491393-2408119
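The quoted distance can be cross-checked against the two magnitudes with the standard distance-modulus relation m - M = 5 log10(d / 10 pc); this is a generic formula, not a calculation performed in the article:

    m, M = 5.90, 2.14                      # apparent and absolute magnitude quoted above
    d_pc = 10 ** ((m - M + 5) / 5)         # distance in parsecs
    d_ly = d_pc * 3.2616                   # 1 pc is about 3.2616 light years
    print(round(d_pc, 1), round(d_ly))     # about 56.5 pc, about 184 ly

The result is close to the 189 light years quoted above; the small difference is consistent with rounding of the magnitudes and neglect of interstellar extinction.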
HD 4732
Astronomy
363
1,041,645
https://en.wikipedia.org/wiki/Pentalene
Pentalene is a polycyclic hydrocarbon composed of two fused cyclopentadiene rings. It has chemical formula . It is antiaromatic, because it has 4n π electrons where n is any integer. For this reason it dimerizes even at temperatures as low as −100 °C. The derivative 1,3,5-tri-tert-butylpentalene was synthesized in 1973. Because of the tert-butyl substituents this compound is thermally stable. Pentalenes can also be stabilized by benzannulation for example in the compounds benzopentalene and dibenzopentalene. Dilithium pentalenide was isolated in 1962, long before pentalene itself in 1997. It is prepared from reaction of dihydropentalene (pyrolysis of an isomer of dicyclopentadiene) with n-butyllithium in solution and forms a stable salt. In accordance with its structure proton NMR shows 2 signals in a 2 to 1 ratio. The addition of two electrons removes the antiaromaticity; it becomes a planar 10π-electron aromatic species and is thus a bicyclic analogue of the cyclooctatetraene (COT) dianion . The dianion can also be considered as two fused cyclopentadienyl rings, and has been used as a ligand in organometallic chemistry to stabilise many types of mono- and bimetallic complexes, including those containing multiple metal-metal bonds, and anti-bimetallics with extremely high levels of electronic communication between the centers. See also Cyclooctatetraene Benzocyclobutadiene Acepentalene Butalene Heptalene Octalene References Antiaromatic compounds Hydrocarbons Bicyclic compounds
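The electron counts above follow the usual Hückel bookkeeping for a planar, cyclic, fully conjugated π system. The tiny sketch below merely encodes the 4n (antiaromatic) versus 4n+2 (aromatic) rule used in the paragraph; it is an illustration of the counting argument, not a general aromaticity predictor:

    def huckel_character(pi_electrons):
        # 4n+2 -> aromatic, 4n -> antiaromatic (planarity and conjugation assumed).
        if pi_electrons % 4 == 2:
            return "aromatic (4n+2)"
        if pi_electrons % 4 == 0:
            return "antiaromatic (4n)"
        return "neither 4n nor 4n+2"

    print(huckel_character(8))    # pentalene: 8 pi electrons -> antiaromatic
    print(huckel_character(10))   # pentalene dianion: 10 pi electrons -> aromatic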
Pentalene
Chemistry
386
26,908,265
https://en.wikipedia.org/wiki/Atiprosin
Atiprosin (developmental code name AY-28,228) is an antihypertensive agent which acts as a selective α1-adrenergic receptor antagonist. It also possesses some antihistamine activity, though it is some 15-fold weaker in this regard than as an alpha blocker. It was never marketed. See also Prazosin Ketanserin References External links Abandoned drugs Alpha-1 blockers Antihistamines Antihypertensive agents Indoles Isopropyl compounds Piperazines
Atiprosin
Chemistry
113
4,334,477
https://en.wikipedia.org/wiki/Boss%20SP-303
The Boss Dr. Sample SP-303 is a discontinued digital sampler from Boss, successor of the Boss SP-202 Dr. Sample. The SP-303 was revamped and redesigned in 2005, and released as the SP-404, by Roland Corporation. Features While the Dr. Sample SP-303 may lack some of the features seen on other hip hop production samplers such as the Ensoniq ASR-10, the Akai MPC, and later SP installments, it however has many other unique features that make up for that. Like the SP-202, the SP-303 utilizes 8 pads, 4 soundbanks, and an external mic. The sampler provides up to three minutes and twelve seconds of sampling. The sample time can be expanded by the use of SmartMedia cards (8MB-64MB supported). The SP-303 features twenty-six internal effects that can be applied to samples and external sources as well. Some of these effects are Filter + Drive, Pitch, Delay, Vinyl Sim, Isolator, Reverb, and Tape Echo. Another notable feature is the built-in pattern sequencer, where loops and patterns can be programmed. Musicians The SP-303 is often praised by various musicians for its unique sound qualities, specifically its pitch and compression effects. Frequent SP-303 and 404 user Dibiase said of the sampler, "The difference between the 303 and SP-404 is that the vinyl sound compression sounds way different in the 303. It has a grittier sound." The sampler has often been used live and in the studio by artists such as Animal Collective, Panda Bear, Four Tet, Madlib and J Dilla. Dilla famously used only the SP-303 and a 45 record player to create 29 of the 31 tracks from Donuts while hospitalized. Madlib produced most of the collaboration album Madvillainy, by using a Boss SP-303, a portable turntable, and a cassette deck. This including beats for "Strange Ways", "Raid", and "Rhinestone Cowboy", which were all produced in his hotel room in São Paulo. References External links BOSS SP-303 Dr. Sample | Vintage Synth Explorer BOSS Global Official site Boss SP-303 Sound On Sound review (archive.org) SP-Forums.com - An active forum dedicated to Roland's SP range SP-303 Boss Corporation Samplers (musical instrument) Grooveboxes Music sequencers Sound modules Music workstations Hip-hop production Japanese inventions
Boss SP-303
Engineering
514
3,625,803
https://en.wikipedia.org/wiki/Hamburg%20University%20of%20Technology
The Hamburg University of Technology (in German Technische Universität Hamburg, abbreviated TUHH (HH as acronym of Hamburg state) or TU Hamburg) is a research university in Germany. The university was founded in 1978 and in 1982/83 lecturing followed. Around 110 senior lecturers/professors and 1,410 members of staff (802 scientists, including externally funded researchers) work at the TUHH. It is located in Harburg, a district in the south of Hamburg. Academics Interdisciplinary Studies Instead of traditional faculties, the TUHH has separate administrations for teaching and for research: research is conducted in departments, teaching is divided into schools of study. Scientists from different subjects work together in the departments. Curricula are organized by academic speciality, depending on the course of study followed. In the year 2000, the TUHH defined the following strategic topics of research activities: Information as economic value Organization for enterprises Production and process integrated environmental protection Sustainable management of resources Advanced energy systems and energy management Sustainable urban structures Systems of transport and logistic Advanced information and communication technologies Advanced materials and microsystems Biotechnologies and biomedical engineering Research is divided into six interdisciplinary research departments: Town, Environment, Technology Systems Engineering Civil Engineering and Marine Technology Information and Communication Technology Materials, Design, Manufacturing Processing Technology and Energy Systems Teaching is organized in eight schools of study: Civil Engineering Electrical Engineering, Computer Science and Mathematics Vocational Subject Education Management Science and Technology Mechanical Engineering Process and Chemical Engineering General Engineering Sciences Naval Architecture Northern Institute of Technology Management The Northern Institute of Technology (NIT) Management is a private educational institute located on the campus of the Hamburg University of Technology (TUHH) in Hamburg, Germany. It was founded in 1998 as a public-private partnership of the Hamburg University of Technology and sponsoring companies. The NIT offers a double degree master's program in technology management in cooperation with the Hamburg University of Technology (TUHH): Students study a Master of Science in an engineering or science program at the TUHH while studying in the MBA program at the NIT—which is also offered part-time for working professionals. Students will graduate from the NIT with an MBA or Master of Technology Management degree after completion of the courses. Besides the classical management disciplines, the MBA program includes modules from areas of classical management, self-development, innovation management, company foundation, and digitalization to familiarize students with the entrepreneurial challenges of the future: All classes are held in the English language. The faculty consists of professors and industry experts from various universities and international companies. The management program offered by the NIT is accredited by the Foundation for International Business Administration Accreditation (FIBAA). After the NIT developed the content and structure of its Master's program in Technology Management in 2019, the FIBAA reaccredited the program and it may continue to bear their seal of approval. Innovation TUHH founded the TUHH Technologie GmbH (TuTech). 
Since 1992 the TuTech has been responsible for technology transfer and advice, for trade fairs and further training, as well as congresses and the initiation of projects. Examples are the "Starterzentrum", the local initiative "hep", the "Gründerrat" of the TUHH as well as a course of studies in career management. Young entrepreneurs are accompanied and advised on setting up their own business. In 1994 the TUHH became a pioneering German university in the creation of modular courses and introduced a course with a bachelor's degree in General Engineering Science. Since 1997 nine master's degree courses and a bachelor's degree course have been added. Rankings According to the QS World University Rankings for 2024, the university is placed within the 1401-1500 range globally, and ranks between 47 and 49 at a national level. In contrast, the Times Higher Education World University Rankings for 2024 situates the institution between 501 and 600 globally, and similarly in the 42-45 range nationally. In 2020, the university ranked 92nd in the Times Higher Education Young University Rankings. Campus Hamburg University of Technology is for the most part contained in a single campus, the buildings of which lie almost entirely between the streets Eißendorfer Straße and "Am Schwarzenberg-Campus". There are some exceptions to this, however, including space in the Technologiezentrum Hamburg-Finkenwerder (Hamburg-Finkenwerder Center of Technology) and the Hamburg Innovation Port. The street Denickestraße divides the main campus into a northern and a southern half. The southern half is centered around a series of small ponds and trees, while the northern half contains a more spacious paved courtyard. University Library The library is not only used internally but also serves as a specialized technical library for the Hamburg region. Its services are also available to citizens who are not students. In addition to the basic service of providing printed media on loan or for use within the TUB HH, the library also procures documents from cooperation partners such as libraries, specialized information centres and publishers. References External links TUHH website Universities and colleges established in 1978 Buildings and structures in Harburg, Hamburg Business schools in Germany Management science 1978 establishments in West Germany Universities and colleges in Hamburg Technische Universitäten in Germany Universities in Germany
Hamburg University of Technology
Biology
1,066
28,153,186
https://en.wikipedia.org/wiki/Open%20JTAG
The Open JTAG project is an open source project released under a GNU license. It is a complete hardware and software JTAG reference design, based on simple hardware consisting of an FTDI FT245 USB front-end and an Altera EPM570 MAX II CPLD. The capabilities of this hardware configuration allow the Open JTAG device to output TCK signals at 24 MHz using macro-instructions sent from the host end. The aim is to give the community a JTAG device not based on the PC parallel port: Open JTAG uses the USB channel to communicate with the internal CPLD, sending macro-instructions as fast as possible. The complete project (beta version) is available at OpenCores.org and the Open JTAG project's official site. References Software using the GNU Lesser General Public License Embedded systems
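As an illustration of the macro-instruction idea described above, the sketch below packs host-side commands into a byte stream that could then be written to the FT245 USB FIFO. The opcode values and the [opcode][bit-count][payload] layout are assumptions made for the example; the real Open JTAG instruction set is defined by the CPLD firmware and is not reproduced here.

```python
import struct

# Hypothetical opcodes -- illustrative only, not the documented Open JTAG wire format.
OP_RESET_TAP = 0x01   # assumed: force the TAP controller to Test-Logic-Reset
OP_SHIFT_IR = 0x02    # assumed: shift bits into the JTAG instruction register
OP_SHIFT_DR = 0x03    # assumed: shift bits into the JTAG data register

def encode_macro(opcode: int, bit_count: int = 0, tdi_payload: bytes = b"") -> bytes:
    """Pack one macro-instruction as [opcode byte][16-bit big-endian bit count][TDI bytes]."""
    return struct.pack(">BH", opcode, bit_count) + tdi_payload

# Example: reset the TAP, load a 4-bit instruction, then clock out a 32-bit data register.
frame = (
    encode_macro(OP_RESET_TAP)
    + encode_macro(OP_SHIFT_IR, 4, b"\x0e")
    + encode_macro(OP_SHIFT_DR, 32, b"\x00" * 4)
)
# 'frame' would be written to the FT245 FIFO with whatever USB library the host uses.
print(frame.hex())
```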
Open JTAG
Technology,Engineering
168
10,173,410
https://en.wikipedia.org/wiki/Immunologic%20adjuvant
In immunology, an adjuvant is a substance that increases or modulates the immune response to a vaccine. The word "adjuvant" comes from the Latin word , meaning to help or aid. "An immunologic adjuvant is defined as any substance that acts to accelerate, prolong, or enhance antigen-specific immune responses when used in combination with specific vaccine antigens." In the early days of vaccine manufacture, significant variations in the efficacy of different batches of the same vaccine were correctly assumed to be caused by contamination of the reaction vessels. However, it was soon found that more scrupulous cleaning actually seemed to reduce the effectiveness of the vaccines, and some contaminants actually enhanced the immune response. There are many known adjuvants in widespread use, including aluminium salts, oils and virosomes. Overview Adjuvants in immunology are often used to modify or augment the effects of a vaccine by stimulating the immune system to respond to the vaccine more vigorously, and thus providing increased immunity to a particular disease. Adjuvants accomplish this task by mimicking specific sets of evolutionarily conserved molecules, so called pathogen-associated molecular patterns, which include liposomes, lipopolysaccharide, molecular cages for antigens, components of bacterial cell walls, and endocytosed nucleic acids such as RNA, double-stranded RNA, single-stranded DNA, and unmethylated CpG dinucleotide-containing DNA. Because immune systems have evolved to recognize these specific antigenic moieties, the presence of an adjuvant in conjunction with the vaccine can greatly increase the innate immune response to the antigen by augmenting the activities of dendritic cells, lymphocytes, and macrophages by mimicking a natural infection. Types Inorganic compounds: potassium alum, aluminium hydroxide, aluminium phosphate, calcium phosphate hydroxide Oils: paraffin oil, propolis (only in preclinical studies). Adjuvant 65 (based on peanut oil) was tested in influenza vaccines in the 1970s, but was never released commercially. The oil squalene is used in the adjuvant MF59. Bacterial products: killed bacteria of the species Bordetella pertussis, Mycobacterium bovis, toxoids. MPL (Monophosphorylated lipid A) is a modified form of a bacterial lipid A protein that is used in several vaccines. Plant saponins from Quillaia (the bark of a soap bark tree), soybean, Polygala senega Cytokines: IL-1, IL-2, IL-12 CpG oligonucleotides Combinations: Freund's complete adjuvant, Freund's incomplete adjuvant, AS01 (combining MPL and Quillaia saponins), Matrix-M (combining Quillaia saponins and two types of fat) Small molecules: TLR7/8 agonists (imidazoquinolines, imidazopyrimidines) Inorganic adjuvants Aluminium salts There are many adjuvants, some of which are inorganic, that carry the potential to augment immunogenicity. Alum was the first aluminium salt used for this purpose, but has been almost completely replaced by aluminium hydroxide and aluminium phosphate for commercial vaccines. Aluminium salts are the most commonly-used adjuvants in human vaccines. Their adjuvant activity was described in 1926. The precise mechanism of aluminium salts remains unclear but some insights have been gained. It was formerly thought that they function as delivery systems by generating depots that trap antigens at the injection site, providing a slow release that continues to stimulate the immune system. 
However, studies have shown that surgical removal of these depots had no impact on the magnitude of IgG1 response. Alum can trigger dendritic cells and other immune cells to secrete Interleukin 1 beta (IL1β), an immune signal that promotes antibody production. Alum adheres to the cell's plasma membrane and rearranges certain lipids there. Spurred into action, the dendritic cells pick up the antigen and speed to lymph nodes, where they stick tightly to a helper T cell and presumably induce an immune response. A second mechanism depends on alum killing immune cells at the injection site although researchers aren't sure exactly how alum kills these cells. It has been speculated that the dying cells release DNA which serves as an immune alarm. Some studies found that DNA from dying cells causes them to adhere more tightly to helper T cells which ultimately leads to an increased release of antibodies by B cells. No matter what the mechanism is, alum is not a perfect adjuvant because it does not work with all antigens (e.g. malaria and tuberculosis). However, recent research indicates that alum formulated in a nanoparticle form rather than microparticles can broaden the utility of alum adjuvants and promote stronger adjuvant effects. Organic adjuvants Freund's complete adjuvant is a solution of inactivated Mycobacterium tuberculosis in mineral oil developed in 1930. It is not safe enough for human use. A version without the bacteria, that is only oil in water, is known as Freund's incomplete adjuvant. It helps vaccines release antigens for a longer time. Despite the side effects, its potential benefit has led to a few clinical trials. Squalene is a naturally-occurring organic compound used in human and animal vaccines. Squalene is an oil, made up of carbon and hydrogen atoms, produced by plants and is present in many foods. Squalene is also produced by the human liver as a precursor to cholesterol and is present in human sebum. MF59 is an oil-in-water emulsion of squalene adjuvant used in some human vaccines. As of 2021, over 22 million doses of one vaccine with squalene, FLUAD, have been administered with no severe adverse effects reported. AS03 is another squalene-containing adjuvant. In addition, squalene-based O/W emulsions have also been shown to stably incorporate small molecule TLR7/8 adjuvants (e.g. PVP-037) and lead to enhanced adjuvanticity via synergism. The plant extract QS-21 is a liposome made up of two plant saponins from Quillaja saponaria, a Chilean soap bark tree. Monophosphoryl lipid A (MPL), a detoxified version of the lipopolysaccharide from the bacterium Salmonella Minnesota, interacts with the receptor TLR4 to enhance immune response. The combination of QS-21, cholesterol and MPL forms the adjuvant AS01 which is used in the Shingrix vaccine approved in 2017, as well as in the approved malaria vaccine Mosquirix. The adjuvant Matrix-M is an immune stimulating complex (ISCOM) consisting of nanospheres made of QS-21, cholesterol and phospholipids. It is used in the approved Novavax Covid-19 vaccine and in the malaria vaccine R21/Matrix-M. Several unmethylated cytosine phosphoguanosine (CpG) oligonucleotides activate the TLR9 receptor that is present in a number of cell types of the immune system. The adjuvant CpG 1018 is used in an approved Hepatitis B vaccine. 
Adaptive immune response In order to understand the links between the innate immune response and the adaptive immune response to help substantiate an adjuvant function in enhancing adaptive immune responses to the specific antigen of a vaccine, the following points should be considered: Innate immune response cells such as dendritic cells engulf pathogens through a process called phagocytosis. Dendritic cells then migrate to the lymph nodes where T cells (adaptive immune cells) wait for signals to trigger their activation. In the lymph nodes, dendritic cells mince the engulfed pathogen and then express the pathogen clippings as antigen on their cell surface by coupling them to a special receptor known as a major histocompatibility complex. T cells can then recognize these clippings and undergo a cellular transformation resulting in their own activation. γδ T cells possess characteristics of both the innate and adaptive immune responses. Macrophages can also activate T cells in a similar approach (but do not do so naturally). This process carried out by both dendritic cells and macrophages is termed antigen presentation and represents a physical link between the innate and adaptive immune responses. Upon activation, mast cells release heparin and histamine to effectively increase trafficking to and seal off the site of infection to allow immune cells of both systems to clear the area of pathogens. In addition, mast cells also release chemokines which result in the positive chemotaxis of other immune cells of both the innate and adaptive immune responses to the infected area. Due to the variety of mechanisms and links between the innate and adaptive immune response, an adjuvant-enhanced innate immune response results in an enhanced adaptive immune response. Specifically, adjuvants may exert their immune-enhancing effects according to five immune-functional activities. First, adjuvants may help in the translocation of antigens to the lymph nodes where they can be recognized by T cells. This will ultimately lead to greater T cell activity resulting in a heightened clearance of pathogen throughout the organism. Second, adjuvants may provide physical protection to antigens which grants the antigen a prolonged delivery. This means the organism will be exposed to the antigen for a longer duration, making the immune system more robust as it makes use of the additional time by upregulating the production of B and T cells needed for greater immunological memory in the adaptive immune response. Third, adjuvants may help to increase the capacity to cause local reactions at the injection site (during vaccination), inducing greater release of danger signals by chemokine releasing cells such as helper T cells and mast cells. Fourth, they may induce the release of inflammatory cytokines which helps to not only recruit B and T cells at sites of infection but also to increase transcriptional events leading to a net increase of immune cells as a whole. Finally, adjuvants are believed to increase the innate immune response to antigen by interacting with pattern recognition receptors (PRRs) on or within accessory cells. 
Toll-like receptors The ability of the immune system to recognize molecules that are broadly shared by pathogens is, in part, due to the presence of immune receptors called toll-like receptors (TLRs) that are expressed on the membranes of leukocytes including dendritic cells, macrophages, natural killer cells, cells of adaptive immunity (T and B lymphocytes) and non-immune cells (epithelial and endothelial cells, and fibroblasts). The binding of ligands to TLRs, whether in the form of adjuvants used in vaccinations or of invasive moieties during natural infection, marks the key molecular events that ultimately lead to innate immune responses and the development of antigen-specific acquired immunity. As of 2016, several TLR ligands were in clinical development or being tested in animal models as potential adjuvants. Medical complications Humans Aluminium salts used in many human vaccines are regarded as safe by the Food and Drug Administration. Although there are studies suggesting a role for aluminium, especially injected, highly bioavailable antigen-aluminium complexes used as adjuvants, in the development of Alzheimer's disease, the majority of researchers do not support a causal connection with aluminium. Adjuvants may make vaccines too reactogenic, which often leads to fever. This is often an expected outcome upon vaccination and is usually controlled in infants by over-the-counter medication if necessary. An increased number of narcolepsy (a chronic sleep disorder) cases in children and adolescents was observed in Scandinavian and other European countries after vaccinations to address the H1N1 "swine flu" pandemic in 2009. Narcolepsy has previously been associated with HLA subtype DQB1*602, which has led to the prediction that it is an autoimmune process. After a series of epidemiological investigations, researchers found that the higher incidence correlated with the use of the AS03-adjuvanted influenza vaccine (Pandemrix). Those vaccinated with Pandemrix have an almost twelve-fold higher risk of developing the disease. The vaccine's adjuvant contained an amount of vitamin E no greater than a day's normal dietary intake. Vitamin E increases hypocretin-specific fragments that bind to DQB1*602 in cell culture experiments, leading to the hypothesis that autoimmunity may arise in genetically susceptible individuals, but there is no clinical data to support this hypothesis. The third AS03 ingredient is polysorbate 80. Polysorbate 80 is also found in both the Oxford–AstraZeneca and Janssen COVID-19 vaccines. Animals Aluminium adjuvants have caused motor neuron death in mice when injected directly onto the spine at the scruff of the neck, and oil–water suspensions have been reported to increase the risk of autoimmune disease in mice. Squalene has caused rheumatoid arthritis in rats already prone to arthritis. In cats, vaccine-associated sarcoma (VAS) occurs at a rate of 1–10 per 10,000 injections. In 1993, a causal relationship between VAS and administration of aluminium-adjuvanted rabies and FeLV vaccines was established through epidemiologic methods, and in 1996 the Vaccine-Associated Feline Sarcoma Task Force was formed to address the problem. However, evidence conflicts on whether particular vaccine types, manufacturers or other factors have been associated with sarcomas.
Controversy TLR signaling The premise that TLR signaling acts as the key node in antigen-mediated inflammatory responses has been called into question, as researchers have observed antigen-mediated inflammatory responses in leukocytes in the absence of TLR signaling. One research group found that, in the absence of MyD88 and Trif (essential adapter proteins in TLR signaling), it was still possible to induce inflammatory responses, increase T cell activation and generate greater B cell abundance using conventional adjuvants (alum, Freund's complete adjuvant, Freund's incomplete adjuvant, and monophosphoryl lipid A/trehalose dicorynomycolate (Ribi's adjuvant)). These observations suggest that although TLR activation can lead to increases in antibody responses, TLR activation is not required to induce enhanced innate and adaptive responses to antigens. Investigating the mechanisms which underlie TLR signaling has been significant in understanding why adjuvants used during vaccinations are so important in augmenting adaptive immune responses to specific antigens. However, with the knowledge that TLR activation is not required for the immune-enhancing effects caused by common adjuvants, we can conclude that there are, in all likelihood, other receptors besides TLRs that have not yet been characterized, opening the door to future research. Safety Reports after the first Gulf War linked anthrax vaccine adjuvants to Gulf War syndrome in American and British troops. The United States Department of Defense strongly denied the claims. Discussing the safety of squalene as an adjuvant in 2006, the World Health Organisation stated that "follow-up to detect any vaccine-related adverse events will need to be performed." No such follow-up has been published by the WHO. Subsequently, the American National Center for Biotechnology Information published an article discussing the comparative safety of vaccine adjuvants which stated that "the biggest remaining challenge in the adjuvant field is to decipher the potential relationship between adjuvants and rare vaccine adverse reactions, such as narcolepsy, macrophagic myofasciitis or Alzheimer's disease." See also Beta-glucan Immunomodulator Immunostimulant Pharmaceutic adjuvant References External links Adjuvant therapy Animal research Vaxjo database Adjuvants Immunology
Immunologic adjuvant
Biology
3,445
5,008,526
https://en.wikipedia.org/wiki/Topicity
In stereochemistry, topicity is the stereochemical relationship between substituents and the structure to which they are attached. Depending on the relationship, such groups can be heterotopic, homotopic, enantiotopic, or diastereotopic. Homotopic Homotopic groups in a chemical compound are equivalent groups. Two groups A and B are homotopic if the molecule remains achiral when the groups are interchanged with some other atom (such as bromine) while the remaining parts of the molecule stay fixed. Homotopic atoms are always identical, in any environment. Homotopic NMR-active nuclei have the same chemical shift in an NMR spectrum. For example, the four hydrogen atoms of methane (CH4) are homotopic with one another, as are the two hydrogens or the two chlorines in dichloromethane (CH2Cl2). Enantiotopic The stereochemical term enantiotopic refers to the relationship between two groups in a molecule which, if one or the other were replaced, would generate a chiral compound. The two possible compounds resulting from that replacement would be enantiomers. For example, the two hydrogen atoms attached to the second carbon in butane are enantiotopic. Replacement of one hydrogen atom (colored blue) with a bromine atom will produce (R)-2-bromobutane. Replacement of the other hydrogen atom (colored red) with a bromine atom will produce the enantiomer (S)-2-bromobutane. Enantiotopic groups are identical and indistinguishable except in chiral environments. For instance, the CH2 hydrogens in ethanol (CH3CH2OH) are normally enantiotopic, but can be made different (diastereotopic) if combined with a chiral center, for instance by conversion to an ester of a chiral carboxylic acid such as lactic acid, or if coordinated to a chiral metal center, or if associated with an enzyme active site, since enzymes are constituted of chiral amino acids. Indeed, in the presence of the enzyme LADH, one specific hydrogen is removed from the CH2 group during the oxidation of ethanol to acetaldehyde, and it gets replaced in the same place during the reverse reaction. The chiral environment needs not be optically pure for this effect. Enantiotopic groups are mirror images of each other about an internal plane of symmetry. A chiral environment removes that symmetry. Enantiotopic pairs of NMR-active nuclei are also indistinguishable by NMR and produce a single signal. Enantiotopic groups need not be attached to the same atom. For example, two hydrogen atoms adjacent to the carbonyl group in cis-2,6-dimethylcyclohexanone are enantiotopic; they are related by an internal plane of symmetry passing through the carbonyl group, but deprotonation on one side of the carbonyl group or on the other will generate compounds that are enantiomers. Similarly, the replacement of one or the other with deuterium will generate enantiomers. Diastereotopic The stereochemical term diastereotopic refers to the relationship between two groups in a molecule which, if replaced, would generate compounds that are diastereomers. Diastereotopic groups are often, but not always, identical groups attached to the same atom in a molecule containing at least one chiral center. For example, the two hydrogen atoms of the CH2 moiety in (S)-2-bromobutane are diastereotopic. Replacement of one hydrogen atom (colored blue) with a bromine atom will produce (2S,3R)-2,3-dibromobutane. Replacement of the other hydrogen atom (colored red) with a bromine atom will produce the diastereomer (2S,3S)-2,3-dibromobutane. 
In chiral molecules containing diastereotopic groups, such as in 2-bromobutane, there is no requirement for enantiomeric or optical purity; no matter its proportion, each enantiomer will generate enantiomeric sets of diastereomers upon substitution of diastereotopic groups (though, as in the case of substitution by bromine in 2-bromobutane, meso isomers have, strictly speaking, no enantiomer). Diastereotopic groups are not mirror images of one another about any plane. They are always different, in any environment, but may not be distinguishable. For instance, both pairs of CH2 hydrogens in ethyl phenylalaninate hydrochloride (PhCH2CH(NH3+)COOCH2CH3 Cl−) are diastereotopic and both give pairs of distinct 1H-NMR signals in DMSO-d6 at 300 MHz, but in the similar ethyl 2-nitrobutanoate (CH3CH2CH(NO2)COOCH2CH3), only the CH2 group next to the chiral center gives distinct signals from its two hydrogens with the same instrument in CDCl3. Such signals are often complex because of small differences in chemical shift, overlap and an additional strong coupling between geminal hydrogens. On the other hand, the two CH3 groups of ipsenol, which are three bonds away from the chiral center, give separate 1H doublets at 300 MHz and separate 13C-NMR signals in CDCl3, but the diastereotopic hydrogens in ethyl alaninate hydrochloride (CH3CH(NH3+)COOCH2CH3 Cl−), also three bonds away from the chiral center, show barely distinguishable 1H-NMR signals in DMSO-d6. Diastereotopic groups also arise in achiral molecules. For instance, any one pair of CH2 hydrogens in 3-pentanol (Figure 1) are diastereotopic, as the two CH2 carbons are enantiotopic. Substitution of any one of the four CH2 hydrogens creates two chiral centers at once, and the two possible hydrogen substitution products at any one CH2 carbon will be diastereomers. This kind of relationship is often easier to detect in cyclic molecules. For instance, any pair of CH2 hydrogens in cyclopentanol (Figure 2) are similarly diastereotopic, and this is easily discerned as one of the hydrogens in the pair will be cis to the OH group (on the same side of the ring face) while the other will be trans to it (on the opposite side). The term diastereotopic is also applied to identical groups attached to the same end of an alkene moiety which, if replaced, would generate geometric isomers (also falling in the category of diastereomers). Thus, the CH2 hydrogens of propene are diastereotopic, one being cis to the CH3 group, and the other being trans to it, and replacement of one or the other with CH3 would generate cis- or trans--2-butene. Diastereotopicity is not limited to organic molecules, nor to groups attached to carbon, nor to molecules with chiral tetrahedral (sp3-hybridized) centers: for instance, the pair of hydrogens in any CH2 or NH2 group in tris(ethylenediamine)chromium(III) ion (Cr(en)33+), where the metal center is chiral, are diastereotopic (Figure 2). The terms enantiotopic and diastereotopic can also be applied to the faces of planar groups (especially carbonyl groups and alkene moieties). See Cahn-Ingold-Prelog priority rule. Heterotopic Heterotopic groups are those that when substituted are structurally different. They are neither diastereotopic or enantiotopic nor homotopic. See also Prochiral conformational analysis References Stereochemistry Nuclear magnetic resonance
Topicity
Physics,Chemistry
1,751
537,271
https://en.wikipedia.org/wiki/ANIM
ANIM is a file format, used to store digital movies and computer generated animations (hence the ANIM name), and is a variation of the ILBM format, which is a subformat of Interchange File Format. Main Features Anim FileTypes Known filetypes for Anim into AmigaOS are: Anim1, Anim2, Anim3, Anim5 and Anim7. Anim1 to Anim3 did not support audio. Anim 5 and Anim7 should be able to contain Audio Data, being a complete movie animation file format. Additions to IFF Standard In addition to the normal ILBM chunks, ANIM filetype also defines: ANHD (ANimation HeaDer) DLTA - stores changes between frames, with various compression methods supported to make use of the redundancy between frames. Compression modes: ANIM-0 ILBM BODY (no delta compression) ANIM-1 ILBM XOR ANIM-2 Long Delta mode ANIM-3 Short Delta mode ANIM-4 General Delta mode ANIM-5 Byte Vertical Delta mode (most common) ANIM-6 Stereo Byte Delta mode (stereoscopic frames) ANIM-7 Anim-5 compression using LONG/WORD data ANIM-8 Anim-5 compression using LONG/WORD data ANIM-J Eric Grahams compression format (Sculpt 3D / Sculpt 4D) It is possible to have several compression modes inside a file. History The ANIM IFF format was developed in 1988 at Sparta Inc., a firm based in California, originally for the production of animated video sequences on the Amiga computer, and was used for the first time in Aegis Development's Videoscape and Video Titler programs for the Amiga line of computers. Being very efficient and an official subset of existing Amiga ILBM/IFF standard file format, it became the de facto standard for animation files on the Amiga. The file format must have these characteristics: Be able to store, and playback, sequences of frames and to minimize both the storage space on disk (through compression) and playback time (through efficient de-compression algorithms). Maintain maximum compatibility with existing IFF formats and to be able to display the initial frame as a normal still IFF picture. Several compression schemes have been introduced in the ANIM format. Most of these are strictly of historical interest, as the only one currently used is the vertical run length encoded byte encoding developed by Atari software programmer Jim Kent. Amiga Anim7 format was created in 1992 by programmer Wolfgang Hofer. A video file format originally created for the Commodore CDTV, and later adapted for the Amiga CD32, was called CDXL and was similar to the ANIM file format. The ANIM format is supported by at least one current online image editor. Technical Overview A minimum Anim file consists of three ILBM interleaved bitmap images. The first bitmap is a full image, necessary for the creation of the "next" frame whilst the other two are "delta" images, calculated as differences from the first one. The initial frame is a normal run-length-encoded, IFF picture, and this allows a preview of the contents of the file. Subsequent frames are then described by listing only their differences from a previous frame. While the first frame is displayed, the subsequent frames are loaded into a buffer in graphics memory. The Amiga switches between the screens almost instantaneously while loading further frames using the blitter. Utilising its DMA capabilities, the graphics chipset could access memory without interrupting the CPU. This technique is called double buffering. To better understand this, suppose one has two screens, called A and B, with the ability to instantly switch the display from one to the other. The initial frame is loaded in to screen A and B. 
Screen A is displayed. The differences between frame 1 and frame 2 are calculated and altered in screen B, which is then displayed. Then the differences from this and frame 3 are used to alter screen A, which is then displayed, and so on. Note that frame 2 is stored as differences from frame 1, but all other frames are stored as differences from two frames back. ANIM is an IFF FORM and its chunk structure is as follows: FORM ANIM *FORM ILBM (first frame) **BMHD (normal type IFF data) **ANHD (optional animation header chunk for timing of 1st frame) **CMAP (Colormap) **BODY *FORM ILBM (frame 2) **ANHD (animation header chunk) **DLTA (delta mode data) *FORM ILBM (frame 3) **ANHD **DLTA (And so on...) The initial FORM ILBM can contain all the normal ILBM chunks, such as CRNG, etc. The BODY will normally be a standard run-length-encoded data chunk (but also any other legal compression mode as indicated by the BMHD). If desired, an ANHD chunk can appear here to provide timing data for the first frame. If it is here, the operation field should be =0. The subsequent FORMs ILBM contain an ANHD, instead of a BMHD, which duplicates some of BMHD and has additional parameters pertaining to the animation frame. The DLTA chunk contains the data for the delta compression modes. If the older XOR compression mode is used, then a BODY chunk will be placed here. In addition, other chunks may be placed in each of these as deemed necessary (and as code is placed in player programs to utilize them). For example, the CMAP chunks to alter the color palette. A basic assumption in ANIMs is that the size of the bitmap, and the display mode (e.g. HAM) will not change through the animation. The DLTA chunks are not interleaved bitmap representations, thus the use of the ILBM form is inappropriate for these frames. However, this inconsistency was not noted until there were a number of commercial products either released or close to release which generated/played this format. Compression methods used in Anim format Anim format allow five methods of compression: XOR mode, Long Delta mode, Short Delta mode, General Delta mode and Byte Vertical Compression. Playing ANIM files Playback of ANIMs will usually require two buffers, as mentioned above, and double-buffering between them. The frame data from the ANIM file is used to modify the hidden frame to the next frame to be shown. When using the XOR mode, the usual run-length-decoding routine can be easily modified to do the exclusive-or operation required. Note that runs of zero bytes, which will be very common, can be ignored, as an exclusive or of any byte value to a byte of zero will not alter the original byte value. The general procedure, for all compression techniques, is to first decode the initial ILBM picture into the hidden buffer and double buffer it into view. Then this picture is copied to the other (now hidden) buffer. At this point each frame is displayed with the same procedure. The next frame is formed in the hidden buffer by applying the DLTA data (or the XOR data from the BODY chunk) and the new frame is double-buffered into view. This process continues to the end of the file. Influences of ANIM on other Animation filetypes The Anim standard of Amiga influenced the development of Animated GIF format. References Graphics file formats Amiga Graphics standards Computer file formats Digital container formats Computer files Film and video technology
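To make the nested FORM structure described above concrete, the sketch below walks the chunk tree of an IFF file such as an ANIM and prints chunk IDs and sizes (each IFF chunk is a 4-byte ID, a 32-bit big-endian length, the data, and a pad byte when the length is odd). It is a minimal reader only: a real player would also decode the BMHD/ANHD headers and apply the DLTA deltas to the hidden buffer, and the file name in the usage line is hypothetical.

```python
import struct

def walk_iff(data: bytes, offset: int = 0, end: int = None, depth: int = 0) -> None:
    """Recursively list the chunks of an IFF file (e.g. FORM ANIM / FORM ILBM / ANHD / DLTA)."""
    if end is None:
        end = len(data)
    while offset + 8 <= end:
        cid, size = struct.unpack_from(">4sI", data, offset)   # 4-byte ID, big-endian length
        name = cid.decode("ascii", errors="replace")
        body_start = offset + 8
        if name in ("FORM", "LIST", "CAT "):
            # container chunks carry a 4-byte type (ANIM, ILBM, ...) followed by child chunks
            form_type = data[body_start:body_start + 4].decode("ascii", errors="replace")
            print("  " * depth + f"{name} {form_type} ({size} bytes)")
            walk_iff(data, body_start + 4, body_start + size, depth + 1)
        else:
            print("  " * depth + f"{name} ({size} bytes)")
        offset = body_start + size + (size & 1)  # chunks are padded to an even length

# Hypothetical usage:
# walk_iff(open("movie.anim", "rb").read())
```

Run on a typical ANIM this would print FORM ANIM at the top level, then one FORM ILBM per frame: BMHD, CMAP and BODY for the first frame, and ANHD plus DLTA for the subsequent ones, matching the chunk layout listed above.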
ANIM
Technology
1,564
2,661,582
https://en.wikipedia.org/wiki/Moscow%20State%20Mining%20University
Moscow State Mining University is a Russian institute of higher education that prepares mining engineers. In 2014, the university merged with the National University of Science and Technology MISiS and became a part of it as the Moscow Mining Institute (College of Mining). History Its history can be traced back to September 4, 1918, when the Moscow Mining Academy was founded. During the industrialization period, the USSR set itself the task of training 435,000 engineers and technicians in the five years from 1930 to 1935, whereas their number in 1929 was only 66,000. In 1930 the Moscow Mining Academy was divided into six independent institutes by order of the Supreme Soviet of the National Economy. Among the new colleges which grew out of the Academy's departments was the Moscow Mining Institute. In 1993 the Institute was transformed into the State University of Mining. Since 2014, the university has been part of the National University of Science and Technology MISiS. Education A multi-level structure of higher education has been introduced at the University. The first four years of University study are known as undergraduate study and usually lead to the Bachelor of Science (B.Sc.) degree. All bachelor's programs include general education in science and engineering, social sciences, arts, and a field of specialization called the major. The second level, taking five years together with the first level, may lead to the diploma of a chartered mining engineer. The objective of the program is to give high-level specialized training to engineers. A professionally qualified mining engineer must be a graduate proficient in technical management as well as in the practical knowledge of actual mining operations. Students who have excelled as undergraduates may wish to continue their education at the graduate level (the third level). Upon conclusion of two additional years at the University, the student will be awarded the Master of Science (M.Sc.) degree. Calendar The academic year is divided into 17-week terms called autumn and spring semesters. Towards the end of the second year the student is expected to select a field of specialization. After graduating from the University students may specialize in underground or surface mining, geology and surveying, mineral processing, mining economics and management, ecology and environmental engineering, computing and computer programming, etc. Departments Six faculties (departments) of the University are currently training over 5,000 undergraduate students and about 300 postgraduates. The faculties of the University are: The Faculty of Coal Mining and Underground Construction The Faculty of Ore and Non-Ore Mining The Faculty of Physical Engineering The Faculty of Mining Electrical Mechanics The Faculty of Automation and Computer Science The Faculty of Evening and Correspondence Education. The University has at its disposal research and laboratory facilities, automation and computer systems, recreation centers, hostels and sports facilities. There is a Lyceum for high-school students (grades 9-10) and a preparatory department. Moscow State University of Mining also undertakes commissioned research and development. The University has a publishing house of its own. There is a military training department. Categories of graduates: Bachelor of Science, Mining Engineer, Master of Science, Candidate of Science, Doctor of Science.
Specialties: computer-aided information and control systems; blasting; mining machinery and equipment; environmental engineering; surveying; management; mineral processing; open pit mining; underground mining; computer-aided design systems; mechanical engineering; technology of artistic decoration of engineering materials; control and information in engineering systems; physical processes of mining production; construction of mines and underground structures; economy and management of mines and geological prospecting enterprises; economy of nature management; electrical engineering and automation of industrial installations and technological systems; power supply of mining enterprises. Notable faculty Vadym Kopylov (b. 1958), Ukrainian statesman Georgi Mondzolevski (b. 1934), Soviet Olympic and world champion volleyball player Alexander Pavlovitch Serebrovsky (1884-1938), reformer of the Soviet gold mining industry Notable alumni Dmytro Salamatin - Ukrainian politician Trần Hồng Hà - Vietnamese Deputy Prime Minister References External links Official website Schools of mines Moscow State Mining University
Moscow State Mining University
Engineering
813
6,591,195
https://en.wikipedia.org/wiki/Andrew%20Donald%20Booth
Andrew Donald Booth (11 February 1918 – 29 November 2009) was a British electrical engineer, physicist and computer scientist, who was an early developer of the magnetic drum memory for computers. He is known for Booth's multiplication algorithm. In his later career in Canada he became president of Lakehead University. Early life Andrew Donald Booth was born on 11 February 1918 in East Molesey, Surrey, UK. He was the son of Sidney Booth (died 1955) and a cousin of Sir Felix Booth. He was raised in Weybridge, Surrey, and educated at Haberdashers' Aske's Boys' School. In 1937, he won a scholarship to read mathematics at Jesus College, Cambridge. Booth left Cambridge without taking a degree, having become disaffected with pure mathematics as a subject. He chose an external degree from the University of London instead, which he obtained with a first. Career From 1943 to 1945, Booth worked as a mathematical physicist in the X-ray team at the British Rubber Producers' Research Association (BRPRA), Welwyn Garden City, Hertfordshire, gaining his PhD in crystallography from the University of Birmingham in 1944. In 1945, he moved to Birkbeck College, University of London, where his work in the crystallography group led him to build some of the first electronic computers in the United Kingdom including the All Purpose Electronic Computer, first installed at the British Rayon Research Association. Booth founded Birkbeck's department of numerical automation and was named a fellow at the university in 2004. He also did early pioneering work in machine translation. After World War II, he carried out crystallographic research at Birkbeck College and constructed a Fourier synthesis device. He was then introduced to the work of Alan Turing and John von Neumann on logical automata by Douglas Hartree. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Dr. Booth served as President of Lakehead University from 1972 to 1978. Personal life Booth married mathematician and computer engineer Kathleen Britten in 1950, and had two children, Amanda and Ian; between 1947 and 1953, together they produced three computing machines. See also Booth's multiplication algorithm Bibliography Booth, A.D. and Britten, K.H.V. (1947) Coding for A.R.C., Institute for Advanced Study, Princeton Booth, A.D. and Britten, K.H.V. (1947) General considerations in the design of an all-purpose electronic digital computer, Institute for Advanced Study, Princeton Booth, A.D. and Britten, K.H.V. (1948) The accuracy of atomic co-ordinates derived from Fourier series in X-ray crystallography Part V, Proc. Roy. Soc. Vol A 193, pp. 305–310 The Electronic Principles of Digital Computers, Electronics Forum (1948) Booth, A.D. (1949) A Magnetic Digital Storage System, Electronic Engineering Booth, A.D. 
(1950) The Physical Realization of An Electronic Digital Computer, Electronic Engineering Booth, A.D. (1952) On Optimum Relations Between Circuit Elements and Logical Symbols in the Design of Electronic Calculators, Journal of British Institution of Radio Engineers Booth, A.D. and Booth K.H.V. (1953) Automatic Digital Calculators, Butterworth-Heinmann (Academic Press) London References External links The APEXC driver page Principles and Progress in the Construction of High-Speed Digital Computers Andrew Booth Collection, University of Manchester Library. 1918 births 2009 deaths People educated at Haberdashers' Boys' School Academics of Birkbeck, University of London Alumni of Jesus College, Cambridge Alumni of the University of Birmingham Alumni of the University of London British electrical engineers British computer scientists Computer designers History of computing in the United Kingdom Academic staff of Lakehead University British expatriate academics in Canada Canadian university and college chief executives
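Booth's multiplication algorithm, mentioned above, multiplies two signed two's-complement numbers by scanning adjacent multiplier bits and adding, subtracting, or skipping the multiplicand before each arithmetic right shift. The sketch below is a minimal illustrative Python implementation of that idea; the register names (A, S, P) and the fixed word width are choices made for the example, not notation from Booth's papers.

```python
def booth_multiply(m: int, r: int, bits: int = 8) -> int:
    """Multiply two signed integers of width 'bits' using Booth's algorithm."""
    mask = (1 << bits) - 1
    m &= mask                         # multiplicand, two's complement
    r &= mask                         # multiplier, two's complement
    A = m << (bits + 1)               # multiplicand in the high bits
    S = ((-m) & mask) << (bits + 1)   # negated multiplicand in the high bits
    P = r << 1                        # multiplier with an appended 0 bit
    total = 2 * bits + 1
    tmask = (1 << total) - 1
    for _ in range(bits):
        low2 = P & 0b11
        if low2 == 0b01:              # bit pair 01 -> add the multiplicand
            P = (P + A) & tmask
        elif low2 == 0b10:            # bit pair 10 -> subtract the multiplicand
            P = (P + S) & tmask
        # bit pairs 00 and 11 -> no arithmetic, shift only
        sign = P >> (total - 1)       # arithmetic right shift within 'total' bits
        P = (P >> 1) | (sign << (total - 1))
    P >>= 1                           # drop the appended low bit
    if P >> (2 * bits - 1):           # reinterpret the 2*bits result as signed
        P -= 1 << (2 * bits)
    return P

print(booth_multiply(-3, 5))   # -15
print(booth_multiply(3, -4))   # -12
```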
Andrew Donald Booth
Technology
931
14,674,081
https://en.wikipedia.org/wiki/Centrin
Centrins, also known as caltractins, are a family of calcium-binding phosphoproteins found in the centrosome of eukaryotes. Centrins are small calcium-binding proteins that are ubiquitous centrosome components. They are among the approximately 350 "signature" proteins that are unique to eukaryotic cells and have no significant homology to proteins in archaea and bacteria. They are essential proteins present in almost all eukaryotic cells and are found in the centrioles and the pericentriolar lattice. The human centrin genes are CETN1, CETN2 and CETN3. Humans and mice have three centrin genes: Cetn-1, which is typically only expressed in male germ cells, and Cetn-2 and Cetn-3, which are typically only expressed in somatic cells. Recombinant GFP-tagged centrin-2 localizes to centrioles throughout the cell cycle, while centrin-3 appears to associate with the pericentriolar material that surrounds the centrioles. History Centrin was first isolated and characterized from the flagellar roots of the green alga Tetraselmis striata in 1984. Jeffrey Salisbury, who discovered centrin in green algae, and his colleagues used RNA interference (RNAi) to reduce the levels of centrin-2 in human tissue culture cells. RNAi knockdown of centrin-2 in HeLa cells led to a progressive loss of centrioles, consistent with a complete block of centriole replication. This showed that centrin is involved in centriole duplication in animal cells, as had been seen in his previous work with algae, and implied that the centriole's requirement for centrin is absolute in both plants and animals. Function Centrins are required for the duplication of centrioles. They may also play a role in the severing of microtubules by causing calcium-mediated contraction. Centrin was found to be essential in calcium-channel metabolism; it has a high affinity for calcium and a much lower affinity for phosphorus and other cellular mineral constituents. Centrins show calcium-sensitive contractile behavior and were identified earlier as calcium-sensing regulators of centriole structure. Centrin is one of the first proteins to localize at sites of newly forming centrioles in both semiconservative and de novo assembly pathways. In algae, ciliates, and lower land plants, centrioles fail to duplicate when centrin is mutated, deleted, or knocked down by RNAi, because centrin is a key factor in the structural integrity of centrioles. Studies of experimental ablation of centrin synthesis in the alga Chlamydomonas and the cryptogamous water fern Marsilea indicate a key role for centrin in centriole biogenesis. Centrins facilitate the duplication of centrioles and the severing of microtubules by calcium-mediated contraction. Centrin was also found to be highly concentrated outside of the centrosome; much of it was non-centrosomal and assembled during meiosis II. The function of this extra-centrosomal material is not yet fully understood, but cross-linking experiments found that centrin has an affinity for actin and for the terminal portion of the HC. Immunoprecipitation assays are needed in order to confirm this. Structure Centrin belongs to the EF-hand superfamily of calcium-binding proteins and has four calcium-binding EF-hands. It has a molecular weight of 20 kDa. Centrins contain four helix-loop-helix motifs that bind calcium in the transitional region of the axoneme.
The axoneme is the bridge between the nucleus and the basal body, where the proximal and distal fibers connect the two basal bodies. Centrin is also present in the set of fibers that connect the microtubule blades. Studies of higher eukaryotic cells, such as human cells, showed that centrins are universal centrosome proteins occurring in the fibers that link centrioles to one another and to the distal-most core structure called the "transition zone". See also Centriole Centrosome References Protein families
Centrin
Biology
892
1,810,909
https://en.wikipedia.org/wiki/Drag-reducing%20aerospike
A drag-reducing aerospike is a device (see nose cone design) used to reduce the forebody pressure drag of blunt bodies at supersonic speeds. The aerospike creates a detached shock ahead of the body. Between the shock and the forebody a zone of recirculating flow occurs which acts like a more streamlined forebody profile, reducing the drag. Development This concept was used on the UGM-96 Trident I and is estimated to have increased its range by 550 km. The Trident aerospike consists of a flat circular plate mounted on an extensible boom which is deployed shortly after the missile breaks through the surface of the water after launch from the submarine. The use of the aerospike allowed a much blunter nose shape, providing increased internal volume for payload and propulsion without increasing the drag. This was required because the Trident I C-4 was fitted with a third propulsion stage to achieve the desired increase in range over the Poseidon C-3 missile it replaced. To fit within the existing submarine launch tubes, the third-stage motor had to be mounted in the center of the post-boost vehicle with the reentry vehicles arranged around the motor. At the same time (the mid-1970s) an aerospike was developed at KB Mashinostroyeniya (KBM) for the 9M39 surface-to-air missile of the 9K38 Igla MANPADS (in order to diminish heating of the infrared homing seeker fairing and to reduce wave drag), giving its name to the whole system (igla is Russian for 'needle'). A simplified Igla-1 version with a different kind of target seeker featured a tripod instead of a 'needle' for the same purpose. Further development of this concept has resulted in the "air-spike". This is formed by concentrated energy, either from an electric arc torch or a pulsed laser, projected forwards from the body, which produces a region of low-density hot air ahead of the body. In 1995, at the 33rd Aerospace Sciences Meeting, it was reported that tests were performed with an aerospike-protected missile dome up to Mach 6, obtaining quantitative surface pressure and temperature-rise data on the feasibility of using aerospikes on hypersonic missiles. Missiles with aerospikes USSR 9K38 Igla (MANPADS) US UGM-96 Trident I UGM-133 Trident II France M51 (missile) See also Index of aviation articles References External links American Institute of Aeronautics and Astronautics National Aeronautics and Space Administration Progress in Flight Physics Drag (physics) Aircraft components
Drag-reducing aerospike
Chemistry
526
40,397,328
https://en.wikipedia.org/wiki/IUPAC%20Inorganic%20Chemistry%20Division
The Inorganic Chemistry Division of the International Union of Pure and Applied Chemistry (IUPAC), also known as Division II, deals with all aspects of inorganic chemistry, including materials and bioinorganic chemistry, and also with isotopes, atomic weights and the periodic table. It furthermore advises the Chemical Nomenclature and Structure Representation Division (Division VIII) on issues dealing with inorganic compounds and materials. For the general public, the most visible result of the division's work is that it evaluates and advises the IUPAC on names and symbols proposed for new elements that have been approved for addition to the periodic table. For the scientific and educational community the work on isotopic abundances and atomic weights is of fundamental importance, as these numbers are continuously checked and updated. Subcommittees The division has the following subcommittees and commissions: Subcommittee on Isotopic Abundance Measurements Interdivisional Subcommittee on Materials Chemistry Subcommittee on Stable Isotope Reference Material Assessment Commission on Isotopic Abundances and Atomic Weights (CIAAW) Running Projects List of Running Projects of IUPAC Division II Recommendations for Isotope Data in Geosciences Priority claims for the discovery of elements with atomic number greater than 111 Evaluation of Isotopic Abundance Variations in Selected Heavier Elements Evaluated Compilation of International Reference Materials for Isotope Abundance Measurements Development of an Isotopic Periodic Table for the Educational Community Towards a comprehensive definition of oxidation state Coordination polymers and metal organic frameworks: nomenclature guidelines Evaluation of Radiogenic Abundance Variations in Selected Elements Technical Guidelines for Isotope Abundances and Atomic Weight Measurements Assessment of Stable Isotopic Reference and Inter-Comparison Materials Online evaluated isotope ratio database for user communities (2011-2014) Evaluated Published Isotope Ratio Data (2010-2011) Guidelines for Measurement of Luminescence Spectra and Quantum Yields of Inorganic Compounds, Metal Complexes and Materials Terminology and definition of quantities related to the isotope distribution in elements with more than two stable isotopes Evaluated published isotope ratio data (2011-2013) Evaluation of published lead isotopic data (1950-2013) for a new standard atomic weight of lead Development of a procedure for using intervals instead of fixed values for atomic weights: an educational exercise Former projects and other notable activities The Inorganic Chemistry Division was a partner in the 2011 Global Chemistry Experiment “Water: A Chemical Solution” that took place during the International Year of Chemistry. Notable former division members Mary L. Good, former president Norman Greenwood, former president Edward Wichers, president 1955–1957 See also Chemical nomenclature Commission on Isotopic Abundances and Atomic Weights References Chemical nomenclature Chemistry organizations International scientific organizations Standards organisations in Switzerland
IUPAC Inorganic Chemistry Division
Chemistry
511
57,385,342
https://en.wikipedia.org/wiki/Pako%202
Pako 2 is a car chase driving and shooting game developed and published by Tree Men Games. Pako 2 was released on November 16, 2017 for Microsoft Windows and macOS on Steam, and later released on Android and iOS on January 31, 2018. In summer 2024 an unexpected game update added Linux support on Steam, along with other features. Pako 2 is a sequel to Pako - Car Chase Simulator, released in May 2014. In the first game the vehicle is only controllable by steering left or right, without a gas pedal or brakes. Gameplay In Pako 2, the player drives a range of cars in selected levels. The objective of the player is to deliver different groups of robbers to their designated locations. This grants the player various bonus items (perks) that make the player more powerful during a game run. The longer the player survives, the tougher the cops will be, and the more of them will appear. The player also has the option of performing a drive-by shooting at police cars while on the run. Once the player's car bumps into any obstacle or is shot, they will lose health. Once that hits zero, the player dies. The game can be played in a top-down bird's-eye view or alternatively in a third-person view. Gameplay varies between the different versions of the game. In the PC version, for example, the player has the goal of completing a certain number of objectives, but has the option to escape. Escaping gives the player the full money reward, while dying results in a penalty on the overall money. The PC version also includes a "deck building" mechanic, whereby the player can choose the order in which power-ups appear, as well as the ability to upgrade the car and the option to aim the weapon, giving the game "tank controls". There are no in-app purchases in the Windows/Mac/Linux/iOS versions. In the Android version of the game, in-game credits or real money (through in-app purchases) can be used to buy a variety of different vehicles, maps and power-ups in one's garage. Reception Pako 2 has received a score of 80 out of 100 on Metacritic. References External links 2017 video games Windows games MacOS games Linux games IOS games Android (operating system) games Driving simulators Single-player video games
Pako 2
Technology
475
23,464,858
https://en.wikipedia.org/wiki/List%20of%20British%20bingo%20nicknames
This is a list of British bingo nicknames. In the game of bingo in the United Kingdom, callers announcing the numbers have traditionally used some nicknames to refer to particular numbers if they are drawn. The nicknames are sometimes known by the rhyming phrase 'bingo lingo' and there are rhymes for each number from 1 to 90, some of which date back many decades. In some clubs, the 'bingo caller' will say the number, with the assembled players intoning the rhyme in a call and response manner, in others, the caller will say the rhyme and the players chant the number. One purpose of the nicknames is to allow called numbers to be clearly understood in a noisy environment. In 2003, Butlins holiday camps introduced some more modern calls devised by a Professor of Popular Culture in an attempt to bring fresh interest to bingo. Calls References Citations Sources Bingo Lists of slang Bingo,British Bingo Nicknames
List of British bingo nicknames
Mathematics
188
9,480,763
https://en.wikipedia.org/wiki/Argon%20flash
Argon flash, also known as argon bomb, argon flash bomb, argon candle, and argon light source, is a single-use source of very short and extremely bright flashes of light. The light is generated by a shock wave in argon or, less commonly, another noble gas. The shock wave is usually produced by an explosion. Argon flash devices are almost exclusively used for photographing explosions and shock waves. Although krypton and xenon can also be used, argon is favored because of its low cost. Process The light generated by an explosion is produced primarily by compression heating of the surrounding air. Replacement of the air with a noble gas considerably increases the light output; with molecular gases, the energy is consumed partially by dissociation and other processes, while noble gases are monatomic and can only undergo ionization; the ionized gas then produces the light. The low specific heat capacity of noble gases allows heating to higher temperatures, yielding brighter emission. Flashtubes are filled with noble gases for the same reason. Engineering Typical argon flash devices consist of an argon-filled cardboard or plastic tube with a transparent window on one end and an explosive charge on the other end. Many explosives can be used; Composition B, PETN, RDX, and plastic bonded explosives are just a few examples. The device consists of a vessel filled with argon and a solid explosive charge. The explosion generates a shock wave, which heats the gas to a very high temperature (over 10,000 K; published values vary between 15,000 K and 30,000 K, with the best values around 25,000 K). The gas becomes incandescent and emits a flash of intense visible and ultraviolet black-body radiation. For this temperature range the emission peaks between 97 and 193 nm, but usually only the visible and near-ultraviolet ranges are exploited. To achieve emission, a layer of at least one or two optical depths of the gas has to be compressed to sufficient temperature. The light intensity rises to full magnitude in about 0.1 microsecond. For about 0.5 microsecond the shock wave front instabilities are sufficient to create significant striations in the produced light; this effect diminishes as the thickness of the compressed layer increases. Only a layer of gas about 75 micrometers thick is responsible for the light emission. The shock wave reflects after reaching the window at the end of the tube; this yields a brief increase of light intensity. The intensity then fades. The amount of explosive controls the intensity of the shock wave and therefore of the flash. The intensity of the flash can be increased and its duration decreased by reflecting the shock wave from a suitable obstacle; a foil or a curved glass can be used. The duration of the flash is about as long as the explosion itself, depending on the construction of the lamp, between 0.1 and 100 microseconds. The duration depends on the length of the shockwave path through the gas, which is proportional to the length of the tube; it has been shown that each centimeter of the shock wave's path through the argon medium is equivalent to about 2 microseconds of flash duration. Uses Argon flash is a standard technique in high-speed photography, especially for photographing explosions, and is less commonly used in high-altitude test vehicles.
The photography of explosions and shock waves is made easy by the fact that the detonation of the argon flash lamp charge can be accurately timed relative to the test specimen explosion and the light intensity can overpower the light generated by the explosion itself. The formation of shock waves during explosions of shaped charges can be imaged this way. As the amount of released radiant energy is fairly high, significant heating of the illuminated object can occur. Especially in the case of high explosives, this has to be taken into account. Superradiant Light (SRL) sources are an alternative to argon flash. An electron beam source delivers a brief and intense pulse of electrons to suitable crystals (e.g. doped cadmium sulfide). Flash times in the nanosecond to picosecond range are achievable. Pulsed lasers are another alternative. See also Sonoluminescence References Argon Explosives Flash photography Photographic lighting Types of lamp
Argon flash
Chemistry
875
3,143,150
https://en.wikipedia.org/wiki/History%20of%20scientific%20method
The history of scientific method considers changes in the methodology of scientific inquiry, as distinct from the history of science itself. The development of rules for scientific reasoning has not been straightforward; scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of one or another approach to establishing scientific knowledge. Rationalist explanations of nature, including atomism, appeared both in ancient Greece in the thought of Leucippus and Democritus, and in ancient India, in the Nyaya, Vaisheshika and Buddhist schools, while Charvaka materialism rejected inference as a source of knowledge in favour of an empiricism that was always subject to doubt. Aristotle pioneered scientific method in ancient Greece alongside his empirical biology and his work on logic, rejecting a purely deductive framework in favour of generalisations made from observations of nature. Some of the most important debates in the history of scientific method center on: rationalism, especially as advocated by René Descartes; inductivism, which rose to particular prominence with Isaac Newton and his followers; and hypothetico-deductivism, which came to the fore in the early 19th century. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was central to discussions of scientific method as powerful scientific theories extended beyond the realm of the observable, while in the mid-20th century some prominent philosophers argued against any universal rules of science at all. Early methodology Ancient Egypt and Babylonia There are few explicit discussions of scientific methodologies in surviving records from early cultures. The most that can be inferred about the approaches to undertaking science in this period stems from descriptions of early investigations into nature, in the surviving records. An Egyptian medical textbook, the Edwin Smith papyrus (c. 1600 BCE), applies the components of examination, diagnosis, treatment and prognosis to the treatment of disease; these display strong parallels to the basic empirical method of science and, according to G. E. R. Lloyd, played a significant role in the development of this methodology. The Ebers papyrus (c. 1550 BCE) also contains evidence of traditional empiricism. By the middle of the 1st millennium BCE in Mesopotamia, Babylonian astronomy had evolved into the earliest example of a scientific astronomy, as it was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian Asger Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in the Islamic world, and in the West – if not indeed all subsequent endeavour in the exact sciences – depend upon Babylonian astronomy in decisive and fundamental ways." The early Babylonians and Egyptians developed much technical knowledge, crafts, and mathematics used in practical tasks of divination, as well as a knowledge of medicine, and made lists of various kinds. While the Babylonians in particular had engaged in the earliest forms of an empirical mathematical science, with their early attempts at mathematically describing natural phenomena, they generally lacked underlying rational theories of nature.
Classical antiquity Greek-speaking ancient philosophers engaged in the earliest known forms of what is today recognized as a rational theoretical science, with the move towards a more rational understanding of nature which began at least since the Archaic Period (650 – 480 BCE) with the Presocratic school. Thales was the first known philosopher to use natural explanations, proclaiming that every event had a natural cause, even though he is known for saying "all things are full of gods" and sacrificed an ox when he discovered his theorem. Leucippus went on to develop the theory of atomism – the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. This was elaborated in great detail by Democritus. Similar atomist ideas emerged independently among ancient Indian philosophers of the Nyaya, Vaisesika and Buddhist schools. In particular, like the Nyaya, Vaisesika, and Buddhist schools, the Cārvāka epistemology was materialist, and skeptical enough to admit perception as the basis for unconditionally true knowledge, while cautioning that if one could only infer a truth, then one must also harbor a doubt about that truth; an inferred truth could not be unconditional. Towards the middle of the 5th century BCE, some of the components of a scientific tradition were already heavily established, even before Plato, who was an important contributor to this emerging tradition, thanks to the development of deductive reasoning, as propounded by his student, Aristotle. In Protagoras (318d-f), Plato mentioned the teaching of arithmetic, astronomy and geometry in schools. The philosophical ideas of this time were mostly freed from the constraints of everyday phenomena and common sense. This denial of reality as we experience it reached an extreme in Parmenides who argued that the world is one and that change and subdivision do not exist. As early as the 4th century BCE, armillary spheres had been invented in China, and in the 3rd century BCE in Greece for use in astronomy; their use was promulgated thereafter, for example by Ibn al-Haytham and by Tycho Brahe, both discussed below. In the 3rd and 4th centuries BCE, the Greek physicians Herophilos (335–280 BCE) and Erasistratus of Chios employed experiments to further their medical research; Erasistratus at one time repeatedly weighed a caged bird, and noted its weight loss between feeding times. Aristotle Aristotle's inductive-deductive method used inductions from observations to infer general principles, deductions from those principles to check against further observations, and more cycles of induction and deduction to continue the advance of knowledge. The Organon (Greek for "instrument, tool, organ") is the standard collection of Aristotle's six works on logic. The name Organon was given by Aristotle's followers, the Peripatetics. The order of the works is not chronological (the chronology is now difficult to determine) but was deliberately chosen by Theophrastus to constitute a well-structured system. Indeed, parts of them seem to be a scheme of a lecture on logic. The arrangement of the works was made by Andronicus of Rhodes around 40 BCE. The Organon comprises the following six works: The Categories introduces Aristotle's 10-fold classification of that which exists: substance, quantity, quality, relation, place, time, situation, condition, action, and passion.
On Interpretation introduces Aristotle's conception of proposition and judgment, and the various relations between affirmative, negative, universal, and particular propositions. Aristotle discusses the square of opposition or square of Apuleius in Chapter 7 and its appendix Chapter 8. Chapter 9 deals with the problem of future contingents. The Prior Analytics introduces Aristotle's syllogistic method (see term logic), argues for its correctness, and discusses inductive inference. The Posterior Analytics deals with demonstration, definition, and scientific knowledge. The Topics treats of issues in constructing valid arguments, and of inference that is probable, rather than certain. It is in this treatise that Aristotle mentions the predicables, later discussed by Porphyry and by the scholastic logicians. The Sophistical Refutations gives a treatment of logical fallacies, and provides a key link to Aristotle's work on rhetoric. Aristotle's Metaphysics has some points of overlap with the works making up the Organon but is not traditionally considered part of it; additionally there are works on logic attributed, with varying degrees of plausibility, to Aristotle that were not known to the Peripatetics. Aristotle has been called the founder of modern science by De Lacy O'Leary. His demonstration method is found in Posterior Analytics. He provided another of the ingredients of scientific tradition: empiricism. For Aristotle, universal truths can be known from particular things via induction. To some extent then, Aristotle reconciles abstract thought with observation, although it would be a mistake to imply that Aristotelian science is empirical in form. Indeed, Aristotle did not accept that knowledge acquired by induction could rightly be counted as scientific knowledge. Nevertheless, induction was for him a necessary preliminary to the main business of scientific enquiry, providing the primary premises required for scientific demonstrations. Aristotle largely ignored inductive reasoning in his treatment of scientific enquiry. To make it clear why this is so, consider this statement in the Posterior Analytics: We suppose ourselves to possess unqualified scientific knowledge of a thing, as opposed to knowing it in the accidental way in which the sophist knows, when we think that we know the cause on which the fact depends, as the cause of that fact and of no other, and, further, that the fact could not be other than it is. It was therefore the work of the philosopher to demonstrate universal truths and to discover their causes. While induction was sufficient for discovering universals by generalization, it did not succeed in identifying causes. For this task Aristotle used the tool of deductive reasoning in the form of syllogisms. Using the syllogism, scientists could infer new universal truths from those already established. Aristotle developed a complete normative approach to scientific inquiry involving the syllogism, which he discusses at length in his Posterior Analytics. A difficulty with this scheme lay in showing that derived truths have solid primary premises. Aristotle would not allow that demonstrations could be circular (supporting the conclusion by the premises, and the premises by the conclusion). Nor would he allow an infinite number of middle terms between the primary premises and the conclusion.
This leads to the question of how the primary premises are found or developed, and as mentioned above, Aristotle allowed that induction would be required for this task. Towards the end of the Posterior Analytics, Aristotle discusses knowledge imparted by induction. Thus it is clear that we must get to know the primary premises by induction; for the method by which even sense-perception implants the universal is inductive. [...] it follows that there will be no scientific knowledge of the primary premises, and since except intuition nothing can be truer than scientific knowledge, it will be intuition that apprehends the primary premises. [...] If, therefore, it is the only other kind of true thinking except scientific knowing, intuition will be the originative source of scientific knowledge. The account leaves room for doubt regarding the nature and extent of Aristotle's empiricism. In particular, it seems that Aristotle considers sense-perception only as a vehicle for knowledge through intuition. He restricted his investigations in natural history to their natural settings, such as at the Pyrrha lagoon, now called Kalloni, at Lesbos. Aristotle and Theophrastus together formulated the new science of biology, inductively, case by case, for two years before Aristotle was called to tutor Alexander. Aristotle performed no modern-style experiments in the form in which they appear in today's physics and chemistry laboratories. Induction is not afforded the status of scientific reasoning, and so it is left to intuition to provide a solid foundation for Aristotle's science. With that said, Aristotle brings us somewhat closer to an empirical science than his predecessors. Epicurus In his work Κανών ('canon', a straight edge or ruler, thus any type of measure or standard, referred to as 'canonic'), Epicurus laid out his first rule for inquiry in physics: 'that the first concepts be seen, and that they not require demonstration'. His second rule for inquiry was that prior to an investigation, we are to have self-evident concepts, so that we might infer [ἔχωμεν οἷς σημειωσόμεθα] both what is expected [τò προσμένον], and also what is non-apparent [τò ἄδηλον]. Epicurus applies his method of inference (the use of observations as signs; in Asmis' summary, p. 333, "the method of using the phenomena as signs (σημεῖα) of what is unobserved") immediately to the atomic theory of Democritus. In the Prior Analytics, Aristotle himself employs the use of signs. But Epicurus presented his 'canonic' as a rival to Aristotle's logic. See: Lucretius (c. 99 BCE – c. 55 BCE), De rerum natura (On the nature of things), a didactic poem explaining Epicurus' philosophy and physics. Emergence of inductive experimental method During the Middle Ages issues of what is now termed science began to be addressed. There was greater emphasis on combining theory with practice in the Islamic world than there had been in Classical times, and it was common for those studying the sciences to be artisans as well, something that had been "considered an aberration in the ancient world." Islamic experts in the sciences were often expert instrument makers who enhanced their powers of observation and calculation with them. Starting in the early ninth century, early Muslim scientists such as al-Kindi (801–873) and the authors writing under the name of Jābir ibn Hayyān (writings dated to c. 850–950) began to put a greater emphasis on the use of experiment as a source of knowledge.
Several scientific methods thus emerged from the medieval Muslim world by the early 11th century, all of which emphasized experimentation as well as quantification to varying degrees. Ibn al-Haytham The Arab physicist Ibn al-Haytham (Alhazen) used experimentation to obtain the results in his Book of Optics (1021). He combined observations, experiments and rational arguments to support his intromission theory of vision, in which rays of light are emitted from objects rather than from the eyes. He used similar arguments to show that the ancient emission theory of vision supported by Ptolemy and Euclid (in which the eyes emit the rays of light used for seeing), and the ancient intromission theory supported by Aristotle (where objects emit physical particles to the eyes), were both wrong. Experimental evidence supported most of the propositions in his Book of Optics and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics. His legacy was elaborated through the 'reforming' of his Optics by Kamal al-Din al-Farisi (d. c. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics). Alhazen viewed his scientific studies as a search for truth: "Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things. Finding the truth is difficult, and the road to it is rough. ..." Alhazen's work included the conjecture that "Light travels through transparent bodies in straight lines only", which he was able to corroborate only after years of effort. He stated, "[This] is clearly observed in the lights which enter into dark rooms through holes. ... the entering light will be clearly observable in the dust which fills the air." He also demonstrated the conjecture by placing a straight stick or a taut thread next to the light beam. Ibn al-Haytham also employed scientific skepticism and emphasized the role of empiricism. He also explained the role of induction in syllogism, and criticized Aristotle for his lack of contribution to the method of induction, which Ibn al-Haytham regarded as superior to syllogism, and he considered induction to be the basic requirement for true scientific research. Something like Occam's razor is also present in the Book of Optics. For example, after demonstrating that light is generated by luminous objects and emitted or reflected into the eyes, he states that therefore "the extramission of [visual] rays is superfluous and useless." He may also have been the first scientist to adopt a form of positivism in his approach. He wrote that "we do not go beyond experience, and we cannot be content to use pure concepts in investigating natural phenomena", and that the understanding of these cannot be acquired without mathematics. After assuming that light is a material substance, he does not further discuss its nature but confines his investigations to the diffusion and propagation of light. The only properties of light he takes into account are those treatable by geometry and verifiable by experiment. Al-Biruni The Persian scientist Abū Rayhān al-Bīrūnī introduced early scientific methods for several different fields of inquiry during the 1020s and 1030s.
For example, in his treatise on mineralogy, Kitab al-Jawahir (Book of Precious Stones), al-Biruni is "the most exact of experimental scientists", while in the introduction to his study of India, he declares that "to execute our project, it has not been possible to follow the geometric method" and thus became one of the pioneers of comparative sociology in insisting on field experience and information. He also developed an early experimental method for mechanics. Al-Biruni's methods resembled the modern scientific method, particularly in his emphasis on repeated experimentation. Biruni was concerned with how to conceptualize and prevent both systematic errors and observational biases, such as "errors caused by the use of small instruments and errors made by human observers." He argued that if instruments produce errors because of their imperfections or idiosyncratic qualities, then multiple observations must be taken and analyzed qualitatively, and on this basis one should arrive at a "common-sense single value for the constant sought", whether an arithmetic mean or a "reliable estimate." In his scientific method, "universals came out of practical, experimental work" and "theories are formulated after discoveries", as with inductivism. Ibn Sina (Avicenna) In the On Demonstration section of The Book of Healing (1027), the Persian philosopher and scientist Avicenna (Ibn Sina) discussed philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper procedure for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist might find "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty." Avicenna added two further methods for finding a first principle: the ancient Aristotelian method of induction (istiqra), and the more recent method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he advocated "a method of experimentation as a means for scientific inquiry." Earlier, in The Canon of Medicine (1025), Avicenna was also the first to describe what are essentially the methods of agreement, difference and concomitant variation, which are critical to inductive logic and the scientific method. However, unlike his contemporary al-Biruni's scientific method, in which "universals came out of practical, experimental work" and "theories are formulated after discoveries", Avicenna developed a scientific procedure in which "general and universal questions came first and led to experimental work." Due to the differences between their methods, al-Biruni referred to himself as a mathematical scientist and to Avicenna as a philosopher, during a debate between the two scholars. Robert Grosseteste During the European Renaissance of the 12th century, ideas on scientific methodology, including Aristotle's empiricism and the experimental approaches of Alhazen and Avicenna, were introduced to medieval Europe via Latin translations of Arabic and Greek texts and commentaries.
Robert Grosseteste's commentary on the Posterior Analytics places Grosseteste among the first scholastic thinkers in Europe to understand Aristotle's vision of the dual nature of scientific reasoning: concluding from particular observations to a universal law, and then back again, from universal laws to the prediction of particulars. Grosseteste called this "resolution and composition". Further, Grosseteste said that both paths should be tested through experimentation in order to verify the principles. Roger Bacon Roger Bacon was inspired by the writings of Grosseteste. In his account of a method, Bacon described a repeating cycle of observation, hypothesis, experimentation, and the need for independent verification. He recorded the way he had conducted his experiments in precise detail, perhaps with the idea that others could reproduce and independently test his results. About 1256 he joined the Franciscan Order and became subject to the Franciscan statute forbidding Friars from publishing books or pamphlets without specific approval. After the accession of Pope Clement IV in 1265, the Pope granted Bacon a special commission to write to him on scientific matters. In eighteen months he completed three large treatises, the Opus Majus, Opus Minus, and Opus Tertium, which he sent to the Pope. William Whewell has called Opus Majus at once the Encyclopaedia and Organon of the 13th century. Part I (pp. 1–22) treats of the four causes of error: authority, custom, the opinion of the unskilled many, and the concealment of real ignorance by a pretense of knowledge. Part VI (pp. 445–477) treats of experimental science, domina omnium scientiarum. There are two methods of knowledge: the one by argument, the other by experience. Mere argument is never sufficient; it may decide a question, but gives no satisfaction or certainty to the mind, which can only be convinced by immediate inspection or intuition, which is what experience gives. Experimental science, which in the Opus Tertium (p. 46) is distinguished from the speculative sciences and the operative arts, is said to have three great prerogatives over all sciences: it verifies their conclusions by direct experiment; it discovers truths which they could never reach; it investigates the secrets of nature, and opens to us a knowledge of past and future. Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow, as a specimen of inductive research. Renaissance humanism and medicine Aristotle's ideas became a framework for critical debate beginning with absorption of the Aristotelian texts into the university curriculum in the first half of the 13th century. Contributing to this was the success of medieval theologians in reconciling Aristotelian philosophy with Christian theology. Within the sciences, medieval philosophers were not afraid of disagreeing with Aristotle on many specific issues, although their disagreements were stated within the language of Aristotelian philosophy. All medieval natural philosophers were Aristotelians, but "Aristotelianism" had become a somewhat broad and flexible concept. With the end of the Middle Ages, the Renaissance rejection of medieval traditions coupled with an extreme reverence for classical sources led to a recovery of other ancient philosophical traditions, especially the teachings of Plato. By the 17th century, those who clung dogmatically to Aristotle's teachings were faced with several competing approaches to nature.
The discovery of the Americas at the close of the 15th century showed the scholars of Europe that new discoveries could be found outside of the authoritative works of Aristotle, Pliny, Galen, and other ancient writers. Galen of Pergamon (129 – c. 200 AD) had studied with four schools in antiquity — Platonists, Aristotelians, Stoics, and Epicureans, and at Alexandria, the center of medicine at the time. In his Methodus Medendi, Galen had synthesized the empirical and dogmatic schools of medicine into his own method, which was preserved by Arab scholars. After the translations from Arabic were critically scrutinized, a backlash occurred and demand arose in Europe for translations of Galen's medical text from the original Greek. Galen's method became very popular in Europe. Thomas Linacre, the teacher of Erasmus, thereupon translated Methodus Medendi from Greek into Latin for a larger audience in 1519. Limbrick 1988 notes that 630 editions, translations, and commentaries on Galen were produced in Europe in the 16th century, eventually eclipsing Arabic medicine there, and peaking in 1560, at the time of the scientific revolution. By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at Orto botanico di Padova, University of Padua (in use for teaching by 1546), in order that medical students might have empirical access to the plants of a pharmacopoeia. Other Renaissance teaching gardens were established, notably by the physician Leonhart Fuchs, one of the founders of botany. The first printed work devoted to the concept of method is Jodocus Willichius, De methodo omnium artium et disciplinarum informanda opusculum (An Informative Essay on the Method of All Arts and Disciplines, 1550). Skepticism as a basis for understanding In 1562 Outlines of Pyrrhonism by the ancient Pyrrhonist philosopher Sextus Empiricus (c. 160-210 AD) was published in a Latin translation (from Greek), quickly placing the arguments of classical skepticism in the European mainstream. These arguments establish seemingly insurmountable challenges for the possibility of certain knowledge. The skeptic philosopher and physician Francisco Sanches was led by his medical training at Rome, 1571–73, to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example, 1) syllogism fails upon circular reasoning; 2) Aristotle's modal logic was not stated clearly enough for use in medieval times, and remains a research problem to this day. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581, in Latin Quod Nihil Scitur). This challenge was taken up by René Descartes in the next generation (1637), but at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle, if we seek scientific knowledge. In this, he is echoed by Francis Bacon who was influenced by another prominent exponent of skepticism, Montaigne; Sanches cites the humanist Juan Luis Vives who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor.
"Sanches develops his scepticism by means of an intellectual critique of Aristotelianism, rather than by an appeal to the history of human stupidity and the variety and contrariety of previous theories." —, as cited by Descartes' famous "Cogito" argument is an attempt to overcome skepticism and reestablish a foundation for certainty but other thinkers responded by revising what the search for knowledge, particularly physical knowledge, might be. Tycho Brahe See History of astronomy § Renaissance and Early Modern Europe, Kepler's laws of planetary motion, and History of optics § Renaissance and Early Modern The first modern science, in which practitioners were prepared to revise or reject long-held beliefs in the light of new evidence, was astronomy, and Tycho Brahe was the first modern astronomer. See Sextant, right. Note the explicit reduction of geometrical diagrams to practice (real objects with actual lengths and angles). In 1572, Tycho noticed a completely new star that was brighter than any star or planet. Astonished by the existence of a star that ought not to have been there and gaining the patronage of King Frederick II of Denmark, Tycho built the Uraniborg observatory at enormous cost. Over a period of fifteen years (1576–91), Tycho and upwards of thirty assistants charted the positions of stars, planets, and other celestial bodies at Uraniborg with unprecedented accuracy. In 1600, Tycho hired Johannes Kepler to assist him in analyzing and publishing his observations. Kepler later used Tycho's observations of the motion of Mars to deduce the laws of planetary motion, which were later explained in terms of Newton's law of universal gravitation. Besides Tycho's specific role in advancing astronomical knowledge, Tycho's single-minded pursuit of ever-more-accurate measurement was enormously influential in creating a modern scientific culture in which theory and evidence were understood to be inseparably linked. See Sextant, right. By 1723, standard units of measure had spread to § terrestrial mass and length. Francis Bacon's eliminative induction Francis Bacon (1561–1626) entered Trinity College, Cambridge in April 1573, where he applied himself diligently to the several sciences as then taught, and came to the conclusion that the methods employed and the results attained were alike erroneous; he learned to despise the current Aristotelian philosophy. He believed philosophy must be taught its true purpose, and for this purpose a new method must be devised. With this conception in his mind, Bacon left the university. Bacon attempted to describe a rational procedure for establishing causation between phenomena based on induction. Bacon's induction was, however, radically different than that employed by the Aristotelians. As Bacon put it, [A]nother form of induction must be devised than has hitherto been employed, and it must be used for proving and discovering not first principles (as they are called) only, but also the lesser axioms, and the middle, and indeed all. For the induction which proceeds by simple enumeration is childish. —Novum Organum section CV Bacon's method relied on experimental histories to eliminate alternative theories. Bacon explains how his method is applied in his Novum Organum (published 1620). In an example he gives on the examination of the nature of heat, Bacon creates two tables, the first of which he names "Table of Essence and Presence", enumerating the many various circumstances under which we find heat. 
In the other table, labelled "Table of Deviation, or of Absence in Proximity", he lists circumstances which bear resemblance to those of the first table except for the absence of heat. From an analysis of what he calls the natures (light emitting, heavy, colored, etc.) of the items in these lists we are brought to conclusions about the form nature, or cause, of heat. Those natures which are always present in the first table, but never in the second are deemed to be the cause of heat. The role experimentation played in this process was twofold. The most laborious job of the scientist would be to gather the facts, or 'histories', required to create the tables of presence and absence. Such histories would document a mixture of common knowledge and experimental results. Secondly, experiments of light, or, as we might say, crucial experiments would be needed to resolve any remaining ambiguities over causes. Bacon showed an uncompromising commitment to experimentation. Despite this, he did not make any great scientific discoveries during his lifetime. This may be because he was not the most able experimenter. It may also be because hypothesising plays only a small role in Bacon's method compared to modern science. Hypotheses, in Bacon's method, are supposed to emerge during the process of investigation, with the help of mathematics and logic. Bacon gave a substantial but secondary role to mathematics "which ought only to give definiteness to natural philosophy, not to generate or give it birth" (Novum Organum XCVI). An over-emphasis on axiomatic reasoning had rendered previous non-empirical philosophy impotent, in Bacon's view, which was expressed in his Novum Organum: XIX. There are and can be only two ways of searching into and discovering truth. The one flies from the senses and particulars to the most general axioms, and from these principles, the truth of which it takes for settled and immoveable, proceeds to judgment and to the discovery of middle axioms. And this way is now in fashion. The other derives axioms from the senses and particulars, rising by a gradual and unbroken ascent, so that it arrives at the most general axioms last of all. This is the true way, but as yet untried. In Bacon's utopian novel, The New Atlantis, the ultimate role is given for inductive reasoning: Lastly, we have three that raise the former discoveries by experiments into greater observations, axioms, and aphorisms. These we call interpreters of nature. Descartes In 1619, René Descartes began writing his first major treatise on proper scientific and philosophical thinking, the unfinished Rules for the Direction of the Mind. His aim was to create a complete science that he hoped would overthrow the Aristotelian system and establish himself as the sole architect of a new system of guiding principles for scientific research. This work was continued and clarified in his 1637 treatise, Discourse on Method, and in his 1641 Meditations. Descartes describes the intriguing and disciplined thought experiments he used to arrive at the idea we instantly associate with him: I think therefore I am. From this foundational thought, Descartes finds proof of the existence of a God who, possessing all possible perfections, will not deceive him provided he resolves "[...] 
never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of methodic doubt." This rule allowed Descartes to progress beyond his own thoughts and judge that there exist extended bodies outside of his own thoughts. Descartes published seven sets of objections to the Meditations from various sources along with his replies to them. Despite his apparent departure from the Aristotelian system, a number of his critics felt that Descartes had done little more than replace the primary premises of Aristotle with those of his own. Descartes says as much himself in a letter written in 1647 to the translator of Principles of Philosophy, a perfect knowledge [...] must necessarily be deduced from first causes [...] we must try to deduce from these principles knowledge of the things which depend on them, that there be nothing in the whole chain of deductions deriving from them that is not perfectly manifest. And again, some years earlier, speaking of Galileo's physics in a letter to his friend and critic Mersenne from 1638, without having considered the first causes of nature, [Galileo] has merely looked for the explanations of a few particular effects, and he has thereby built without foundations. Whereas Aristotle purported to arrive at his first principles by induction, Descartes believed he could obtain them using reason only. In this sense, he was a Platonist, as he believed in the innate ideas, as opposed to Aristotle's blank slate (tabula rasa), and stated that the seeds of science are inside us. Unlike Bacon, Descartes successfully applied his own ideas in practice. He made significant contributions to science, in particular in aberration-corrected optics. His work in analytic geometry was a necessary precedent to differential calculus and instrumental in bringing mathematical analysis to bear on scientific matters. Galileo Galilei During the period of religious conservatism brought about by the Reformation and Counter-Reformation, Galileo Galilei unveiled his new science of motion. Neither the contents of Galileo's science, nor the methods of study he selected were in keeping with Aristotelian teachings. Whereas Aristotle thought that a science should be demonstrated from first principles, Galileo had used experiments as a research tool. Galileo nevertheless presented his treatise in the form of mathematical demonstrations without reference to experimental results. It is important to understand that this in itself was a bold and innovative step in terms of scientific method. The usefulness of mathematics in obtaining scientific results was far from obvious. This is because mathematics did not lend itself to the primary pursuit of Aristotelian science: the discovery of causes. Whether it is because Galileo was realistic about the acceptability of presenting experimental results as evidence or because he himself had doubts about the epistemological status of experimental findings is not known. Nevertheless, it is not in his Latin treatise on motion that we find reference to experiments, but in his supplementary dialogues written in the Italian vernacular. In these dialogues experimental results are given, although Galileo may have found them inadequate for persuading his audience. 
Thought experiments showing logical contradictions in Aristotelian thinking, presented in the skilled rhetoric of Galileo's dialogue were further enticements for the reader. As an example, in the dramatic dialogue titled Third Day from his Two New Sciences, Galileo has the characters of the dialogue discuss an experiment involving two free falling objects of differing weight. An outline of the Aristotelian view is offered by the character Simplicio. For this experiment he expects that "a body which is ten times as heavy as another will move ten times as rapidly as the other". The character Salviati, representing Galileo's persona in the dialogue, replies by voicing his doubt that Aristotle ever attempted the experiment. Salviati then asks the two other characters of the dialogue to consider a thought experiment whereby two stones of differing weights are tied together before being released. Following Aristotle, Salviati reasons that "the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter". But this leads to a contradiction, since the two stones together make a heavier object than either stone apart, the heavier object should in fact fall with a speed greater than that of either stone. From this contradiction, Salviati concludes that Aristotle must, in fact, be wrong and the objects will fall at the same speed regardless of their weight, a conclusion that is borne out by experiment. In his 1991 survey of developments in the modern accumulation of knowledge such as this, Charles Van Doren considers that the Copernican Revolution really is the Galilean Cartesian (René Descartes) or simply the Galilean revolution on account of the courage and depth of change brought about by the work of Galileo. Isaac Newton Both Bacon and Descartes wanted to provide a firm foundation for scientific thought that avoided the deceptions of the mind and senses. Bacon envisaged that foundation as essentially empirical, whereas Descartes provides a metaphysical foundation for knowledge. If there were any doubts about the direction in which scientific method would develop, they were set to rest by the success of Isaac Newton. Implicitly rejecting Descartes' emphasis on rationalism in favor of Bacon's empirical approach, he outlines his four "rules of reasoning" in the Principia, We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore to the same natural effects we must, as far as possible, assign the same causes. The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever. In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, until such time as other phænomena occur, by which they may either be made more accurate, or liable to exceptions. But Newton also left an admonition about a theory of everything: To explain all nature is too difficult a task for any one man or even for any one age. 'Tis much better to do a little with certainty, and leave the rest for others that come after you, than to explain all things. 
Newton's work became a model that other sciences sought to emulate, and his inductive approach formed the basis for much of natural philosophy through the 18th and early 19th centuries. Some methods of reasoning were later systematized by Mill's Methods (or Mill's canon), which are five explicit statements of what can be discarded and what can be kept while building a hypothesis. George Boole and William Stanley Jevons also wrote on the principles of reasoning. Integrating deductive and inductive method Attempts to systematize a scientific method were confronted in the mid-18th century by the problem of induction, a positivist logic formulation which, in short, asserts that nothing can be known with certainty except what is actually observed. David Hume took empiricism to the skeptical extreme; among his positions was that there is no logical necessity that the future should resemble the past, thus we are unable to justify inductive reasoning itself by appealing to its past success. Hume's arguments, of course, came on the heels of many centuries of excessive speculation upon excessive speculation not grounded in empirical observation and testing. Many of Hume's radically skeptical arguments were argued against, but not resolutely refuted, by Immanuel Kant's Critique of Pure Reason in the late 18th century. Hume's arguments continued to exert a strong influence, certainly on the consciousness of the educated classes, for the better part of the 19th century, when debate came to focus on whether or not the inductive method was valid. Hans Christian Ørsted (Ørsted is the Danish spelling; Oersted in other languages) (1777–1851) was heavily influenced by Kant, in particular, Kant's Metaphysische Anfangsgründe der Naturwissenschaft (Metaphysical Foundations of Natural Science). The following sections on Ørsted encapsulate our current, common view of scientific method. His work appeared in Danish, most accessibly in public lectures, which he translated into German, French, English, and occasionally Latin. But some of his views go beyond Kant: "In order to achieve completeness in our knowledge of nature, we must start from two extremes, from experience and from the intellect itself. ... The former method must conclude with natural laws, which it has abstracted from experience, while the latter must begin with principles, and gradually, as it develops more and more, it becomes ever more detailed. Of course, I speak here about the method as manifested in the process of the human intellect itself, not as found in textbooks, where the laws of nature which have been abstracted from the consequent experiences are placed first because they are required to explain the experiences. When the empiricist in his regression towards general laws of nature meets the metaphysician in his progression, science will reach its perfection." Ørsted's "First Introduction to General Physics" (1811) exemplified the steps of observation, hypothesis, deduction and experiment. In 1805, based on his researches on electromagnetism, Ørsted came to believe that electricity is propagated by undulatory action (i.e., fluctuation). By 1820, he felt confident enough in his beliefs that he resolved to demonstrate them in a public lecture, and in fact observed a small magnetic effect from a galvanic circuit (i.e., voltaic circuit), without rehearsal. In 1831 John Herschel (1792–1871) published A Preliminary Discourse on the study of Natural Philosophy, setting out the principles of science.
Measuring and comparing observations was to be used to find generalisations in "empirical laws", which described regularities in phenomena; then natural philosophers were to work towards the higher aim of finding a universal "law of nature" which explained the causes and effects producing such regularities. An explanatory hypothesis was to be found by evaluating true causes (Newton's "verae causae") derived from experience, for example evidence of past climate change could be due to changes in the shape of continents, or to changes in Earth's orbit. Possible causes could be inferred by analogy to known causes of similar phenomena. It was essential to evaluate the importance of a hypothesis; "our next step in the verification of an induction must, therefore, consist in extending its application to cases not originally contemplated; in studiously varying the circumstances under which our causes act, with a view to ascertain whether their effect is general; and in pushing the application of our laws to extreme cases." William Whewell (1794–1866) regarded his History of the Inductive Sciences, from the Earliest to the Present Time (1837) to be an introduction to the Philosophy of the Inductive Sciences (1840), which analyzes the method exemplified in the formation of ideas. Whewell attempts to follow Bacon's plan for discovery of an effectual art of discovery. He named the hypothetico-deductive method (which Encyclopædia Britannica credits to Newton); Whewell also coined the term scientist. Whewell examines ideas and attempts to construct science by uniting ideas to facts. He analyses induction into three steps: the selection of the fundamental idea, such as space, number, cause, or likeness; a more special modification of those ideas, such as a circle, a uniform force, etc.; and the determination of magnitudes. Upon these follow special techniques applicable for quantity, such as the method of least squares, curves, and means, and special methods depending on resemblance, such as pattern matching, the method of gradation, and the method of natural classification (for example cladistics). But no art of discovery, such as Bacon anticipated, follows, for "invention, sagacity, genius" are needed at every step. Whewell's sophisticated concept of science had similarities to that shown by Herschel, and he considered that a good hypothesis should connect fields that had previously been thought unrelated, a process he called consilience. However, where Herschel held that the origin of new biological species would be found in a natural rather than a miraculous process, Whewell opposed this and considered that no natural cause had been shown for adaptation, so an unknown divine cause was appropriate. John Stuart Mill (1806–1873) was stimulated to publish A System of Logic (1843) upon reading Whewell's History of the Inductive Sciences. Mill may be regarded as the final exponent of the empirical school of philosophy begun by John Locke, whose fundamental characteristic is the duty incumbent upon all thinkers to investigate for themselves rather than to accept the authority of others. Knowledge must be based on experience. In the mid-19th century Claude Bernard was also influential, especially in bringing the scientific method to medicine. In his discourse on scientific method, An Introduction to the Study of Experimental Medicine (1865), he described what makes a scientific theory good and what makes a scientist a true discoverer.
Unlike many scientific writers of his time, Bernard wrote about his own experiments and thoughts, and used the first person. William Stanley Jevons' The Principles of Science: a treatise on logic and scientific method (1873, 1877), Chapter XII "The Inductive or Inverse Method", Summary of the Theory of Inductive Inference, states "Thus there are but three steps in the process of induction: Framing some hypothesis as to the character of the general law. Deducing some consequences of that law. Observing whether the consequences agree with the particular facts under consideration." Jevons then frames those steps in terms of probability, which he applied to economic laws. Ernest Nagel notes that Jevons and Whewell were not the first writers to argue for the centrality of the hypothetico-deductive method in the logic of science. Charles Sanders Peirce In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the further development of scientific method generally. Peirce's work quickly accelerated the progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both Deduction and Induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume a century before). Secondly, and of more direct importance to scientific method, Peirce put forth the basic schema for hypothesis-testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that play a role in scientific inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself – indeed this was his primary specialty. Charles S. Peirce was also a pioneer in statistics. Peirce held that science achieves statistical probabilities, not certainties, and that chance, a veering from law, is very real. He assigned probability to an argument's conclusion rather than to a proposition, event, etc., as such. Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability. Peirce (sometimes with Jastrow) investigated the probability judgments of experimental subjects, pioneering decision analysis. Peirce was one of the founders of statistics. He formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, he introduced blinded, controlled randomized experiments (before Fisher). He invented an optimal design for experiments on gravity, in which he "corrected the means". He used logistic regression, correlation, and smoothing, and improved the treatment of outliers.
He introduced the terms "confidence" and "likelihood" (before Neyman and Fisher). (See the historical books of Stephen Stigler.) Many of Peirce's ideas were later popularized and developed by Ronald A. Fisher, Jerzy Neyman, Frank P. Ramsey, Bruno de Finetti, and Karl Popper. Modern perspectives Karl Popper (1902–1994) is generally credited with providing major improvements in the understanding of the scientific method in the mid-to-late 20th century. In 1934 Popper published The Logic of Scientific Discovery, which repudiated the by then traditional observationalist-inductivist account of the scientific method. He advocated empirical falsifiability as the criterion for distinguishing scientific work from non-science. According to Popper, scientific theory should make predictions (preferably predictions not made by a competing theory) which can be tested and the theory rejected if these predictions are shown not to be correct. Following Peirce and others, he argued that science would best progress using deductive reasoning as its primary emphasis, known as critical rationalism. His astute formulations of logical procedure helped to rein in the excessive use of inductive speculation upon inductive speculation, and also helped to strengthen the conceptual foundations for today's peer review procedures. Ludwik Fleck, a Polish epidemiologist who was a contemporary of Karl Popper, influenced Kuhn and others with his Genesis and Development of a Scientific Fact (in German 1935, English 1979). Before Fleck, scientific fact was thought to spring fully formed (in the view of Max Jammer, for example), whereas a gestation period is now recognized to be essential before acceptance of a phenomenon as fact. Critics of Popper, chiefly Thomas Kuhn, Paul Feyerabend and Imre Lakatos, rejected the idea that there exists a single method that applies to all science and could account for its progress. In 1962 Kuhn published the influential book The Structure of Scientific Revolutions, which suggested that scientists worked within a series of paradigms, and argued there was little evidence of scientists actually following a falsificationist methodology. Kuhn quoted Max Planck who had said in his autobiography, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." A much-quoted source on the subject of the scientific method and statistical models, George E. P. Box (1919–2013), wrote "Since all models are wrong the scientist cannot obtain a correct one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist, so over-elaboration and over-parameterization is often the mark of mediocrity" and "Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad." These debates clearly show that there is no universal agreement as to what constitutes the "scientific method". There remain, nonetheless, certain core principles that are the foundation of scientific inquiry today. Mention of the topic In Quod Nihil Scitur (1581), Francisco Sanches refers to another book title, De modo sciendi (on the method of knowing). This work appeared in Spanish as Método universal de las ciencias.
In 1833 Robert and William Chambers published their 'Chambers's information for the people'. Under the rubric 'Logic' we find a description of investigation that is familiar as scientific method, Investigation, or the art of inquiring into the nature of causes and their operation, is a leading characteristic of reason [...] Investigation implies three things – Observation, Hypothesis, and Experiment [...] The first step in the process, it will be perceived, is to observe... In 1885, the words "Scientific method" appear together with a description of the method in Francis Ellingwood Abbot's 'Scientific Theism', Now all the established truths which are formulated in the multifarious propositions of science have been won by the use of Scientific Method. This method consists in essentially three distinct steps: (1) observation and experiment, (2) hypothesis, (3) verification by fresh observation and experiment. The Eleventh Edition of Encyclopædia Britannica did not include an article on scientific method; the Thirteenth Edition listed scientific management, but not method. By the Fifteenth Edition, a 1-inch article in the Micropædia of Britannica was part of the 1975 printing, while a fuller treatment (extending across multiple articles, and accessible mostly via the index volumes of Britannica) was available in later printings. Current issues In the past few centuries, some statistical methods have been developed, for reasoning in the face of uncertainty, as an outgrowth of methods for eliminating error. This was an echo of the program of Francis Bacon's Novum Organum of 1620. Bayesian inference acknowledges one's ability to alter one's beliefs in the face of evidence. This has been called belief revision, or defeasible reasoning: the models in play during the phases of scientific method can be reviewed, revisited and revised, in the light of further evidence. This arose from the work of Frank P. Ramsey (1903–1930), of John Maynard Keynes (1883–1946), and earlier, of William Stanley Jevons (1835–1882) in economics. Science and pseudoscience The question of how science operates and therefore how to distinguish genuine science from pseudoscience has importance well beyond scientific circles or the academic community. In the judicial system and in public policy controversies, for example, a study's deviation from accepted scientific practice is grounds for rejecting it as junk science or pseudoscience. However, the high public regard for science means that pseudoscience is widespread. An advertisement in which an actor wears a white coat and product ingredients are given Greek or Latin sounding names is intended to give the impression of scientific endorsement. Richard Feynman has likened pseudoscience to cargo cults in which many of the external forms are followed, but the underlying basis is missing: that is, fringe or alternative theories often present themselves with a pseudoscientific appearance to gain acceptance. See also Timeline of the history of the scientific method Notes and references Sources Critical edition of Sanches' Quod Nihil Scitur: Latin (1581, 1618, 1649, 1665), Portuguese (1948, 1955, 1957), Spanish (1944, 1972), French (1976, 1984), German (2007). English translation: On Discipline. Part 1: De causis corruptarum artium, Part 2: De tradendis disciplinis, Part 3: De artibus. Scientific Method
History of scientific method
Technology
11,969
405,305
https://en.wikipedia.org/wiki/Training%20stamp
A training stamp is a label resembling a postage stamp that is used by postal authorities to train postal workers. They generally have the same size and shape as regular stamps, but with a minimal design. Alternatively, several countries have simply obliterated their regular stamps in order to make the training process more realistic, for instance Sudan and the United Kingdom. In some cases, training stamps may be interchangeable with test stamps though test stamps do not need to have a range of values to assist with training postal workers. Although training stamps are not normally available to the general public, some have found their way into private hands, and they are a recognised stamp collecting speciality. Training stamps are a form of cinderella stamp. France Training stamps have been widely used in France and one series consists of a number of plain labels of minimal design with different numbers and the words sans valeur (without value). Sudan A number of Sudanese stamps have been overprinted "school" for use at the post office training school. United Kingdom Stamps used for training postal workers in the United Kingdom are usually normal postage or other stamps, including television license and national insurance stamps (when they were in use), obliterated with two vertical or horizontal bars to prevent genuine use, though other forms of cancellation have been used such as overprinting or rubber stamps. They have frequently found their way into the hands of collectors. Early examples were properly printed with bars but more recent examples tend to simply be crossed through with a black marker pen. A range of cancelled or voided paper money, cheques, postal orders, credit cards and horizon labels are also used to train workers which takes place at counter training schools (CTOs). Before decimalisation in 1971, post offices were issued with very simple training stamps in the same colours as the upcoming decimal stamps. Gallery of British training stamps United States Around the early twentieth century, some U.S. business colleges used specially pre-cancelled stamps or stamp-like labels to train students in the handling of stamps. See also Test stamp Dummy stamp Specimen stamp Printer's sample stamp References Further reading Oliver, T & A. The History of Post Office Training. The Post Office Training Schools: A Handbook & Reference Listing. Sarum Publications, 1996. Postal systems Philatelic terminology
Training stamp
Technology
460
16,087,261
https://en.wikipedia.org/wiki/Hydrological%20code
A hydrological code or hydrologic unit code is a sequence of numbers or letters (a geocode) that identifies a hydrological unit or feature, such as a river, river reach, lake, or area like a drainage basin (also called watershed in North America) or catchment. One system, developed by Arthur Newell Strahler, known as the Strahler stream order, ranks streams based on a hierarchy of tributaries. Each segment of a stream or river within a river network is treated as a node in a tree, with the next segment downstream as its parent. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream, and so on. Another example is the system of assigning IDs to watersheds devised by Otto Pfafstetter, known as the Pfafstetter Coding System or the Pfafstetter System. Drainage areas are delineated in a hierarchical fashion, with "level 1" watersheds at continental scales, subdivided into smaller level 2 watersheds, which are divided into level 3 watersheds, and so on. Each watershed is assigned a unique number, called a Pfafstetter Code, based on its location within the overall drainage system. Europe A comprehensive coding system is in use in Europe. This system codes from the ocean down to the primary catchment. The system determines a set of oceans or endorheic systems identified by a letter. These systems are subdivided into a maximum of 9 seas. The seas are numbered 1 to 9. Seas lying far from the ocean, for example the Black Sea, receive a higher number. The seas are delimited using the definitions made by the International Hydrographic Organization in 1953. The coasts of these seas are defined clockwise from northwest to southeast, starting from the strait where the sea connects to the ocean or the other seas. Subsequently, every watershed along this coast is assigned a number using the Pfafstetter Coding System. This implies that the four largest watersheds are selected and receive the numbers 2, 4, 6, or 8. The watersheds in between the large systems receive the numbers 3, 5, and 7. The numbers 1 and 9 are used for the small watersheds on the edges of the strait. The smaller systems can subsequently be numbered recursively or kept together for grouping purposes. Landmasses (continents and islands) are also numbered in a logical manner, clockwise along the sea. For Europe, which contains many inner seas, this feature helps to read the relative location of a hydrological object within its sea. United States See also Hydrologic Unit Modeling for the United States Water Resource Region References External links Hydrology Limnology Water and the environment Geocodes
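The Strahler rule described above lends itself to a short recursion over the tributary tree: a headwater segment has order 1, a segment keeps the highest order among its tributaries, and the order increases by one only when two or more tributaries tie for that highest order. The following Python sketch is purely illustrative (the example network and the function name are invented, not taken from any hydrological coding standard):

```python
def strahler_order(tributaries, segment):
    """Compute the Strahler order of `segment` in a stream network.

    `tributaries` maps each segment ID to the IDs of the segments that
    flow directly into it; segments absent from the map are headwaters.
    """
    upstream = tributaries.get(segment, [])
    if not upstream:                      # headwater segment
        return 1
    orders = [strahler_order(tributaries, s) for s in upstream]
    top = max(orders)
    # Two (or more) tributaries of the top order merge into order top + 1;
    # otherwise the segment keeps the highest tributary order.
    return top + 1 if orders.count(top) >= 2 else top

# Hypothetical network: four first-order headwaters feeding two branches.
network = {
    "outlet": ["left branch", "right branch"],
    "left branch": ["headwater 1", "headwater 2"],   # order 2
    "right branch": ["headwater 3", "headwater 4"],  # order 2
}
print(strahler_order(network, "outlet"))  # 3
```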
Hydrological code
Chemistry,Engineering,Environmental_science
559
2,014,458
https://en.wikipedia.org/wiki/Mizar
Mizar is a second-magnitude star in the handle of the Big Dipper asterism in the constellation of Ursa Major. It has the Bayer designation ζ Ursae Majoris (Latinised as Zeta Ursae Majoris). It forms a well-known naked eye double star with the fainter star Alcor, and is itself a quadruple star system. The Mizar and Alcor system lies about 83 light-years away from the Sun, as measured by the Hipparcos astrometry satellite, and is part of the Ursa Major Moving Group. Nomenclature ζ Ursae Majoris (Latinised to Zeta Ursae Majoris and abbreviated to ζ UMa or Zeta UMa) is Mizar's Bayer designation. It also has the Flamsteed designation 79 Ursae Majoris. The traditional name Mizar derives from the Arabic meaning 'apron; wrapper, covering, cover'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Mizar for ζ UMa. According to IAU rules, the name Mizar strictly only applies to component Aa, although it is traditionally and popularly used for all four stars making up the single naked-eye star. Stellar system Mizar is a visual double with a separation of 14.4 arcseconds, each of which is a spectroscopic binary. Its combined apparent magnitude is 2.04. The two visible stars are referred to as ζ1 and ζ2 Ursae Majoris, or Mizar A and B. The spectroscopic components are generally referred to as Mizar Aa, Ab, Ba, and Bb. The stars all share a single Hipparcos designation of HIP 65378, but separate Bright Star Catalogue and Henry Draper Catalogue entries. Mizar, together with Alcor and many of the other bright stars in Ursa Major, is a member of the Ursa Major Moving Group. Mizar may have been the first telescopic binary known to Europeans; Benedetto Castelli in 1617 asked Galileo Galilei to observe it. Galileo then produced a detailed record of the double star. Later, around 1650, Riccioli wrote of Mizar appearing as a double. The secondary star (Mizar B) comes within 380 AU of the primary (Mizar A) and the two take thousands of years to revolve around each other. Mizar A was the first spectroscopic binary to be discovered, as part of Antonia Maury's spectral classification work, and an orbit was published in 1890. Some spectroscopic binaries cannot be visually resolved and are discovered by studying the spectral lines of the suspect system over a long period of time. The two components of Mizar A are both about 35 times as bright as the Sun, and revolve around each other in about 20 days 12 hours and 55 minutes. In 1908, Mizar B was also found to be a spectroscopic binary, its components completing an orbital period every six months. In 1996, 107 years after their discovery, the components of the Mizar A binary system were imaged in extremely high resolution using the Navy Prototype Optical Interferometer. ζ1 Ursae Majoris The two components of ζ1 Ursae Majoris (Mizar Aa and Ab) are observed to be identical, with the exception of slightly different radial velocity variations which indicate very slightly different masses. The spectral lines of the two stars can be observed separately and both are given a spectral type of A2Vp. They are Ap stars, chemically peculiar due to stratification of some heavy elements in the photosphere of slowly-rotating hot stars. In this case, they show elevated abundances of strontium and silicon. 
With the assumption of identical physical properties for the two stars, they both have temperatures of 9,000 K, radii of , and bolometric luminosities of . They are thought to be around 370 million years old. ζ2 Ursae Majoris ζ2 Ursae Majoris is a single-lined spectroscopic binary, and the visible spectrum is of an Am star, named for their unusually strong lines of some metals. The spectral type of kA1h(eA)mA7IV-V is in a form used for metallic-lined stars: the type is A1 based on the calcium K lines, early A based on the hydrogen lines, and A7 based on lines of other metals. The luminosity class is ranked between main sequence and subgiant. Based on the orbital properties of the system, the total mass of the two stars is approximately 2.1 solar masses, most of which is contributed by the primary star. Other names Mizar is known as Vashistha, one of the Saptarishi, in traditional Indian astronomy. Chinese Taoism personifies ζ Ursae Majoris as the Lu star. In Chinese, (), meaning Northern Dipper, refers to an asterism equivalent to the Big Dipper. Consequently, the Chinese name for ζ Ursae Majoris itself is Běi Dǒu liù, () and Kāi Yáng, (). In the Mi'kmaq myth of the great bear and the seven hunters, Mizar is Chickadee and Alcor is his cooking pot. Military namesakes USS Mizar is a cargo and passenger liner converted to a United States Navy ship USNS Mizar, a United States Navy ship In popular culture The band Steely Dan references Mizar in their song "Sign In Stranger" from their album The Royal Scam. Mizar is the home system of a race of friendly, spherical aliens contacted by the Earth ship Stardust in the 1971 science fiction short story "The Bear With the Knot on His Tail" by Stephen Tall. References External links Mizar at Jim Kaler's Stars website A New View Of Mizar (a comprehensive article about the system) A-type main-sequence stars Am stars Ap stars Spectroscopic binaries A-type subgiants 4 Ursa Major moving group Ursae Majoris, Zeta Big Dipper Ursa Major BD+55 1598 Ursae Majoris, 79 116656 7 065378 5054 5
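As a rough plausibility check on the statement above that Mizar A and B take thousands of years to revolve around each other, Kepler's third law can be applied to the quoted 380 AU separation. The combined mass used below (about 6 solar masses for the four stars) is an assumption for illustration only, not a figure given in the article:

```python
import math

def orbital_period_years(semimajor_axis_au, total_mass_solar):
    """Kepler's third law in solar units: P^2 = a^3 / M,
    with P in years, a in AU, and M in solar masses."""
    return math.sqrt(semimajor_axis_au ** 3 / total_mass_solar)

# Treating the quoted 380 AU separation as a rough semi-major axis and
# assuming ~6 solar masses in total (an assumption, not an article value):
print(round(orbital_period_years(380, 6.0)))  # on the order of 3000 years
```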
Mizar
Astronomy
1,319
329,549
https://en.wikipedia.org/wiki/Surface%20of%20revolution
A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints). The volume bounded by the surface created by this revolution is the solid of revolution. Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle, and if the circle is rotated around an axis that does not intersect the interior of the circle, then it generates a torus which does not intersect itself (a ring torus). Properties The sections of the surface of revolution made by planes through the axis are called meridional sections. Any meridional section can be considered to be the generatrix in the plane determined by it and the axis. The sections of the surface of revolution made by planes that are perpendicular to the axis are circles. Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular. Area formula If the curve is described by the parametric functions $x(t)$, $y(t)$, with $t$ ranging over some interval $[a,b]$, and the axis of revolution is the x-axis, then the surface area $A_x$ is given by the integral $A_x = 2\pi \int_a^b y(t)\,\sqrt{\left(\tfrac{dx}{dt}\right)^2 + \left(\tfrac{dy}{dt}\right)^2}\,dt$, provided that $y(t)$ is never negative between the endpoints $a$ and $b$. This formula is the calculus equivalent of Pappus's centroid theorem. The quantity $\sqrt{\left(\tfrac{dx}{dt}\right)^2 + \left(\tfrac{dy}{dt}\right)^2}\,dt$ comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity $2\pi y(t)$ is the path of (the centroid of) this small segment, as required by Pappus' theorem. Likewise, when the axis of rotation is the y-axis and provided that $x(t)$ is never negative, the area is given by $A_y = 2\pi \int_a^b x(t)\,\sqrt{\left(\tfrac{dx}{dt}\right)^2 + \left(\tfrac{dy}{dt}\right)^2}\,dt$. If the continuous curve is described by the function $y = f(x)$, $a \le x \le b$, then the integral becomes $A_x = 2\pi \int_a^b f(x)\,\sqrt{1 + \left(f'(x)\right)^2}\,dx$ for revolution around the x-axis, and $A_y = 2\pi \int_a^b x\,\sqrt{1 + \left(f'(x)\right)^2}\,dx$ for revolution around the y-axis (provided $a \ge 0$). These come from the above formula. This can also be derived from multivariable integration. If a plane curve is given by $\langle x(t), y(t)\rangle$ then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by $\mathbf{r}(t,\theta) = \langle x(t), y(t)\cos\theta, y(t)\sin\theta\rangle$ with $0 \le \theta \le 2\pi$. Then the surface area is given by the surface integral $A_x = \iint_S \mathrm{d}S = \int_a^b \int_0^{2\pi} \left\lVert \tfrac{\partial \mathbf{r}}{\partial t} \times \tfrac{\partial \mathbf{r}}{\partial \theta} \right\rVert \,\mathrm{d}\theta\,\mathrm{d}t$. Computing the partial derivatives yields $\tfrac{\partial \mathbf{r}}{\partial t} = \langle x'(t), y'(t)\cos\theta, y'(t)\sin\theta\rangle$ and $\tfrac{\partial \mathbf{r}}{\partial \theta} = \langle 0, -y(t)\sin\theta, y(t)\cos\theta\rangle$, and computing the cross product yields $\tfrac{\partial \mathbf{r}}{\partial t} \times \tfrac{\partial \mathbf{r}}{\partial \theta} = \langle y(t)y'(t), -y(t)x'(t)\cos\theta, -y(t)x'(t)\sin\theta\rangle$, where the trigonometric identity $\cos^2\theta + \sin^2\theta = 1$ was used. With this cross product, we get $A_x = \int_a^b \int_0^{2\pi} y(t)\,\sqrt{x'(t)^2 + y'(t)^2}\,\mathrm{d}\theta\,\mathrm{d}t = 2\pi \int_a^b y(t)\,\sqrt{x'(t)^2 + y'(t)^2}\,\mathrm{d}t$, where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar. For example, the spherical surface with unit radius is generated by the curve $x(t) = \cos t$, $y(t) = \sin t$, when $t$ ranges over $[0, \pi]$. Its area is therefore $A = 2\pi \int_0^{\pi} \sin t\,\sqrt{\cos^2 t + \sin^2 t}\,\mathrm{d}t = 2\pi \int_0^{\pi} \sin t\,\mathrm{d}t = 4\pi$. For the case of the spherical curve with radius $r$, $y(x) = \sqrt{r^2 - x^2}$ rotated about the x-axis, $A = 2\pi \int_{-r}^{r} \sqrt{r^2 - x^2}\,\sqrt{1 + \tfrac{x^2}{r^2 - x^2}}\,\mathrm{d}x = 2\pi \int_{-r}^{r} r\,\mathrm{d}x = 4\pi r^2$. A minimal surface of revolution is the surface of revolution of the curve between two given points which minimizes surface area. A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution. There are only two minimal surfaces of revolution (surfaces of revolution which are also minimal surfaces): the plane and the catenoid. Coordinate expressions A surface of revolution given by rotating a curve described by $y = f(x)$ around the x-axis may be most simply described by $y^2 + z^2 = f(x)^2$. This yields the parametrization in terms of $x$ and $\theta$ as $(x, f(x)\cos\theta, f(x)\sin\theta)$.
If instead we revolve the curve around the y-axis, then the curve is described by $y = f(x)$, yielding the expression $(x\cos\theta, f(x), x\sin\theta)$ in terms of the parameters $x$ and $\theta$. If $x$ and $y$ are defined in terms of a parameter $t$, then we obtain a parametrization in terms of $t$ and $\theta$. If $x(t)$ and $y(t)$ are functions of $t$, then the surface of revolution obtained by revolving the curve around the x-axis is described by $(x(t), y(t)\cos\theta, y(t)\sin\theta)$, and the surface of revolution obtained by revolving the curve around the y-axis is described by $(x(t)\cos\theta, y(t), x(t)\sin\theta)$. Geodesics Meridians are always geodesics on a surface of revolution. Other geodesics are governed by Clairaut's relation. Toroids A surface of revolution with a hole in it, where the axis of revolution does not intersect the surface, is called a toroid. For example, when a rectangle is rotated around an axis parallel to one of its edges, then a hollow square-section ring is produced. If the revolved figure is a circle, then the object is called a torus. See also Channel surface, a generalisation of a surface of revolution Gabriel's Horn Generalized helicoid Lemon (geometry), surface of revolution of a circular arc Liouville surface, another generalization of a surface of revolution Spheroid Surface integral Translation surface (differential geometry) References External links Integral calculus Surfaces of revolution
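The parametric area formula given above can be checked numerically. The short Python sketch below (an illustration only; the function name is invented) integrates 2π·y(t)·sqrt(x′(t)² + y′(t)²) with the midpoint rule for the half-circle generatrix of the unit sphere and recovers 4π:

```python
import math

def area_of_revolution(x, y, dx, dy, a, b, n=100_000):
    """Approximate A = 2*pi * integral_a^b y(t) * sqrt(x'(t)**2 + y'(t)**2) dt
    (revolution about the x-axis) using the midpoint rule with n panels."""
    h = (b - a) / n
    total = sum(
        y(a + (i + 0.5) * h) * math.hypot(dx(a + (i + 0.5) * h),
                                          dy(a + (i + 0.5) * h))
        for i in range(n)
    )
    return 2 * math.pi * total * h

# Generatrix of the unit sphere: x(t) = cos t, y(t) = sin t, t in [0, pi].
area = area_of_revolution(math.cos, math.sin,
                          lambda t: -math.sin(t), math.cos,
                          0.0, math.pi)
print(area, 4 * math.pi)  # both approximately 12.566
```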
Surface of revolution
Mathematics
962
59,067,089
https://en.wikipedia.org/wiki/Font%20Bomb
Font Bomb is a JavaScript bookmarklet to "Blow Up" web pages. When the script is loaded, clicking on a web page starts a countdown. When the countdown reaches zero, it uses Cascading Style Sheets to scatter nearby text across the page. The script wraps all affected letters in a tag, so that they can be moved individually. References External links Main site JavaScript Web development
Font Bomb
Engineering
83
63,048,872
https://en.wikipedia.org/wiki/Invasion%20genetics
Invasion genetics is the area of study within biology that examines evolutionary processes in the context of biological invasions. Invasion genetics considers how genetic and demographic factors affect the success of a species introduced outside of its native range, and how the mechanisms of evolution, such as natural selection, mutation, and genetic drift, operate in these populations. Researchers exploring these questions draw upon theory and approaches from a range of biological disciplines, including population genetics, evolutionary ecology, population biology, and phylogeography. Invasion genetics, due to its focus on the biology of introduced species, is useful for identifying potential invasive species and developing practices for managing biological invasions. It is distinguished from the broader study of invasive species because it is less directly concerned with the impacts of biological invasions, such as environmental or economic harm. In addition to applications for invasive species management, insights gained from invasion genetics also contribute to a broader understanding of evolutionary processes such as genetic drift and adaptive evolution. History Descriptions of invasive species Charles Elton formed the basis for examining biological invasions as a unified issue in his 1958 monograph, The Ecology of Invasions by Animals and Plants, drawing together case studies of species introductions. Other important events in the study of invasive species include a series of issues published by the Scientific Committee on Problems of the Environment in the 1980s and the founding of the journal Biological Invasions in 1999. Much of the research motivated by Elton's monograph is generally identified with invasion ecology, and focuses on the ecological causes and impacts of biological invasions. The Genetics of Colonizing Species The evolutionary modern synthesis in the early 20th century brought together Charles Darwin's theory of evolution by natural selection and classical genetics through the development of population genetics, which provided the conceptual basis for studying how evolutionary processes shape variation in populations. This development was crucial to the emergence of invasion genetics, which is concerned with the evolution of populations of introduced species. The beginning of invasion genetics as a distinct study has been identified with a symposium held at Asilomar in 1964 which included a number of major contributors to the modern synthesis, including Theodosius Dobzhansky, Ernst Mayr, and G. Ledyard Stebbins, as well as scientists with experience working in areas of weed and pest control. Stebbins, working with another botanist, Herbert G. Baker, collected a series of articles which emerged from the Asilomar symposium and published a volume titled The Genetics of Colonizing Species in 1965. This volume introduced many of the questions which continue to motivate research in invasion genetics today, including questions about the characteristics of successful invaders, the importance of a species' mating system in colonization success, the relative importance of genetic variation and phenotypic plasticity in adaptation to new environments, and the effect of population bottlenecks on genetic variation. Terminology of invasion genetics Since its publication in 1965, The Genetics of Colonizing Species helped to motivate research which would provide a theoretical and empirical foundation for invasion genetics. 
However, the term invasion genetics only first appeared in the literature in 1998, and the first published definition appeared in 2005. The success of introduced species is quite variable, consequently researchers have sought to develop terminology which allows distinguishing different levels of success. These approaches rely on describing invasion as a biological process. Process of biological invasion Background Researchers have proposed a number of different methods for describing biological invasions. In 1992, the ecologists Mark Williamson and Alastair Fitter divided the process of biological invasion into three stages: escaping, establishing, and becoming a pest. Since then, there has been an expanding effort to develop a framework for categorizing biological invasions in terms that are neutral with respect to a species' environmental and economic impacts. This approach has allowed biologists to focus on the processes which facilitate or inhibit the spread of introduced species. David M. Richardson and colleagues describe how introduced species must pass a series of barriers prior to becoming naturalized or invasive in a new range. Alternatively, the stages of an invasion may be separated by filters, as described by Robert I. Colautti and Hugh MacIsaac, so that invasion success would depend on the rate of introduction (propagule pressure) as well as the traits possessed by the organism. Description The most recent systematic effort to describe the steps of a biological invasion was made by Tim Blackburn and colleagues in 2011, which combined the concepts of barriers and stages. According to this framework, there are four stages of an invasion: transport, introduction, establishment, and spread. Each of these stages is accompanied by one or more barriers. Application of invasion genetics to different stages of invasion Invasion genetics can be used to understand the processes involved at each stage of a biological invasion. Many of the foundational questions of invasion genetics focused on processes involved during establishment and spread. As early as 1955, Herbert G. Baker proposed that self-fertilization would be a favourable trait for colonizing species because successful establishment would not require the simultaneous introduction of two individuals of opposite sexes. Baker subsequently elaborated a series of "ideal weed characteristics" in an article in The Genetics of Colonizing Species, which included traits such as the ability to tolerate environmental variation, dispersal ability, and the ability to tolerate generalist herbivores and pathogens. While some of the traits, such as ease of germination, may aid a species in transport or introduction, most of the traits Baker identified were primarily conducive to establishment and spread. Advances in the study of molecular evolution may help biologists to understand better the processes of transport and introduction. Genomicist Melania Cristescu and her colleagues examined mitochondrial DNA of the fishhook waterflea introduced into the Great Lakes, tracing the source of the invasive populations to the Baltic Sea. More recently, Cristescu has argued for expanding the use of phylogenetic and phylogenomic approaches, as well as applying metabarcoding and population genomics, to understand how species are introduced and identify "failed invasions" where introduction does not lead to establishment. 
Factors influencing invasion success Propagule pressure Propagule pressure describes the number of individuals introduced into an area in which they are not native, and can strongly affect the ability of species to reach a later stage of invasion. Factors which may influence the rate of transport and introduction into a novel environment include the species' abundance in its native range, as well as its tendency to co-occur with or be deliberately moved by humans. The likelihood of reaching establishment is also highly dependent on the number of individuals introduced. Small populations can be limited by Allee effects, as individuals may have difficulty finding suitable mates and populations are vulnerable to demographic stochasticity. Small populations may also suffer from inbreeding depression. Species that are introduced in larger numbers are more likely to establish in different environments, and high propagule pressure will introduce more genetic diversity into a population. These factors can help a species adapt to different environmental conditions during establishment as well as during subsequent spread in a new range. Traits of successful invaders Herbert G. Baker's list of 14 "ideal weed characteristics", published in the 1965 volume The Genetics of Colonizing Species, has been the basis for investigation into characteristics which could contribute to invasion success of plants. Since Baker first proposed this list, researchers have debated whether or not particular traits could be linked to the "invasiveness" of a species. Mark van Kleunen, in revisiting the question, proposed examining the traits of candidate invaders in the context of the process of biological invasion. According to this approach, particular traits might be useful for introduced species because they would allow them to pass through a filter associated with a particular stage of an invasion. Genetic variation A population of introduced species exhibiting higher genetic variation could be more successful during establishment and spread, due to the higher likelihood of possessing a suitable genotype for the novel environment. However, populations of a species in an introduced range are likely to exhibit lower genetic variation compared to populations in the native range due to population bottlenecks and founder effects experienced during introduction. A classic study on population bottlenecks, conducted by Masatoshi Nei, described a genetic signature of bottlenecks on introduced populations of Drosophila pseudoobscura in Colombia. The ecological success of many invaders despite these apparent genetic limitations suggests a "genetic paradox of invasion", for which a number of answers have been proposed. One of the possible resolutions for the genetic paradox of invasion is that most bottlenecks experienced by introduced species are typically not severe enough to have a strong effect on genetic variation. As well, a species may be introduced multiple times from multiple sources, resulting in genetic admixture which could compensate for lost genetic variation. The evolutionary ecologist Katrina Dlugosch has noted that the relationship between genetic variation and capacity for adaptation is nonlinear and may depend on factors such as the effect size of adaptive loci (in quantitative genetics, effect size refers to the magnitude of change in a phenotypic trait value associated with a particular locus) and the presence of cryptic variation. 
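The effect of founder population size on retained variation, discussed above, can be made concrete with the standard Wright–Fisher expectation for the decay of heterozygosity under drift, H_t = H_0·(1 − 1/(2N_e))^t. The numbers below are hypothetical and serve only to illustrate the founder-effect argument; they are not data from any study cited in the article:

```python
def expected_heterozygosity(h0, n_e, generations):
    """Expected heterozygosity after `generations` of genetic drift in a
    population of effective size `n_e`, starting from heterozygosity `h0`:
    H_t = H_0 * (1 - 1/(2*N_e)) ** t."""
    return h0 * (1.0 - 1.0 / (2.0 * n_e)) ** generations

# Variation retained 20 generations after founding events of different sizes.
for founders in (5, 50, 500):
    h = expected_heterozygosity(h0=0.5, n_e=founders, generations=20)
    print(f"{founders:4d} founders -> H after 20 generations = {h:.3f}")
# A handful of founders erodes heterozygosity quickly (about 0.06 here),
# while larger introductions retain most of it (about 0.41 and 0.49).
```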
Phenotypic plasticity Phenotypic plasticity is the expression of different traits (or phenotypes), such as morphology or behaviour, in response to different environments. Plasticity allows organisms to cope with environmental variation without necessitating genetic evolution. Herbert G. Baker proposed that the possession of "general purpose" genotypes which were tolerant of a range of environments could be advantageous for species introduced into new areas. General purpose genotypes could help introduced species encountering environmental variation during establishment and spread, in part because introduced species should have less genetic variation than native species. However, it remains disputed whether or not invasive species exhibit higher plasticity than native and non-invasive species. Evolution during biological invasions Genetic consequences of range expansion Range expansion is the process by which an organism spreads and establishes new populations across a geographic scale, so it is part of a biological invasion. During a range expansion, there exists an expanding wave front, where rapidly-growing populations are established by a relatively small number of individuals. Under these demographic conditions, the phenomenon of gene surfing can lead to the accumulation of deleterious mutations. This reduces the fitness of individuals at the wave front, and is described as an expansion load (see also: mutation load). These mutations can limit the rate of range expansion and, in the absence of effective recombination and natural selection which would remove such mutations, can have severe and persisting negative effects on populations. Local adaptation Invasive species may encounter environments which differ from those experienced either in their natural range or where they were first introduced. In these environments natural selection can act on these introduced populations, provided that there is sufficient genetic variation present in the population, which may lead to local adaptation. Such adaptation can facilitate both the establishment and spread of an introduced species. Local adaptation can, however, be inhibited by genetic admixture between populations. Admixture can result in hybrid breakdown by breaking up beneficial gene linkages and introducing maladapted alleles. Admixture can also facilitate species introductions by increasing genetic variation, thereby limiting the cost of inbreeding in small populations. Through heterosis, the increased quality of hybrid offspring, admixture has also been shown to increase the vigour of introduced populations of common yellow monkeyflower. Hybridization Hybridization broadly refers to breeding between individuals from genetically isolated populations, and may therefore be within a species (intraspecific) or between species (interspecific). When offspring are distinct from either parent, hybridization can be a source of evolutionary novelty. Hybridization can also lead to gene flow between populations or species through the mechanism of introgression. Hybridization and its contribution to evolution was a subject of interest for G. Ledyard Stebbins, who noted in a 1959 review that the introduction of European species of the genus Tragopogon to North America had led to hybrid speciation; this example was also discussed by Herbert G. Baker in The Genetics of Colonizing Species.
The first systematic review of the role of invasive plant species in interspecific hybridization appeared in 1992, and the phenomenon has also been explored in fish and aquatic invertebrates. Hybridization may increase the invasiveness of introduced species, either by introducing genetic variation, heterosis, or by creating novel genotypes which perform better in a given environment. Gene flow between introduced and native species can also result in the loss of biodiversity through genetic pollution. Evolutionary responses of native species to invaders Because biological invasions can have a profound impact on the invaded environment, it is expected that the arrival of invasive species creates new selective pressures on native organisms, typically through competitive or predatory interactions. Through adaptive evolution, species in affected ecological communities could evolve to tolerate invasive species. This means that biological invasions potentially have both ecological and evolutionary consequences for native species. However, many studies have failed to detect an adaptive response of native species to ecological disruptions. The ecologists Jennifer Lau and Casey terHorst have pointed to this absence of an evolutionary response as an important consideration for understanding how invasive species disrupt ecological communities and the multiple challenges faced by native populations. See also Invasive species Introduced species Colonisation (biology) Population genetics Population genomics Glossary of invasion biology terms Invader potential Indigenous (ecology) Conservation genetics Ecological genetics References Further reading Barrett, Spencer C.H.; Colautti, Robert I.; Dlugosch, Katrina M.; Rieseberg, Loren H., eds. (2016). Invasion genetics: The Baker and Stebbins legacy. Hoboken, NJ: John Wiley & Sons. . WorldCat External links Spencer Barrett on the Foundation of Invasion Genetics (YouTube link) Invasive animal species Evolutionary biology terminology
Invasion genetics
Biology
2,762
1,300,790
https://en.wikipedia.org/wiki/Abarelix
Abarelix, sold under the brand name Plenaxis, is an injectable gonadotropin-releasing hormone antagonist (GnRH antagonist) which is marketed in Germany and the Netherlands. It is primarily used in oncology to reduce the amount of testosterone made in patients with advanced symptomatic prostate cancer for which no other treatment options are available. It was originally marketed by Praecis Pharmaceuticals as Plenaxis, and is now marketed by Speciality European Pharma in Germany after receiving a marketing authorization in 2005. The drug was introduced in the United States in 2003, but was discontinued in this country in May 2005 due to poor sales and a higher-than-expected incidence of severe allergic reactions. It remains marketed in Germany and the Netherlands however. See also Gonadotropin-releasing hormone receptor § Antagonists References GnRH antagonists Peptide therapeutics
Abarelix
Chemistry
184
3,127,042
https://en.wikipedia.org/wiki/Behavioral%20medicine
Behavioral medicine is concerned with the integration of knowledge in the biological, behavioral, psychological, and social sciences relevant to health and illness. These sciences include epidemiology, anthropology, sociology, psychology, physiology, pharmacology, nutrition, neuroanatomy, endocrinology, and immunology. The term is often used interchangeably, but incorrectly, with health psychology. The practice of behavioral medicine encompasses health psychology, but also includes applied psychophysiological therapies such as biofeedback, hypnosis, and bio-behavioral therapy of physical disorders, aspects of occupational therapy, rehabilitation medicine, and physiatry, as well as preventive medicine. In contrast, health psychology represents a stronger emphasis specifically on psychology's role in both behavioral medicine and behavioral health. Behavioral medicine is especially relevant today, when many health problems are viewed as primarily behavioral in nature, as opposed to medical. For example, smoking, leading a sedentary lifestyle, and alcohol use disorder or other substance use disorder are all factors in the leading causes of death in modern society. Practitioners of behavioral medicine include appropriately qualified nurses, social workers, psychologists, and physicians (including medical students and residents), and these professionals often act as behavioral change agents, even in their medical roles. Behavioral medicine uses the biopsychosocial model of illness instead of the medical model. This model incorporates biological, psychological, and social elements into its approach to disease instead of relying only on a biological deviation from the standard or normal functioning. Origins and history Writings from the earliest civilizations have alluded to the relationship between mind and body, the fundamental concept underlying behavioral medicine. The field of psychosomatic medicine is among its academic forebears, although it is now obsolete as a psychological discipline. In the form in which it is generally understood today, the field dates back to the 1970s. The earliest uses of the term were in the title of a book by Lee Birk (Biofeedback: Behavioral Medicine), published in 1973; and in the names of two clinical research units, the Center for Behavioral Medicine, founded by Ovide F. Pomerleau and John Paul Brady at the University of Pennsylvania in 1973, and the Laboratory for the Study of Behavioral Medicine, founded by William Stewart Agras at Stanford University in 1974. Subsequently, the field burgeoned, and inquiry into behavioral, physiological, and biochemical interactions with health and illness gained prominence under the rubric of behavioral medicine. In 1976, in recognition of this trend, the National Institutes of Health created the Behavioral Medicine Study Section to encourage and facilitate collaborative research across disciplines. The 1977 Yale Conference on Behavioral Medicine and a meeting of the National Academy of Sciences were explicitly aimed at defining and delineating the field in the hopes of helping to guide future research. Based on deliberations at the Yale conference, Schwartz and Weiss proposed the biopsychosocial model, emphasizing the new field's interdisciplinary roots and calling for the integration of knowledge and techniques broadly derived from behavioral and biomedical science.
Shortly after, Pomerleau and Brady published a book entitled Behavioral Medicine: Theory and Practice, in which they offered an alternative definition focusing more closely on the particular contribution of the experimental analysis of behavior in shaping the field. Additional developments during this period of growth and ferment included the establishment of learned societies (the Society of Behavioral Medicine and the Academy of Behavioral Medicine Research, both in 1978) and of journals (the Journal of Behavioral Medicine in 1977 and the Annals of Behavioral Medicine in 1979). In 1990, at the International Congress of Behavioral Medicine in Sweden, the International Society of Behavioral Medicine was founded to provide, through its many daughter societies and through its own peer-reviewed journal (the International Journal of Behavioral Medicine), an international focus for professional and academic development. Areas of study Behavior-related illnesses Many chronic diseases have a behavioral component, but the following illnesses can be significantly and directly modified by behavior, as opposed to using pharmacological treatment alone: Substance use: many studies demonstrate that medication is most effective when combined with behavioral intervention Hypertension: deliberate attempts to reduce stress can also reduce high blood pressure Insomnia: cognitive and behavioural interventions are recommended as a first line treatment for insomnia Treatment adherence and compliance Medications work best for controlling chronic illness when the patients use them as prescribed and do not deviate from the physician's instructions. This is true for both physiological and mental illnesses. However, in order for the patient to adhere to a treatment regimen, the physician must provide accurate information about the regimen, an adequate explanation of what the patient must do, and should also offer more frequent reinforcement of appropriate compliance. Patients with strong social support systems, particularly through marriages and families, typically exhibit better compliance with their treatment regimen. Examples: telemonitoring through telephone or video conference with the patient case management by using a range of medical professionals to consistently follow up with the patient Doctor-patient relationship It is important for doctors to make meaningful connections and relationships with their patients, instead of simply having interactions with them, which often occurs in a system that relies heavily on specialist care. For this reason, behavioral medicine emphasizes honest and clear communication between the doctor and the patient in the successful treatment of any illness, and also in the maintenance of an optimal level of physical and mental health. Obstacles to effective communication include power dynamics, vulnerability, and feelings of helplessness or fear. Doctors and other healthcare providers also struggle with interviewing difficult or uncooperative patients, as well as giving undesirable medical news to patients and their families. The field has placed increasing emphasis on working towards sharing the power in the relationship, as well as training the doctor to empower the patient to make their own behavioral changes. More recently, behavioral medicine has expanded its area of practice to interventions with providers of medical services, in recognition of the fact that the behavior of providers can have a determinative effect on patient outcomes. 
Objectives include maintaining professional conduct, productivity, and altruism, in addition to preventing burnout, depression, and job dissatisfaction among practitioners. Learning principles, models and theories Behavioral medicine includes understanding the clinical applications of learning principles such as reinforcement, avoidance, generalisation, and discrimination, and of cognitive-social learning models as well, such as the cognitive-social learning model of relapse prevention by Marlatt. Learning theory Learning can be defined as a relatively permanent change in a behavioral tendency occurring as a result of reinforced practice. A behavior is significantly more likely to occur again in the future as a result of learning, making learning important in acquiring maladaptive physiological responses that can lead to psychosomatic disease. This also implies that patients can change their unhealthy behaviors in order to improve their diagnoses or health, especially in treating addictions and phobias. The three primary theories of learning are: classical conditioning operant conditioning modeling Other areas include correcting perceptual bias in diagnostic behavior; remediating clinicians' attitudes that impinge negatively upon patient treatment; and addressing clinicians' behaviors that promote disease development and illness maintenance in patients, whether within a malpractice framework or not. Our modern-day culture involves many acute microstressors that add up to a large amount of chronic stress over time, leading to disease and illness. According to Hans Selye, the body's stress response is designed to heal and involves three phases of his General Adaptation Syndrome: alarm, resistance, and exhaustion. Applications An example of how the biopsychosocial model that behavioral medicine utilizes can be applied is chronic pain management. Before this model was adopted, physicians were unable to explain why certain patients did not experience pain despite experiencing significant tissue damage, which led them to see the purely biomedical model of disease as inadequate. However, increasing damage to body parts and tissues is generally associated with increasing levels of pain. Doctors started including a cognitive component of pain, leading to the gate control theory and the discovery of the placebo effect. Psychological factors that affect pain include self-efficacy, anxiety, fear, abuse, life stressors, and pain catastrophizing, which is particularly responsive to behavioral interventions. In addition, one's genetic predisposition to psychological distress and pain sensitivity will affect pain management. Finally, social factors such as socioeconomic status, race, and ethnicity also play a role in the experience of pain. Behavioral medicine involves examining all of the many factors associated with illness, instead of just the biomedical aspect, and heals disease by including a component of behavioral change on the part of the patient. In a review published in 2011, Fisher et al. illustrate how a behavioral medicine approach can be applied to a number of common diseases and risk factors, such as cardiovascular disease/diabetes, cancer, HIV/AIDS, tobacco use, poor diet, physical inactivity, and excessive alcohol consumption. Evidence indicates that behavioral interventions are cost-effective and add to quality of life. Importantly, behavioral interventions can have broad effects and benefits for prevention, disease management, and well-being across the life span.
Journals Annals of Behavioral Medicine International Journal of Behavioral Medicine Journal of Behavior Analysis of Sports, Health, Fitness and Behavioral Medicine Journal of Behavioral Health and Medicine Journal of Behavioral Medicine Organizations Association for Behavior Analysis International's Behavioral Medicine Special Interest Group Society of Behavioral Medicine International Society of Behavioral Medicine See also Health psychology Organizational psychology Medical psychology Occupational health psychology References Epidemiology Health Interdisciplinary branches of psychology Neuroanatomy
Behavioral medicine
Environmental_science
1,930
2,669,524
https://en.wikipedia.org/wiki/Dulmage%E2%80%93Mendelsohn%20decomposition
In graph theory, the Dulmage–Mendelsohn decomposition is a partition of the vertices of a bipartite graph into subsets, with the property that two adjacent vertices belong to the same subset if and only if they are paired with each other in a perfect matching of the graph. It is named after A. L. Dulmage and Nathan Mendelsohn, who published it in 1958. A generalization to any graph is the Edmonds–Gallai decomposition, using the Blossom algorithm. Construction The Dulmage–Mendelsohn decomposition can be constructed as follows. Let G be a bipartite graph, M a maximum-cardinality matching in G, and V0 the set of vertices of G unmatched by M (the "free vertices"). Then G can be partitioned into three parts: E - the even vertices - the vertices reachable from V0 by an M-alternating path of even length. O - the odd vertices - the vertices reachable from V0 by an M-alternating path of odd length. U - the unreachable vertices - the vertices unreachable from V0 by an M-alternating path. An illustration is shown on the left. The bold lines are the edges of M. The weak lines are other edges of G. The red dots are the vertices of V0. Note that V0 is contained in E, since it is reachable from V0 by a path of length 0. Based on this decomposition, the edges in G can be partitioned into six parts according to their endpoints: E-U, E-E, O-O, O-U, E-O, U-U. This decomposition has the following properties: The sets E, O, U are pairwise-disjoint. Proof: U is disjoint from E and O by definition. To prove that E and O are disjoint, suppose that some vertex v has both an even-length alternating path to an unmatched vertex u1, and an odd-length alternating path to an unmatched vertex u2. Then, concatenating these two paths yields an augmenting path from u1 through v to u2. But this contradicts the assumption that M is a maximum-cardinality matching. The sets E, O, U do not depend on the maximum-cardinality matching M (i.e., any maximum-cardinality matching defines exactly the same decomposition). G contains only O-O, O-U, E-O and U-U edges. Any maximum-cardinality matching in G contains only E-O and U-U edges. Any maximum-cardinality matching in G saturates all vertices in O and all vertices in U. The size of a maximum-cardinality matching in G is |O| + |U|/2. If G has a perfect matching, then all vertices of G are in U. Alternative definition The coarse decomposition Let G = (X+Y,E) be a bipartite graph, and let D be the set of vertices in G that are not matched in at least one maximum matching of G. Then D is necessarily an independent set. So G can be partitioned into three parts: The vertices in D ∩ X and their neighbors; The vertices in D ∩ Y and their neighbors; The remaining vertices. Every maximum matching in G consists of matchings in the first and second part that match all neighbors of D, together with a perfect matching of the remaining vertices. If G has a perfect matching, then the third set contains all vertices of G. The fine decomposition The third set of vertices in the coarse decomposition (or all vertices in a graph with a perfect matching) may additionally be partitioned into subsets by the following steps: Find a perfect matching of G. Form a directed graph H whose vertices are the matched edges in G. For each unmatched edge (x,y) in G, add a directed edge in H from the matched edge of x to the matched edge of y. Find the strongly connected components of the resulting graph.
For each component of H, form a subset of the Dulmage–Mendelsohn decomposition consisting of the vertices in G that are endpoints of edges in the component. To see that this subdivision into subsets characterizes the edges that belong to perfect matchings, suppose that two vertices x and y in G belong to the same subset of the decomposition, but are not already matched by the initial perfect matching. Then there exists a strongly connected component in H containing edge x,y. This edge must belong to a simple cycle in H (by the definition of strong connectivity) which necessarily corresponds to an alternating cycle in G (a cycle whose edges alternate between matched and unmatched edges). This alternating cycle may be used to modify the initial perfect matching to produce a new matching containing edge x,y. An edge x,y of the graph G belongs to all perfect matchings of G, if and only if x and y are the only members of their set in the decomposition. Such an edge exists if and only if the matching preclusion number of the graph is one. Core As another component of the Dulmage–Mendelsohn decomposition, Dulmage and Mendelsohn defined the core of a graph to be the union of its maximum matchings. However, this concept should be distinguished from the core in the sense of graph homomorphisms, and from the k-core formed by the removal of low-degree vertices. Applications This decomposition has been used to partition meshes in finite element analysis, and to determine specified, underspecified and overspecified equations in systems of nonlinear equations. It was also used for an algorithm for rank-maximal matching. Asymmetric variant In there is a different decomposition of a bipartite graph, which is asymmetric - it distinguishes between vertices in one side of the graph and the vertices on the other side. It can be used to find a maximum-cardinality envy-free matching in an unweighted bipartite graph, as well as a minimum-cost maximum-cardinality matching in a weighted bipartite graph. References External links A good explanation of its application to systems of nonlinear equations is available in this paper: An open source implementation of the algorithm is available as a part of the sparse-matrix library: SPOOLES Graph-theoretical aspects of constraint solving in the SST project: Graph algorithms Matching (graph theory)
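The even/odd/unreachable construction described above amounts to a breadth-first search over M-alternating paths starting at the free vertices. The Python sketch below is a minimal illustration of that labelling under the assumption that a maximum matching is already available; the example graph, the matching, and the function name are invented for the illustration and are not taken from any particular library:

```python
from collections import deque

def eou_decomposition(adj, matching):
    """Split the vertices of a bipartite graph into Even, Odd and Unreachable.

    `adj` maps every vertex to its neighbours; `matching` maps each matched
    vertex to its partner (both directions present) and omits free vertices.
    """
    even = {v for v in adj if v not in matching}   # free vertices: path of length 0
    odd = set()
    queue = deque(even)
    while queue:
        v = queue.popleft()                        # v is even
        for w in adj[v]:                           # leave v along a non-matching edge
            if w in even or w in odd:
                continue
            odd.add(w)                             # reached by an odd-length path
            u = matching[w]                        # continue along the matching edge
            if u not in even:
                even.add(u)                        # reached by an even-length path
                queue.append(u)
    unreachable = set(adj) - even - odd
    return even, odd, unreachable

# Example: x3 is left unmatched by the maximum matching {x1-y1, x2-y2}.
adj = {"x1": ["y1"], "x2": ["y1", "y2"], "x3": ["y2"],
       "y1": ["x1", "x2"], "y2": ["x2", "x3"]}
matching = {"x1": "y1", "y1": "x1", "x2": "y2", "y2": "x2"}
print(eou_decomposition(adj, matching))
# E = {x1, x2, x3}, O = {y1, y2}, U = set(); every matching edge joins E to O.
```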
Dulmage–Mendelsohn decomposition
Mathematics
1,355
282,450
https://en.wikipedia.org/wiki/3G
3G is the third generation of cellular network technology, representing a significant advancement over 2G, particularly in terms of data transfer speeds and mobile internet capabilities. While 2G networks, including technologies such as GPRS and EDGE, supported limited data services, 3G introduced significantly higher-speed mobile internet, improved voice quality, and enhanced multimedia capabilities. Although 3G enabled faster data speeds compared to 2G, it provided moderate internet speeds suitable for general browsing and multimedia content, but not for high-definition or data-intensive applications. Based on the International Mobile Telecommunications-2000 (IMT-2000) specifications established by the International Telecommunication Union (ITU), 3G supports a range of services, including voice telephony, mobile internet access, video calls, video streaming, and mobile TV. 3G telecommunication networks support services that provide an information transfer rate of at least 144 kbit/s. Later 3G releases, often referred to as 3.5G (HSPA) and 3.75G (HSPA+), introduced important improvements, enabling 3G networks to offer mobile broadband access with speeds ranging from several Mbit/s up to 42 Mbit/s. These updates improved the reliability and speed of internet browsing, video streaming, and online gaming, enhancing the overall user experience for smartphones and mobile modems (e.g., for laptops) in comparison to earlier 3G technologies. A new generation of cellular standards has emerged roughly every decade since the introduction of 1G systems in 1979. Each generation is defined by the introduction of new frequency bands, higher data rates, and transmission technologies that are not backward-compatible due to the need for significant changes in network architecture and infrastructure. The first commercial 3G networks were launched in mid-2001. It was later succeeded by 4G technology, which provided even higher data transfer rates and introduced advancements in network performance. Overview Several telecommunications companies marketed wireless mobile Internet services as 3G, indicating that the advertised service was provided over a 3G wireless network. However, 3G services have largely been supplanted in marketing by 4G and 5G services in most areas of the world. Services advertised as 3G are required to meet IMT-2000 technical standards, including standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, Third-generation mobile networks, or 3G, must maintain minimum consistent Internet speeds of 144 Kbps. However, many services advertised as 3G provide higher speed than the minimum technical requirements for a 3G service. Subsequent 3G releases, denoted 3.5G and 3.75G, provided mobile broadband access of several Mbit/s for smartphones and mobile modems in laptop computers. 3G branded standards: The UMTS (Universal Mobile Telecommunications System) system, standardized by 3GPP in 2001, was used in Europe, Japan, China (with a different radio interface) and other regions predominated by GSM (Global Systems for Mobile Communications) 2G system infrastructure. The cell phones are typically UMTS and GSM hybrids. Several radio interfaces are offered, sharing the same infrastructure: The original and most widespread radio interface is called W-CDMA (Wideband Code Division Multiple Access). The TD-SCDMA radio interface was commercialized in 2009 and only offered in China. 
The latest UMTS release, HSPA+, can provide peak data rates up to 56 Mbit/s in the downlink in theory (28 Mbit/s in existing services) and 22 Mbit/s in the uplink. The CDMA2000 system, first offered in 2002, standardized by 3GPP2, used especially in North America and South Korea, sharing infrastructure with the IS-95 2G standard. The cell phones are typically CDMA2000 and IS-95 hybrids. The latest release EVDO Rev. B offers peak rates of 14.7 Mbit/s downstream. The 3G systems and radio interfaces are based on spread spectrum radio transmission technology. While the GSM EDGE standard ("2.9G"), DECT cordless phones and Mobile WiMAX standards formally also fulfill the IMT-2000 requirements and are approved as 3G standards by ITU, these are typically not branded as 3G and are based on completely different technologies. The common standards complying with the IMT2000/3G standard are: EDGE, a revision by the 3GPP organization to the older 2G GSM based transmission methods, which utilizes the same switching nodes, base station sites, and frequencies as GPRS, but includes a new base station and cellphone RF circuits. It is based on the three times as efficient 8PSK modulation scheme as a supplement to the original GMSK modulation scheme. EDGE is still used extensively due to its ease of upgrade from existing 2G GSM infrastructure and cell phones. EDGE combined with the GPRS 2.5G technology is called EGPRS, and allows peak data rates in the order of 200 kbit/s, just like the original UMTS WCDMA versions and thus formally fulfill the IMT2000 requirements on 3G systems. However, in practice, EDGE is seldom marketed as a 3G system, but a 2.9G system. EDGE shows slightly better system spectral efficiency than the original UMTS and CDMA2000 systems, but it is difficult to reach much higher peak data rates due to the limited GSM spectral bandwidth of 200 kHz, and it is thus a dead end. EDGE was also a mode in the IS-136 TDMA system, no longer used. Evolved EDGE, the latest revision, has peaks of 1 Mbit/s downstream and 400 kbit/s upstream but is not commercially used. The Universal Mobile Telecommunications System, created and revised by the 3GPP. The family is a full revision from GSM in terms of encoding methods and hardware, although some GSM sites can be retrofitted to broadcast in the UMTS/W-CDMA format. W-CDMA is the most common deployment, commonly operated on the 2,100 MHz band. A few others use the 850, 900, and 1,900 MHz bands. HSPA is an amalgamation of several upgrades to the original W-CDMA standard and offers speeds of 14.4 Mbit/s down and 5.76 Mbit/s up. HSPA is backward-compatible and uses the same frequencies as W-CDMA. HSPA+, a further revision and upgrade of HSPA, can provide theoretical peak data rates up to 168 Mbit/s in the downlink and 22 Mbit/s in the uplink, using a combination of air interface improvements as well as multi-carrier HSPA and MIMO. Technically though, MIMO and DC-HSPA can be used without the "+" enhancements of HSPA+. The CDMA2000 system, or IS-2000, including CDMA2000 1x and CDMA2000 High Rate Packet Data (or EVDO), standardized by 3GPP2 (differing from the 3GPP), evolving from the original IS-95 CDMA system, is used especially in North America, China, India, Pakistan, Japan, South Korea, Southeast Asia, Europe, and Africa. CDMA2000 1x Rev. E has an increased voice capacity (by three times the original amount) compared to Rev. 0 EVDO Rev. B offers downstream peak rates of 14.7 Mbit/s while Rev. C enhanced existing and new terminal user experience. 
While DECT cordless phones and Mobile WiMAX standards formally also fulfill the IMT-2000 requirements, they are not usually considered due to their rarity and unsuitability for usage with mobile phones. Break-up of 3G systems The 3G (UMTS and CDMA2000) research and development projects started in 1992. In 1999, ITU approved five radio interfaces for IMT-2000 as a part of the ITU-R M.1457 Recommendation; WiMAX was added in 2007. There are evolutionary standards (EDGE and CDMA) that are backward-compatible extensions to pre-existing 2G networks as well as revolutionary standards that require all-new network hardware and frequency allocations. The cell phones use UMTS in combination with 2G GSM standards and bandwidths, but do not support EDGE. The latter group is the UMTS family, which consists of standards developed for IMT-2000, as well as the independently developed standards DECT and WiMAX, which were included because they fit the IMT-2000 definition. While EDGE fulfills the 3G specifications, most GSM/UMTS phones report EDGE ("2.75G") and UMTS ("3G") functionality. History 3G technology was the result of research and development work carried out by the International Telecommunication Union (ITU) in the early 1980s. 3G specifications and standards were developed in fifteen years. The technical specifications were made available to the public under the name IMT-2000. The communication spectrum between 400 MHz and 3 GHz was allocated for 3G. Both the government and communication companies approved the 3G standard. The first pre-commercial 3G network was launched by NTT DoCoMo in Japan in 1998, branded as FOMA. It was first available in May 2001 as a pre-release (test) of W-CDMA technology. The first commercial launch of 3G was also by NTT DoCoMo in Japan on 1 October 2001, although it was initially somewhat limited in scope; broader availability of the system was delayed by apparent concerns over its reliability. The first European pre-commercial network was a UMTS network on the Isle of Man by Manx Telecom, the operator then owned by British Telecom, and the first commercial network (also UMTS-based W-CDMA) in Europe was opened for business by Telenor in December 2001 with no commercial handsets and thus no paying customers. The first network to go commercially live was by SK Telecom in South Korea on the CDMA-based 1xEV-DO technology in January 2002. By May 2002, the second South Korean 3G network was opened by KT on EV-DO, and thus the South Koreans were the first to see competition among 3G operators. The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but the network provider later shut down operations. The second 3G network operator in the US was Verizon Wireless in July 2002, also on CDMA2000 1x EV-DO. AT&T Mobility also operated a true 3G UMTS network, having completed its upgrade of the 3G network to HSUPA. The first commercial United Kingdom 3G network was started by Hutchison Telecom, which was originally behind Orange S.A. In 2003, it announced the first commercial third-generation, or 3G, mobile phone network in the UK. The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia, by m.Net Corporation in February 2002 using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications, branded as Three or "3", in June 2003. 
In India, on 11 December 2008, the first 3G mobile and internet services were launched by a state-owned company, Mahanagar Telecom Nigam Limited (MTNL), within the metropolitan cities of Delhi and Mumbai. After MTNL, another state-owned company, Bharat Sanchar Nigam Limited (BSNL), began deploying the 3G networks country-wide. Emtel launched the first 3G network in Africa. Adoption Japan was one of the first countries to adopt 3G, the reason being the process of 3G spectrum allocation, which in Japan was awarded without much upfront cost. The frequency spectrum was allocated in the US and Europe based on auctioning, thereby requiring a huge initial investment for any company wishing to provide 3G services. European companies collectively paid over 100 billion dollars in their spectrum auctions. Nepal Telecom was the first operator in southern Asia to adopt 3G service. However, 3G was relatively slow to be adopted in Nepal. In some instances, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. Other countries' delays were due to the expenses of upgrading transmission hardware, especially for UMTS, whose deployment required the replacement of most broadcast towers. Due to these issues and difficulties with deployment, many carriers delayed acquiring these updated capabilities or could not acquire them at all. In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada, and the US, telecommunication companies use W-CDMA technology with the support of around 100 terminal designs to operate 3G mobile networks. The roll-out of 3G networks was delayed by the enormous costs of additional spectrum licensing fees in some countries. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses and sealed bid auctions, and initial excitement over 3G's potential. This led to a telecoms crash that ran concurrently with similar crashes in the fibre-optic and dot.com fields. The 3G standard is perhaps well known because of a massive expansion of the mobile communications market post-2G and advances of the consumer mobile phone. An especially notable development during this time was the smartphone (for example, the iPhone and the Android family), combining the abilities of a PDA with a mobile phone and leading to widespread demand for mobile internet connectivity. 3G also introduced the term "mobile broadband", because its speed and capability made it a viable alternative for internet browsing, and USB modems connecting to 3G networks, and later 4G, became increasingly common. Market penetration By June 2007, the 200 millionth 3G subscriber had been connected, of which 10 million were in Nepal and 8.2 million in India. These 200 million subscribers represented only 6.7% of the 3 billion mobile phone subscriptions worldwide. (When counting CDMA2000 1x RTT customers—max bitrate 72% of the 200 kbit/s which defines 3G—the total size of the nearly-3G subscriber base was 475 million as of June 2007, which was 15.8% of all subscribers worldwide.) In the countries where 3G was launched first – Japan and South Korea – 3G penetration is over 70%. In Europe, the leading country for 3G penetration is Italy, with a third of its subscribers migrated to 3G. 
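The penetration percentages quoted above follow directly from the subscriber counts in this section; the short sketch below is an illustrative addition (all input numbers are taken from the figures quoted above) that simply reproduces the arithmetic.

```python
# Quick check of the penetration figures quoted above (inputs are the article's numbers).
subscribers_3g = 200e6            # 3G subscribers connected by June 2007
subscriptions_worldwide = 3e9     # mobile phone subscriptions worldwide

print(f"3G share of all subscriptions: {subscribers_3g / subscriptions_worldwide:.1%}")  # ~6.7%

# CDMA2000 1x RTT is described as reaching 72% of the 200 kbit/s rate that defines 3G.
print(f"CDMA2000 1x RTT max bitrate: {0.72 * 200:.0f} kbit/s")  # 144 kbit/s

nearly_3g = 475e6                 # nearly-3G subscriber base, June 2007
print(f"Nearly-3G share of all subscriptions: {nearly_3g / subscriptions_worldwide:.1%}")  # ~15.8%
```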
Other leading countries for 3G use include Nepal, UK, Austria, Australia and Singapore at the 32% migration level. According to ITU estimates, as of Q4 2012 there were 2096 million active mobile-broadband subscribers worldwide out of a total of 6835 million subscribers—this is just over 30%. About half the mobile-broadband subscriptions are for subscribers in developed nations, 934 million out of 1600 million total, well over 50%. Note, however, that there is a distinction between a phone with mobile-broadband connectivity and a smart phone with a large display and so on. According to the ITU and informatandm.com, the US has 321 million mobile subscriptions, including 256 million that are 3G or 4G, which is both 80% of the subscriber base and 80% of the US population; according to ComScore, just a year earlier, in Q4 2011, only about 42% of people surveyed in the US reported that they owned a smart phone. In Japan, 3G penetration was similar at about 81%, but smart phone ownership was lower at about 17%. In China, there were 486.5 million 3G subscribers in June 2014, in a population of 1,385,566,537 (2013 UN estimate). Decline and decommissions Since the increasing adoption of 4G networks across the globe, 3G use has been in decline. Several operators around the world have already shut down, or are in the process of shutting down, their 3G networks (see the Phase-out section below). In several places, 3G is being shut down while its older predecessor 2G is being kept in operation; Vodafone Europe is doing this, citing 2G's usefulness as a low-power fallback. EE in the UK plans to switch off its 3G network in early 2024. In the US, Verizon shut down its 3G services on 31 December 2022; T-Mobile shut down Sprint's legacy networks and then shut down its own main networks on 1 July 2022; and AT&T did so on 22 February 2022. Currently, 3G is declining in availability and support around the world, and technologies that depend on 3G are becoming inoperable in many places. For example, the European Union plans to ensure that member countries maintain 2G networks as a fallback, so 3G devices that are backwards compatible with 2G frequencies can continue to be used. However, in countries that also plan to decommission 2G networks, or have already done so, such as the United States and Singapore, devices that support only 3G (even if backwards compatible with 2G) are becoming inoperable. As of February 2022, less than 1% of cell phone customers in the United States used 3G; AT&T offered free replacement devices to some customers in the run-up to its shutdown. Patents It has been estimated that there are almost 8,000 patents declared essential (FRAND) related to the 483 technical specifications which form the 3GPP and 3GPP2 standards. Twelve companies accounted in 2004 for 90% of the patents (Qualcomm, Ericsson, Nokia, Motorola, Philips, NTT DoCoMo, Siemens, Mitsubishi, Fujitsu, Hitachi, InterDigital, and Matsushita). Even then, some patents essential to 3G might not have been declared by their patent holders. It is believed that Nortel and Lucent have undisclosed patents essential to these standards. Furthermore, the existing 3G Patent Platform Partnership Patent pool has little impact on FRAND protection because it excludes the four largest patent owners for 3G. Features Data rates ITU has not provided a clear definition of the data rate that users can expect from 3G equipment or providers. Thus, users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. 
While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 348 kbit/s in a moving vehicle," the ITU does not actually clearly specify minimum required rates, nor required average rates, nor what modes of the interfaces qualify as 3G, so various data rates are sold as '3G' in the market. In a market implementation, 3G downlink data speeds defined by telecom service providers vary depending on the underlying technology deployed: up to 384 kbit/s for UMTS (W-CDMA), up to 7.2 Mbit/s for HSPA, and a theoretical maximum of 21.1 Mbit/s for HSPA+ and 42.2 Mbit/s for DC-HSPA+ (technically 3.5G, but usually clubbed under the tradename of 3G). Security 3G networks offer greater security than their 2G predecessors. By allowing the UE (User Equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator. 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher. However, a number of serious weaknesses in the KASUMI cipher have been identified. In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property. Applications of 3G The bandwidth and location capabilities introduced by 3G networks enabled a wide range of applications that were previously impractical or unavailable on 2G networks. Among the most significant advancements was the ability to perform data-intensive tasks, such as browsing the internet seamlessly while on the move, as well as engaging in other activities that benefited from faster data speeds and enhanced reliability. Beyond personal communication, 3G networks supported applications in various fields, including medical devices, fire alarms, and ankle monitors. This versatility marked a significant milestone in cellular communications, as 3G became the first network to enable such a broad range of use cases. By expanding its functionality beyond traditional mobile phone usage, 3G set the stage for the integration of cellular networks into a wide array of technologies and services, paving the way for further advancements with subsequent generations of mobile networks. Evolution Both 3GPP and 3GPP2 are working on extensions to 3G standards that are based on an all-IP network infrastructure and use advanced wireless technologies such as MIMO. These specifications already display features characteristic of IMT-Advanced (4G), the successor of 3G. However, falling short of the bandwidth requirements for 4G (which are 1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G. 3GPP plans to meet the 4G goals with LTE Advanced, whereas Qualcomm has halted UMB development in favour of the LTE family. On 14 December 2009, TeliaSonera announced in an official press release that "We are very proud to be the first operator in the world to offer our customers 4G services." With the launch of its LTE network, it initially offered pre-4G (or "beyond 3G") services in Stockholm, Sweden, and Oslo, Norway. 
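To give a sense of what the downlink figures listed under Data rates above mean in practice, the short sketch below is an illustrative addition: the 5 MB file size is an arbitrary choice, and the rates are the nominal peaks quoted in the article (plus the roughly 200 kbit/s EGPRS figure mentioned earlier), not typical real-world throughput.

```python
# Rough download-time comparison at the nominal downlink rates quoted above.
# Assumes an ideal link at the stated peak rate; real throughput is lower.
FILE_MB = 5                      # illustrative file size (megabytes)
file_bits = FILE_MB * 8e6        # using 1 MB = 10**6 bytes for simplicity

rates_mbps = {
    "EGPRS (EDGE, ~2.9G)": 0.2,
    "UMTS W-CDMA": 0.384,
    "HSPA": 7.2,
    "HSPA+": 21.1,
    "DC-HSPA+": 42.2,
}

for name, mbps in rates_mbps.items():
    seconds = file_bits / (mbps * 1e6)
    print(f"{name:>22}: {seconds:7.1f} s")
```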
Phase-out See also List of mobile phone generations Mobile radio telephone (also known as "0G") Mobile broadband Wireless device radiation and health 1G 2G 4G 5G 6G LTE (telecommunication) References External links Computer-related introductions in 2001 Japanese inventions Mobile telecommunications Software-defined radio Videotelephony Wireless communication systems
3G
Technology,Engineering
4,617
46,520,227
https://en.wikipedia.org/wiki/HTC%20Desire%20820
The HTC Desire 820 is a mid-range Android-based smartphone designed and manufactured by HTC, released and available from November 2014. It is the successor of the HTC Desire 816. The smartphone features a 5.5-inch Super LCD 2 display with a 1280x720 resolution. Unlike the scratch-resistant glass panel of the HTC Desire 816, it has Corning Gorilla Glass 3. The Desire 820 supports full HD video recording and playback. It offers HTC Sense 6.5. The processor is a Qualcomm Snapdragon 615, a 64-bit ARM Cortex-A53 octa-core system on a chip (a 1.7 GHz quad-core cluster and a 1.0 GHz quad-core cluster). It is accompanied by 2 GB of RAM, 16 GB of internal memory, a 128 GB external memory capacity and a non-removable 2600 mAh battery. It also comes with a 13.0 MP rear-facing camera and an 8 MP front-facing camera. The smartphone came with Android KitKat version 4.4.2, and an update to Android Marshmallow was released in 2016. From March, the HTC Desire 820 can have the Sense 7 home screen, like that of the Desire 816. The Desire 820 also has Dot View-style orientation. Like the HTC Desire 816, the smartphone also supports aptX, with which users can experience CD-quality sound through compatible Bluetooth devices. References Android (operating system) devices Desire 816 Mobile phones introduced in 2014 Discontinued smartphones
HTC Desire 820
Technology
295
54,869,297
https://en.wikipedia.org/wiki/Tim%20Elliott%20%28geochemist%29
Timothy Richard Elliott is a professor at the University of Bristol. Education Timothy Elliott was educated at the University of Cambridge and the Open University, where he was awarded a PhD in 1991 for research investigating element fractionation in the petrogenesis of ocean island basalts. Career and research Elliott specialises in developing analytical approaches to yield novel isotopic means to reconstruct planetary histories. He has investigated the production of melt from the Earth's interior and the chemical consequences of the return of solidified melts to depth via the plate tectonic cycle. In particular, he has assessed elemental fluxes from descending plates and has highlighted how the rise of atmospheric oxygen has been remarkably recorded in the isotopic composition of the deep, solid Earth. His recent focus on planetary growth has identified the rapid formation of metallic cores, and has shown how bulk chemistry is notably modified during early accretion and distinctively embellished in its terminal stages. Awards and honours Elliott was awarded the Murchison Medal by the Geological Society of London in 2017 and elected a Fellow of the Royal Society (FRS) in 2017. References Fellows of the Royal Society Living people Year of birth missing (living people) Alumni of the University of Cambridge Alumni of the Open University Academics of the University of Bristol British geochemists Murchison Medal winners
Tim Elliott (geochemist)
Chemistry
261
31,512,398
https://en.wikipedia.org/wiki/Agaricus%20cupreobrunneus
Agaricus cupreobrunneus, commonly known as the copper mushroom or brown field mushroom, is an edible mushroom of the genus Agaricus. Description The brown cap is wide with flattened reddish-brown fibrils. The white stalk is tall and 1–2 cm wide. The spores are dark brown, elliptical, and smooth. Similar species A. cupreobrunneus is similar in general appearance to a number of other Agaricus species, especially to A. campestris. It also bears strong similarities to A. argenteus, A. augustus, A. hondensis, A. porphyrocephalus, and A. rutilescens. The only potential lookalikes of A. cupreobrunneus that are poisonous are yellow- or red-staining, or occur in much different habitats. Distribution and habitat Agaricus cupreobrunneus tends to fruit in disturbed areas and grassy places, such as lawns, pastures, and roadsides. It can fruit by itself, gregariously, or in fairy rings. Edibility A. cupreobrunneus is edible and good. Its taste is comparable to that of A. campestris, but it is comparatively lacking in texture. A. cupreobrunneus is not currently cultivated on a widespread basis, but is commonly eaten by collectors in the areas in which it grows. It does not contain the carcinogen agaritine, which appears in many other members of the genus Agaricus. See also List of Agaricus species References External links cupreobrunneus Edible fungi Fungi described in 1939 Fungus species
Agaricus cupreobrunneus
Biology
345
8,262,596
https://en.wikipedia.org/wiki/Takraf%20GmbH
TAKRAF Group (“TAKRAF”), is a global German industrial company. Through its brands, TAKRAF and DELKOR, the Group provides equipment, systems and services to the mining and associated industries. The TAKRAF portfolio covers high-capacity run-of-mine and bulk material handling from overburden removal, to raw material extraction, comminution, conveying, loading/unloading, processing, homogenizing, blending and storage to final loading for onward shipment. TAKRAF has supplied the most powerful conveying system in the world. History While the official foundation date of TAKRAF Group is given as 1948, its origins stretch back to 1725 when the Lauchhammer works for fabricating construction equipment were established, in then Prussia, together with the first blast furnace for producing iron. The 19th century saw, in 1809, the start of activities as a mechanical engineering company, as well as major milestones being contributed to Germany’s industrial history. These included, in 1874, the Lauchhammer works commencing high-rise and iron bridge construction in Oberhammer, and the start of fabrication of overburden and lignite mining equipment. The Lauchhammer works continued to contribute important firsts into the 20th century. The first overburden conveyor bridge was supplied in 1924, followed, two years later, in 1926, by fabrication of the first three bucket-wheel excavators. The years following the foundation of TAKRAF, then known as ABUS, in 1948 saw supply of the first 60 meter moveable overburden conveyor bridge for the Welzon Sued lignite mine in 1973. This was followed by the construction of four other similar conveyor bridges prior to 1991, when the world’s largest bridge complex, the 60 m overburden conveyor bridge in the Klettwitz-Nord opencast mine was commissioned. The bridge (Visitor Mine “F 60 Lichterfeld”) is open for visitors at the Internationale Bauausstellung Fürst-Pückler-Land (International Mining Exhibition Fürst-Pückler-Land). In 1990, the 500th bucket-wheel excavator was supplied. Large scale equipment developed for on-off heap leach technology for copper ore in 1994. In 1998, large, customized gearboxes for bucket-wheel drive gearboxes were developed while, in 2000, the longest conveyor for its time was supplied. In 2006, the first TAKRAF mobile conveyor bridges for stacking and reclaiming were developed and supplied. Renamed as TAKRAF GmbH in 2006, it was, 1 year later, acquired by the international Techint Group, operating within Tenova SpA under the brand name Tenova TAKRAF. TAKRAF is short for Tagebergbau-Ausrüstungen, Krane und Förderanlagen (surface mining equipment, cranes and conveying equipment). 2010 and the years following saw the development of the first TAKRAF mobile crushing plant, the double-roll crusher for oil sands and sizer technology, including the X-TREME class sizer range for hard rock processing. The integration, in 2014, of DELKOR, specialising in mineral processing, added liquid/solid separation and wet processing capabilities to the product line. A contract for the world’s most powerful conveyor system was awarded in 2015, a project which features the application of gearless drive conveyor (GDC) technology as well as a number of other innovations. In 2017, High Pressure Grinding Roll (HPGR) technology was developed for specific comminution requirements and, in 2020, the MAXGen mechanism was incorporated within the DELKOR BQR Flotation Cell to optimize metallurgical performance. 
Following a rebranding in 2020, simply calling itself TAKRAF Group, the entity has continued to establish important milestones across the globe, including supply of the first Dry Stack Tailings (DST) system, for the environmentally friendly and safe disposal of tailings, to Brazil. In the same year, 2021, TAKRAF supplied its first maintenance cart to a USA operation. The maintenance cart was provided as part of a crusher relocation project on a copper mine to facilitate the safe and efficient replacement of idlers on a steep 26 % downhill conveyor. In 2022, the first TAKRAF Sizers to be supplied into the India market were ordered by a major global steel producer, while 2023 saw the award of one of the largest single orders in the Group’s history. The contract covers the design, fabrication and delivery of an advanced and integrated IPCC (In-Pit Crushing & Conveying) and material handling system for the Simandou iron ore complex in Guinea. The entity is headquartered in Leipzig, Germany and has several representations worldwide, including global competence, fabrication and research & development facilities. Product and Service Centers TAKRAF Group has Product and Service Centers in Lauchhammer, Germany and Bengaluru, India, both of which include in-house fabrication facilities and are located close to major international transport routes to facilitate dispatch of fabricated components and equipment worldwide. With a covered production area of 4,000 m2 and crane capacities of up to 60 t, the Lauchhammer Center is dedicated to engineering, fabrication and testing of TAKRAF’s comminution products and high-value mechanical components for mining equipment including spare and wear parts. The in-house minerals laboratory is equipped to conduct material tests to determine factors such as crushability and abrasiveness, while technology to increase the wear life of critical parts is continuously developed. The facilities are certified for compliance with relevant ISO quality (9001:2015; 14001:2015) and ISO 45001:2018 occupational safety, health and environmental protection standards. The Bengaluru Center focuses on engineering and fabrication of DELKOR liquid/solid separation products, as well as a variety of metal and mineral processing equipment. The facility comprises more than 10,000 m2 of covered space and is fully equipped with cranes ranging from 2 t to 25 t. The facility is certified for ISO 9001:2015 compliance. Projects Utkal flight conveyor A contract for an approximately 19 km overland conveyor system was awarded for the greenfield Utkal Alumina project in Tikri, Raigada. It included the longest single flight conveyor system to be installed to date within Indian territory. The overland conveyor system transports bauxite from the mines to a 4.5 mtpa alumina plant, and traverses highly undulating topography over almost its entire route. As a result of the topography and due to the conveyor length, the conveyors were designed with head and tail drives, and multiple, very tight compound horizontal and vertical curves. With an installed power of 6 x 850 kW and 2 x 850 kW, the conveyor system features 6 drives at the tail end and 4 at the head end on the longer conveyor, while the shorter conveyor has 2 drives at the head end only. Each conveyor features a fail-safe hydraulic disc brake at the tail end. A take-up winch with capstan brake arrangement has been provided at the head end of both conveyors. 
The intermediate transfer point between the 2 conveyors is located in hilly terrain and, since the 4 head end drives of the longer conveyor are also located here, the conveyor drive and take-up area are mounted on a portal steel structure. These lightweight but high strength structures provide the design flexibility to accommodate the terrain. To facilitate maintenance, approach roads and a mine road were made available along the entire conveyor length, with cage ladders provided on the elevated structures enabling ease of access. Flotation Technology The new generation DELKOR BQR Flotation Cell, which is equipped with the proprietary MAXGen mechanism, was developed to enhance metallurgical performance with a view to increasing the sustainable recovery of minerals, combined with greater ease of maintenance and lower cost of ownership. DELKOR BQR cells are used in roughing, scavenging, cleaning and re-cleaning applications to process copper, zinc, Platinum Group Metals (PGMs), phosphates, graphite, slag and effluents. In its first commercial application, the cell was applied to maximize limestone recovery for one of India's leading manufacturers and suppliers of cement, enabling some 50% recovery of the limestone from the tailings. Following the success of this first commercial application, the cell is being applied across a range of commodities. Example applications include processing fluorspar at a Spanish operation and recovering iron ore at plants in Honduras and South Africa, as well as installations at two gold mines and a nickel restart project in Australia. Maintenance As a high-risk industry, mining places considerable focus on maintenance of equipment to help decrease incidents and fatalities, as well as to improve efficiency, productivity and performance. However, maintenance work itself is prone to risk, requiring workers to take special care, resulting in operations believing that attention to safety detracts from the efficiency of the maintenance operation – in fact, that safety and efficiency are inherently in conflict with one another. But, as demonstrated by TAKRAF Group's holistic and intelligent maintenance philosophy, this is not necessarily true, and a smart approach to maintenance will not only cater to both objectives but actually enable them to complement and enhance one another. In-pit crushing and conveying (IPCC) systems TAKRAF's IPCC system comprises in-pit crushing stations, incorporating the TAKRAF Sizer and roll crusher equipment, connected to a network of conveyors and spreaders or stackers, with each component designed for ease of maintainability. Maintenance cart Conducting maintenance on belt conveyors involves the replacement of worn or damaged idlers, work that is especially challenging on the steep slopes, in tunnels or on the elevated structures so typical of mining sites. TAKRAF's maintenance cart was therefore developed to protect the safety of personnel and for the efficient replacement of idlers by being able to access any location along a conveyor belt. The replacement of idlers in both the top and return strand of a belt conveyor is possible in less than 15 minutes, as the cart is fitted with a belt lifting device that lifts the belt away from the idler to be replaced. A maintenance cart at a major copper project in North America services a 4,350 m overland conveyor with more than 13,000 rolls, of which a steep, 26 % decline section, 1,250 m in length, boasts more than 3,700 rolls. 
TAKRAF Automatic Belt Training System (ABTS) Tube (also referred to as pipe) conveyors are prone to belt twisting, which, in a worst-case scenario, leads to conveyor collapse. In addition, the belt overlap, where the tube belt opens to discharge the material, needs to be precisely controlled and, if necessary, adjusted for accurate and efficient operation. As a result, TAKRAF developed the ABTS, a patented measurement, control and training device for ensuring the correct overlap position at the discharge area of a tube conveyor. The system has been fitted to a variety of global TAKRAF tube conveyor installations. The ABTS automatically determines the belt overlap position via ultrasonic sensors. If the overlap exceeds the tolerance limit, servomotors are activated that rotate the tube profile through individual idlers into the desired position via targeted tilting adjustments. Known Products Type Es 3750 bucket chain excavator Type ERs 500 bucket chain excavator Type ERs (K) 800 bucket chain excavator Type SRs 8000 bucket-wheel excavator Type SRs 2000 bucket-wheel excavator Type SRs(H) 1050.23/2.0 compact bucket-wheel excavator Overburden Conveyor Bridge F60 See also Overburden Conveyor Bridge F60 Lauchhammer works References External links TAKRAF Group website TAKRAF Group YouTube channel TAKRAF Group LinkedIn page TAKRAF Group X page Manufacturing companies established in 1958 1958 establishments in East Germany Companies of East Germany MAN SE Mining equipment companies
Takraf GmbH
Engineering
2,443
73,138,924
https://en.wikipedia.org/wiki/Neostigmine/glycopyrronium%20bromide
Neostigmine/glycopyrronium bromide, sold under the brand name Prevduo, is a fixed-dose combination medication used for the reversal of the effects of non-depolarizing neuromuscular blocking agents after surgery. It contains neostigmine as the methylsulfate, a cholinesterase inhibitor, and glycopyrronium bromide, an antimuscarinic agent. Neostigmine/glycopyrronium bromide was approved for medical use in the United Kingdom in 2007, and in the United States in February 2023. Medical uses Neostigmine/glycopyrronium bromide is indicated for the reversal of the effects of non-depolarizing neuromuscular blocking agents after surgery, while decreasing the peripheral muscarinic effects (e.g., bradycardia and excessive secretions) associated with cholinesterase inhibition following non-depolarizing neuromuscular blocking agent reversal administration. References Acetylcholinesterase inhibitors Combination drugs Muscarinic antagonists
Neostigmine/glycopyrronium bromide
Chemistry
229
353,673
https://en.wikipedia.org/wiki/FOX%20proteins
FOX (forkhead box) proteins are a family of transcription factors that play important roles in regulating the expression of genes involved in cell growth, proliferation, differentiation, and longevity. Many FOX proteins are important to embryonic development. FOX proteins also have pioneering transcription activity by being able to bind condensed chromatin during cell differentiation processes. The defining feature of FOX proteins is the forkhead box, a sequence of 80 to 100 amino acids forming a motif that binds to DNA. This forkhead motif is also known as the winged helix, due to the butterfly-like appearance of the loops in the protein structure of the domain. Forkhead proteins are a subgroup of the helix-turn-helix class of proteins. Biological roles Many genes encoding FOX proteins have been identified. For example, the FOXF2 gene encodes forkhead box F2, one of many human homologues of the Drosophila melanogaster transcription factor forkhead. FOXF2 is expressed in the lung and placenta. Some FOX genes are downstream targets of the hedgehog signaling pathway, which plays a role in the development of basal cell carcinomas. Members of class O (FOXO- proteins) regulate metabolism, cellular proliferation, stress tolerance and possibly lifespan. The activity of FoxO is controlled by post-translational modifications, including phosphorylation, acetylation and ubiquitination. Discovery The founding member and namesake of the FOX family is the fork head transcription factor in Drosophila, discovered by German biologists Detlef Weigel and Herbert Jäckle. Since then a large number of family members have been discovered, especially in vertebrates. Originally, they were given vastly different names (such as HFH, FREAC, and fkh), but in 2000 a unified nomenclature was introduced that grouped the FOX proteins into subclasses (FOXA-FOXS) based on sequence conservation. Genes FOXA1, FOXA2, FOXA3 (See also Hepatocyte nuclear factors.) FOXB1, FOXB2 FOXC1 (associated with glaucoma), FOXC2 (varicose veins) FOXD1, FOXD2, FOXD3 (vitiligo), FOXD4, FOXD4L1, FOXD4L3, FOXD4L4, FOXD4L5, FOXD4L6 FOXE1 (thyroid), FOXE3 (lens) FOXF1 (lung), FOXF2 FOXG1 (brain) FOXH1 (widely expressed) FOXI1 (ear), FOXI2, FOXI3 FOXJ1 (cilia), FOXJ2 (erythroid), FOXJ3 FOXK1, FOXK2 (HIV, IL-2, adrenal) FOXL1 (ovary), FOXL2 FOXM1 (cell cycle, erythroid, cancer) FOXN1 (hair, thymus), FOXN2, FOXN3 (cell cycle checkpoints; widely expressed), FOXN4 FOXO1 (widely expressed: muscle, liver, pancreas), FOXO3 (apoptosis, erythroid, longevity), FOXO4 (widely expressed), FOXO6 (liver, skeletal muscle, brain) FOXP1 (pluripotency then brain, heart and lung), FOXP2 (widely expressed? brain; language), FOXP3 (T cells), FOXP4 – may be ancestrally responsible for motor learning, based on insect studies (where there's only one FoxP) FOXQ1 FOXR1, FOXR2 FOXS1 Cancer A member of the FOX family, FOXD2, has been detected progressively overexpressed in human-papillomavirus-positive neoplastic keratinocytes derived from uterine cervical preneoplastic lesions at different levels of malignancy. For this reason, this gene is likely to be associated with tumorigenesis and may be a potential prognostic marker for uterine cervical preneoplastic lesions progression. References External links Aging-related proteins
FOX proteins
Biology
859
2,556,580
https://en.wikipedia.org/wiki/Paul%20Bernays
Paul Isaac Bernays (17 October 1888 – 18 September 1977) was a Swiss mathematician who made significant contributions to mathematical logic, axiomatic set theory, and the philosophy of mathematics. He was an assistant and close collaborator of David Hilbert. Biography Bernays was born into a distinguished German-Jewish family of scholars and businessmen. His great-grandfather, Isaac ben Jacob Bernays, served as chief rabbi of Hamburg from 1821 to 1849. Bernays spent his childhood in Berlin, and attended the Köllnische Gymnasium, 1895–1907. At the University of Berlin, he studied mathematics under Issai Schur, Edmund Landau, Ferdinand Georg Frobenius, and Friedrich Schottky; philosophy under Alois Riehl, Carl Stumpf and Ernst Cassirer; and physics under Max Planck. At the University of Göttingen, he studied mathematics under David Hilbert, Edmund Landau, Hermann Weyl, and Felix Klein; physics under Voigt and Max Born; and philosophy under Leonard Nelson. In 1912, the University of Berlin awarded him a Ph.D. in mathematics for a thesis, supervised by Landau, on the analytic number theory of binary quadratic forms. That same year, the University of Zurich awarded him habilitation for a thesis on complex analysis and Picard's theorem. The examiner was Ernst Zermelo. Bernays was Privatdozent at the University of Zurich, 1912–1917, where he came to know George Pólya. His collected communications with Kurt Gödel span many decades. Starting in 1917, David Hilbert employed Bernays to assist him with his investigations of the foundation of arithmetic. Bernays also lectured on other areas of mathematics at the University of Göttingen. In 1918, that university awarded him a second habilitation for a thesis on the axiomatics of the propositional calculus of Principia Mathematica. In 1922, Göttingen appointed Bernays extraordinary professor without tenure. His most successful student there was Gerhard Gentzen. After Nazi Germany enacted the Law for the Restoration of the Professional Civil Service in 1933, the university fired Bernays because of his Jewish ancestry. After working privately for Hilbert for six months, Bernays and his family moved to Switzerland, whose nationality he had inherited from his father, and where the ETH Zurich employed him on occasion. He also visited the University of Pennsylvania and was a visiting scholar at the Institute for Advanced Study in 1935–36 and again in 1959–60. Mathematical work His habilitation thesis was written under the supervision of Hilbert himself, on the topic of the axiomatisation of propositional logic in Whitehead and Russell's Principia Mathematica. It contains the first known proof of semantic completeness of propositional logic, which was reproved independently also by Emil Post later on. Bernays's collaboration with Hilbert culminated in the two volume work, Grundlagen der Mathematik (English: Foundations of Mathematics) published in 1934 and 1939, which is discussed in Sieg and Ravaglia (2005). A proof in this work that a sufficiently strong consistent theory cannot contain its own reference functor is known as the Hilbert–Bernays paradox. In seven papers, published between 1937 and 1954 in the Journal of Symbolic Logic (republished in Müller 1976), Bernays set out an axiomatic set theory whose starting point was a related theory John von Neumann had set out in the 1920s. Von Neumann's theory took the notions of function and argument as primitive. Bernays recast von Neumann's theory so that classes and sets were primitive. 
Bernays's theory, with modifications by Kurt Gödel, is known as von Neumann–Bernays–Gödel set theory. Publications Notes References . Kneebone, Geoffrey, 1963. Mathematical Logic and the Foundation of Mathematics. Van Nostrand. Dover reprint, 2001. A gentle introduction to some of the ideas in the Grundlagen der Mathematik. External links Hilbert Bernays Project Paul Bernays: A Short Biography (1976) 1888 births 1977 deaths 20th-century Swiss philosophers Institute for Advanced Study visiting scholars Jewish philosophers Jewish scientists Mathematical logicians Philosophers of mathematics Set theorists Swiss Ashkenazi Jews Swiss mathematicians Swiss philosophers Academic staff of ETH Zurich
Paul Bernays
Mathematics
861
70,967,383
https://en.wikipedia.org/wiki/Mimetic%20interpolation
In mathematics, mimetic interpolation is a method for interpolating differential forms. In contrast to other interpolation methods, which estimate a field at a location given its values on neighboring points, mimetic interpolation estimates the field's $k$-form given the field's projection on neighboring grid elements. The grid elements can be grid points as well as cell edges or faces, depending on $k$. Mimetic interpolation is particularly relevant in the context of vector and pseudo-vector fields, as the method conserves line integrals and fluxes, respectively. Interpolation of integrated forms Let $\omega^{(k)}$ be a differential $k$-form; then mimetic interpolation is the linear combination $\bar\omega^{(k)} = \sum_i c_i \, \phi_i^{(k)}$, where $\bar\omega^{(k)}$ is the interpolation of $\omega^{(k)}$, and the coefficients $c_i = \int_{S_i} \omega^{(k)}$ represent the strengths of the field on grid element $S_i$. Depending on $k$, $S_i$ can be a node ($k = 0$), a cell edge ($k = 1$), a cell face ($k = 2$) or a cell volume ($k = 3$). In the above, the $\phi_i^{(k)}$ are the interpolating $k$-forms, which are centered on $S_i$ and decay away from $S_i$ in a way similar to the tent functions. Examples of $\phi_i^{(k)}$ are the Whitney forms for simplicial meshes in $n$ dimensions. An important advantage of mimetic interpolation over other interpolation methods is that the field strengths $c_i$ are scalars and thus coordinate system invariant. Interpolating forms In many cases, it is desirable for the interpolating forms to pick the field's strength on particular grid elements without interference from the others. This allows one to assign field values to specific grid elements, which can then be interpolated in-between. A common case is linear interpolation, for which the interpolating functions ($0$-forms) are zero on all nodes except on one, where the interpolating function is one. A similar construct can be applied to mimetic interpolation, $\int_{S_j} \phi_i^{(k)} = \delta_{ij}$. That is, the integral of $\phi_i^{(k)}$ is zero on all cell elements, except on $S_i$, where the integral returns one. For $k = 0$ this amounts to $\phi_i^{(0)}(x_j) = \delta_{ij}$, where $x_j$ is a grid point. For $k = 1$ the integral is over edges and hence the integral is zero except on edge $S_i$. For $k = 2$ the integral is over faces and for $k = 3$ over cell volumes. Conservation properties Mimetic interpolation respects the properties of differential forms. In particular, Stokes' theorem is satisfied, $\int_{\mathcal M} d\,\bar\omega^{(k)} = \int_{\partial\mathcal M} \bar\omega^{(k)}$, with $\bar\omega^{(k)}$ denoting the interpolation of $\omega^{(k)}$. Here, $d$ is the exterior derivative, $\mathcal M$ is any manifold of dimensionality $k + 1$ and $\partial\mathcal M$ is the boundary of $\mathcal M$. This confers to mimetic interpolation conservation properties, which are not generally shared by other interpolation methods. Commutativity between the interpolation operator and the exterior derivative To be mimetic, the interpolation must satisfy $d\, I_k\, \omega^{(k)} = I_{k+1}\, d\, \omega^{(k)}$, where $I_k$ is the interpolation operator of a $k$-form, i.e. $\bar\omega^{(k)} = I_k\, \omega^{(k)}$. In other words, the interpolation operators and the exterior derivatives commute. Note that different interpolation methods are required for each type of form ($k = 0, 1, 2, 3$): $I_0, I_1, I_2, I_3$. The above equation is all that is needed to satisfy Stokes' theorem for the interpolated form. Other calculus properties derive from the commutativity between interpolation and $d$. For instance, $\int_{\mathcal M} d\, d\, \bar\omega^{(k)} = \int_{\partial\mathcal M} d\, \bar\omega^{(k)} = \int_{\partial\partial\mathcal M} \bar\omega^{(k)} = 0$. The last step gives zero since $\bar\omega^{(k)}$ is integrated over the boundary of a boundary, which is empty. Projection The interpolated $\bar\omega^{(k)}$ is often projected onto a target, $k$-dimensional, oriented manifold $\mathcal T$, $\int_{\mathcal T} \bar\omega^{(k)}$. For $k = 0$ the target is a point, for $k = 1$ it is a line, for $k = 2$ an area, etc. Applications Many physical fields can be represented as $k$-forms. When discretizing fields in numerical modeling, each type of $k$-form often acquires its own staggering in accordance with numerical stability requirements, e.g. the need to prevent the checkerboard instability. 
This led to the development of the exterior finite element and discrete exterior calculus methods, both of which rely on a field discretization that is compatible with the field type. The table below lists some examples of physical fields, their type, their corresponding form and interpolation method, as well as software that can be leveraged to interpolate, remap or regrid the field: Example Consider quadrilateral cells in two dimensions with their nodes indexed $0, 1, 2, 3$ in the counterclockwise direction. Further, let $\xi$ and $\eta$ be the parametric coordinates of each cell ($0 \le \xi, \eta \le 1$). Then $\phi_0^{(0)} = (1-\xi)(1-\eta)$, $\phi_1^{(0)} = \xi(1-\eta)$, $\phi_2^{(0)} = \xi\eta$ and $\phi_3^{(0)} = (1-\xi)\eta$ are the bilinear interpolating forms of $\omega^{(0)}$ in the unit square ($k = 0$). The corresponding edge interpolating forms are $\phi_0^{(1)} = (1-\eta)\,d\xi$, $\phi_1^{(1)} = \xi\,d\eta$, $\phi_2^{(1)} = \eta\,d\xi$ and $\phi_3^{(1)} = (1-\xi)\,d\eta$, where we assumed the edges to be indexed in counterclockwise direction and with the edges pointing to the east and north. At lowest order, there is only one interpolating form for $k = 2$, $\phi^{(2)} = d\xi \wedge d\eta$, where $\wedge$ is the wedge product. We can verify that the above interpolating forms satisfy the mimetic conditions $\int_{S_j} \phi_i^{(k)} = \delta_{ij}$ and $d\, I_k = I_{k+1}\, d$. Take for instance $d\,\bar\omega^{(0)} = (c_1 - c_0)\,\phi_0^{(1)} + (c_2 - c_1)\,\phi_1^{(1)} + (c_2 - c_3)\,\phi_2^{(1)} + (c_3 - c_0)\,\phi_3^{(1)}$, where $c_0$, $c_1$, $c_2$ and $c_3$ are the field values evaluated at the corners of the quadrilateral in the unit square space. Likewise, we have $\bar\omega^{(1)} = \sum_i c_i\,\phi_i^{(1)}$ with $c_i = \int_{E_i} \omega^{(1)}$, $i = 0, \ldots, 3$, being the 1-form projected onto edge $E_i$. Note that this projection is also known as the pullback. If $\mathbf r(t)$ is the map that parametrizes edge $E_i$, $0 \le t \le 1$, then $\int_{E_i} \omega^{(1)} = \int_0^1 \mathbf r^*\big(\omega^{(1)}\big)$, where the integration is performed in $t$ space. Consider for instance edge $E_0$, then $\mathbf r(t) = (1-t)\,\mathbf r_0 + t\,\mathbf r_1$, with $\mathbf r_0$ and $\mathbf r_1$ denoting the start and end points. For a general 1-form $\omega^{(1)} = a\,dx + b\,dy$, one gets $\int_{E_0} \omega^{(1)} = \int_0^1 \big[a(\mathbf r(t))\,(x_1 - x_0) + b(\mathbf r(t))\,(y_1 - y_0)\big]\,dt$. References Interpolation Differential forms
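The quadrilateral example above can be checked numerically. The sketch below is an illustrative addition: the sample scalar field, the function names and the verification loop are arbitrary choices, not taken from the article. It evaluates the bilinear nodal forms and the edge forms on the unit square and verifies that the exterior derivative of the nodal interpolant agrees with interpolating the edge integrals of the field's differential, i.e. that interpolation and the exterior derivative commute.

```python
import numpy as np

def nodal_forms(xi, eta):
    # Bilinear 0-forms phi_i^(0), nodes 0..3 counterclockwise from (0, 0).
    return np.array([(1 - xi) * (1 - eta), xi * (1 - eta), xi * eta, (1 - xi) * eta])

def edge_forms(xi, eta):
    # Edge 1-forms phi_i^(1); each row holds the (d-xi, d-eta) components of one form.
    return np.array([[1 - eta, 0.0],   # bottom edge (node 0 -> 1, pointing east)
                     [0.0, xi],        # right edge  (node 1 -> 2, pointing north)
                     [eta, 0.0],       # top edge    (node 3 -> 2, pointing east)
                     [0.0, 1 - xi]])   # left edge   (node 0 -> 3, pointing north)

f = lambda x, y: 2 * x + 3 * y + 5 * x * y              # sample scalar field (0-form)
corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
c = np.array([f(x, y) for x, y in corners])              # nodal strengths c_i

# Edge strengths of df: by the fundamental theorem of calculus these are the
# differences of f between the end and start node of each oriented edge.
e = np.array([c[1] - c[0], c[2] - c[1], c[2] - c[3], c[3] - c[0]])

rng = np.random.default_rng(0)
for xi, eta in rng.random((5, 2)):
    # d(I0 f): exterior derivative (gradient) of the bilinear nodal interpolant
    d_of_interp = np.array([(c[1] - c[0]) * (1 - eta) + (c[2] - c[3]) * eta,
                            (c[3] - c[0]) * (1 - xi) + (c[2] - c[1]) * xi])
    # I1(df): edge strengths combined with the edge interpolating forms
    interp_of_d = e @ edge_forms(xi, eta)
    assert np.allclose(d_of_interp, interp_of_d)         # interpolation and d commute

print("interpolated value at cell centre:", c @ nodal_forms(0.5, 0.5))
print("d(I0 f) == I1(df) at all sampled points")
```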
Mimetic interpolation
Engineering
1,039
40,499,878
https://en.wikipedia.org/wiki/Dendrobine
Dendrobine is an alkaloid found in Dendrobium nobile at an average of 0.5% by weight. It is a colorless solid at room temperature. It is related to the picrotoxin family of natural products. When a fatal dose is given, death is usually caused by convulsions. It possesses a molecular structure that attracted interest in its total synthesis by organic chemists. Synthesis There have been 3 successful enantioselective syntheses of dendrobine reported, with yields ranging from 0.2–4.0%. The structure of dendrobine is intriguing due to its tetracyclic ring system with seven contiguous stereocenters. Most recently, a full synthesis of (-)-dendrobine was carried out by Kreis et al. with a yield of 4.0%. The novelty of Kreis' synthesis is the key reaction cascade with an amine functioning as the linchpin that initiates the sequence of reactions while embedding itself in the target structure. This reaction cascade occurs stereoselectively only when it is carried out without isolation of intermediates. The cascade successfully installs a key quaternary center while simultaneously designating a stereosymmetric carbon center. Effects While dendrobine's effects on humans have not been studied extensively, studies of its pharmacological effects on various small animals were conducted in 1935 by Chen and Chen. It was concluded that dendrobine exhibited a weak analgesic effect when administered to mice (5–15 mg/kg), and an antipyretic effect when administered to rabbits (8.5 mg/kg). Hypotensive effects were reported in experiments with frogs, cats, and a dog. Toxicity The minimum lethal doses by intravenous injection are 20 mg/kg for mice and rats, 22 mg/kg in guinea pigs, and 17 mg/kg in rabbits. Dendrobine is a strongly selective competitive antagonist of β-alanine, taurine, and glycine. While structurally related to picrotoxinin, dendrobine is not an antagonist of GABA. References Antipyretics Glycine receptor antagonists Alkaloids Plant toxins Isopropyl compounds Lactones Convulsants Neurotoxins
Dendrobine
Chemistry
474
41,597,819
https://en.wikipedia.org/wiki/SpaceEngine
SpaceEngine is an interactive 3D planetarium and astronomy software initially developed by Russian astronomer and programmer Vladimir Romanyuk. Development is now continued by Cosmographic Software, an American company founded by Romanyuk and the SpaceEngine Team in February 2022, based in Connecticut. SpaceEngine creates a 1:1 scale three-dimensional planetarium representing the entire observable universe, combining real astronomical data with scientifically accurate procedural generation algorithms. Users can travel through space in any direction or at any speed and can move forwards or backwards in time. SpaceEngine is currently in beta status. Up to version 0.9.8.0E, released in August 2017, it was available as freeware for Microsoft Windows. Version 0.990 beta, the first paid edition, was released on Steam in June 2019. The program fully supports VR headsets. Properties of objects, such as temperature, mass, radius, and spectrum, are presented on the HUD and in an accessible information window. Users can observe a wide range of celestial objects, from small asteroids and moons to large galaxy clusters, similar to other simulators like Celestia, OpenSpace, Gaia Sky, and Nightshade NG. The default version of SpaceEngine includes over 130,000 real objects, featuring stars from the Hipparcos catalog, galaxies from the NGC and IC catalogs, many well-known nebulae, and all known exoplanets and their stars. Functionality The proclaimed goal of SpaceEngine is scientific realism, and to reproduce every type of known astronomical phenomenon. It uses star catalogs along with procedural generation to create a cubical universe over 10 billion parsecs (32.6 billion light-years) on each side, roughly centered on the barycenter of the Solar System. Within the software, users can use search tools to filter through astronomical objects based on certain characteristics. In the case of planets and moons, specific environmental types, surface temperatures, and pressures can be used to filter through the vast amount of different procedurally generated worlds. SpaceEngine also has a built-in flight simulator (currently in Alpha) which allows for users to spawn in a selection of fictional spacecraft which can be flown in an accurate model of orbital mechanics and also an atmospheric flight model when entering the atmospheres of the various planets and moons. The spacecraft range from small SSTO spaceplanes, to large interstellar spacecraft which are all designed with realism in mind, featuring radiators, fusion rockets, and micrometeorite shields. Interstellar spacecraft simulate the hypothetical Alcubierre drive, including the relativistic effects that would occur in reality. Catalog objects The real objects that SpaceEngine includes are the Hipparcos catalog for stars, the NGC and IC catalogs for galaxies, all known exoplanets, and prominent star clusters, nebulae, and Solar System objects including some comets and asteroids. Procedurally generated objects Objects that are procedurally generated in Space Engine are aimed to be as realistic as possible. The objects include galaxies, star clusters (open and globular), nebulae and individual stars, containing terrestrial planets and gas giants and moons. These objects, like non-procedurally generated objects, can be saved manually by the user and searched for. Wiki and locations The software has its own built-in "wiki" database which gives detailed information on all celestial objects and enables a player to create custom names and descriptions for them. 
It also has a locations database where a player can save any position and time in the simulation and load it again in the future. Extensions SpaceEngine has a fairly large modding community dedicated to expanding on the program's current catalogues, improving things like texture quality, and even improving the program's terrain and cloud generation as a whole (See Rodrigo's Mod). Some SE add-on creators create fictional star systems for their worldbuilding project, others do 3D modelling for spacecraft add-ons, and some do completely different things. These extensions are all available for download from SpaceEngine's Web Forums. Limitations Although objects that form part of a planetary system move, and stars rotate about their axes and orbit each other in multiple star systems, stellar proper motion or precession is not simulated, and galaxies are at fixed locations and do not rotate. Most real-world spacecraft such as Voyager 2 are not provided with SpaceEngine. The few spacecraft that are included do not use real trajectories or accurate orientations. Interstellar light absorption is not modeled in SpaceEngine. Intrinsic variable stars are not supported by SpaceEngine. In fact, most, if not all, simulators do not support intrinsic variable stars. Gravity is not simulated in SpaceEngine outside the orbits of moons, planets and stars in a system, with the exception of the controllable spacecraft. Development Development of SpaceEngine began in 2005, with its first public release in June 2010. The software is written in C++. The engine uses OpenGL as its graphical API and uses shaders written in GLSL. As of the release of version 0.990, the shaders have been encrypted to protect against plagiarism. Plans have been made to start opening them in a way that allows the community to develop special content for the game, with ship engine effects being made available to users who have purchased the game. On May 27, 2019, the Steam store page for SpaceEngine was made public in preparation for the release of the first paid version, 0.990 beta. SpaceEngine is currently only available for Windows PCs; however, there are plans for the software to support macOS and Linux in the future. Even though SpaceEngine only natively supports Windows, the Steam version can be run on Linux via Steam's Proton compatibility tool. See also Celestia Space flight simulation game List of space flight simulation games Planetarium software List of observatory software List of games with Oculus Rift support Gravity (software) Google Earth References External links Russian language website SpaceEngine Forum 2010 video games Astronomy software Science software for Windows Steam Greenlight games Video game engines Articles containing video clips Video games developed in Russia Windows games Windows-only games
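The reproducibility of procedurally generated objects described above rests on deterministic, position-seeded generation. The sketch below is a generic illustration of that idea only; it is not SpaceEngine's actual algorithm, and the function names, property ranges and probabilities are invented for the example.

```python
import hashlib
import random

def star_properties(cell_x, cell_y, cell_z, index):
    """Generic position-seeded procedural generation (illustrative only)."""
    # Derive a deterministic seed from the object's grid cell and index, so that
    # revisiting or searching the same cell always reproduces the same star.
    key = f"{cell_x},{cell_y},{cell_z},{index}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "mass_solar": round(rng.uniform(0.08, 20.0), 2),   # invented range
        "temperature_K": rng.randint(2500, 30000),         # invented range
        "has_planets": rng.random() < 0.5,                 # invented probability
    }

# The same coordinates always yield the same star, which is what allows
# procedurally generated objects to be saved, searched for and found again.
assert star_properties(12, -3, 7, 0) == star_properties(12, -3, 7, 0)
print(star_properties(12, -3, 7, 0))
```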
SpaceEngine
Astronomy
1,271
27,813
https://en.wikipedia.org/wiki/Systematics
Systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: phylogenetic trees, phylogenies). Phylogenies have two components: branching order (showing group relationships, graphically represented in cladograms) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms (biogeography). Systematics, in other words, is used to understand the evolutionary history of life on Earth. The word systematics is derived from the Latin word of Ancient Greek origin systema, which means systematic arrangement of organisms. Carl Linnaeus used 'Systema Naturae' as the title of his book. Branches and applications In the study of biological systematics, researchers use the different branches to further understand the relationships between differing organisms. These branches are used to determine the applications and uses for modern day systematics. Biological systematics classifies species by using three specific branches. Numerical systematics, or biometry, uses biological statistics to identify and classify animals. Biochemical systematics classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus, organelles, and cytoplasm. Experimental systematics identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization all are considered evolutionary units. With the specific branches, researchers are able to determine the applications and uses for modern-day systematics. These applications include: Studying the diversity of organisms and the differentiation between extinct and living creatures. Biologists study the well-understood relationships by making many different diagrams and "trees" (cladograms, phylogenetic trees, phylogenies, etc.). Including the scientific names of organisms, species descriptions and overviews, taxonomic orders, and classifications of evolutionary and organism histories. Explaining the biodiversity of the planet and its organisms. The systematic study is that of conservation. Manipulating and controlling the natural world. This includes the practice of 'biological control', the intentional introduction of natural predators and disease. Definition and relation with taxonomy John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics". In 1970 Michener et al. defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relationship to one another as follows: Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. 
Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above. The term "taxonomy" was coined by Augustin Pyramus de Candolle while the term "systematic" was coined by Carl Linnaeus the father of taxonomy. Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other. For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms. According to this work, the terms originated in 1790, c. 1828, and in 1888 respectively. Some claim systematics alone deals specifically with relationships through time, and that it can be synonymous with phylogenetics, broadly dealing with the inferred hierarchy of organisms. This means it would be a subset of taxonomy as it is sometimes regarded, but the inverse is claimed by others. Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e. nomenclature) of organisms, while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms. Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist, a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them. Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late-20th century onwards, it was superseded by cladistics, which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. systematists generally make extensive use of molecular biology and of computer programs to study organisms. Taxonomic characters Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. Kinds of taxonomic characters include: Morphological characters General external morphology Special structures (e.g. 
genitalia) Internal morphology (anatomy) Embryology Karyology and other cytological factors Physiological characters Metabolic factors Body secretions Genic sterility factors Molecular characters Immunological distance Electrophoretic differences Amino acid sequences of proteins DNA hybridization DNA and RNA sequences Restriction endonuclease analyses Other molecular differences Behavioral characters Courtship and other ethological isolating mechanisms Other behavior patterns Ecological characters Habit and habitats Food Seasonal variations Parasites and hosts Geographic characters General biogeographic distribution patterns Sympatric-allopatric relationship of populations See also Cladistics – a methodology in systematics Evolutionary systematics – a school of systematics Global biodiversity Phenetics – a methodology in systematics that does not infer phylogeny Phylogeny – the historical relationships between lineages of organism 16S ribosomal RNA – an intensively studied nucleic acid that has been useful in phylogenetics Phylogenetic comparative methods – use of evolutionary trees in other studies, such as biodiversity, comparative biology. adaptation, or evolutionary mechanisms References Notes Further reading Brower, Andrew V. Z. and Randall T. Schuh. 2021. Biological Systematics: Principles and Applications, 3rd edn. Simpson, Michael G. 2005. Plant Systematics. Wiley, Edward O. and Bruce S. Lieberman. 2011. "Phylogenetics: Theory and Practice of Phylogenetic Systematics, 2nd edn." External links Society of Australian Systematic Biologists Society of Systematic Biologists The Willi Hennig Society Evolutionary biology Biological classification
Systematics
Biology
1,559
53,537,785
https://en.wikipedia.org/wiki/Edmund%20Storms
Edmund Storms is a nuclear chemist known for his work in cold fusion. Career He is a nuclear chemist who worked at Los Alamos National Lab for more than 30 years. He established Kiva Labs in Santa Fe, where he continues to explore evidence for his model of cold fusion. Storms is also a Science Advisor to Cold Fusion Now. Storms's work is listed in the Atomic Energy of Canada Ltd. 2013 Report on cold fusion, which identifies 25 theories on the mechanisms behind cold fusion, but notes that "What was apparent from this review is that there has been a plethora of investigations and theories for CF/LENR/CMNS over the last 20 years, but a relative shortage of credible, peer-reviewed information sources." Publications Storms has published more than a hundred journal articles and several books. He has spoken on his work at conferences of the ACS, APS, and ICCF. Selected publications Storms, E. (2007). Science Of Low Energy Nuclear Reaction, The: A Comprehensive Compilation Of Evidence And Explanations About Cold Fusion. Singapore: World Scientific Publishing Company. References 21st-century American chemists Living people 1948 births People from Camp Hill, Pennsylvania Nuclear chemists 20th-century American chemists
Edmund Storms
Chemistry
246
40,442,433
https://en.wikipedia.org/wiki/Acetylfentanyl
Acetylfentanyl (acetyl fentanyl) is an opioid analgesic drug that is an analog of fentanyl. Studies have estimated acetylfentanyl to be 15 times more potent than morphine, which would mean that despite being somewhat weaker than fentanyl, it is nevertheless still several times stronger than pure heroin. It has never been licensed for medical use and instead has only been sold on the illicit drug market. Acetylfentanyl was discovered at the same time as fentanyl itself and had only rarely been encountered on the illicit market in the late 1980s. However, in 2013, Canadian police seized 3 kilograms of acetylfentanyl. As a μ-opioid receptor agonist, acetylfentanyl may serve as a direct substitute for oxycodone, heroin or other opioids. Common side effects of fentanyl analogs are similar to those of fentanyl itself, which include itching, nausea, and potentially fatal respiratory depression. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear. Deaths Europe Acetylfentanyl has been analytically confirmed in 32 fatalities in four European member states between 2013 and August 2015, Germany (2), Poland (1), Sweden (27), and the United Kingdom (2). Russia Twelve deaths have been associated with acetylfentanyl in Russia since 2012. United States The Centers for Disease Control and Prevention (CDC) issued a health alert to report that between March 2013 and May 2013, 14 overdose deaths related to injected acetylfentanyl had occurred among intravenous drug users (ages between 19 and 57 years) in Rhode Island. After confirming five overdoses in one county, including a fatality, Pennsylvania asked coroners and medical examiners across the state to screen for acetylfentanyl. As a result of this investigation, Pennsylvania confirmed at least one acetylfentanyl overdose death and attributed at least 50 fatalities to either fentanyl or acetylfentanyl during the first half of 2013. In July 2015, the DEA informed about 52 confirmed fatalities involving acetylfentanyl in the United States between 2013 and 2015. Japan One fatal poisoning caused by intravenous injection of a "bath salt" product containing acetylfentanyl mixed with 4'-Methoxy-α-pyrrolidinopentiophenone (a substituted cathinone) has been reported in 2016. Legal status Canada As an analog of fentanyl, acetylfentanyl is a Schedule I controlled drug. China As of October 2015, acetylfentanyl is a controlled substance in China. United States Acetylfentanyl is a Schedule I controlled substance as of May 2015. Switzerland , acetylfentanyl is a controlled substance in Switzerland. United Kingdom Acetylfentanyl was made a class A drug as an analogue of fentanyl in 1986. Overdose Acetylfentanyl overdosage has been reported to closely resemble heroin overdosage clinically. Additionally, while naloxone (Narcan) is effective in treating acetylfentanyl overdose, larger than normal doses of the antidote may be required. Detection in body fluids Acetylfentanyl may be quantitated in blood, plasma, or urine by liquid chromatography-mass spectrometry to confirm a diagnosis of poisoning in hospitalized patients or to provide evidence in a medicolegal death investigation. Postmortem peripheral blood acetylfentanyl concentrations have been in a range of 89–945 μg/L in victims of acute overdosage. 
See also 3-Methylbutyrfentanyl 3-Methylfentanyl 4-Fluorofentanyl α-Methylfentanyl Butyrfentanyl Furanylfentanyl Homofentanyl List of fentanyl analogues References Further reading General anesthetics Synthetic opioids Piperidines Anilides Acetamides Mu-opioid receptor agonists Janssen Pharmaceutica Belgian inventions Euphoriants Fentanyl
Acetylfentanyl
Chemistry
895
30,522,000
https://en.wikipedia.org/wiki/Beaver%20Hills%20%28Alberta%29
The Beaver Hills (), also known as the Beaver Hills Moraine and the Cooking Lake Moraine, are a rolling upland region in Central Alberta, just to the east of Edmonton, the provincial capital. It consists of of "knob and kettle" terrain, containing many glacial moraines and depressions filled with small lakes. The landform lies partly within five different counties, Strathcona, Leduc, Beaver, Lamont and Camrose. The area is relatively undeveloped compared to the surrounding region, and is protected in part by Elk Island National Park, the Cooking Lake–Blackfoot Provincial Recreation Area, the Ministik Lake Game Bird Sanctuary, Miquelon Lake Provincial Park and a number of smaller provincial natural areas. Since 2016 Beaver Hills has been a UNESCO-designated biosphere reserve. Natural history The "hills" are very low and not very prominent, as the region is actually just a slight rise above the surrounding region which also happens to be rough and rolling due to a different history during the end of the last ice age. Being at a slightly higher elevation, the bedrock in what would become the hills was only briefly covered by glacial Lake Edmonton, which deposited a thick layer of silt on the rest of the region (the basis of the modern agricultural soils now found in the areas around the hills), but left mostly gravel and boulder-sized debris on the hills, along with much water in the depressions left behind by ice and stone during the preceding glacial era. The vegetation is typically of part of the dry mixedwood boreal forest natural subregion, a transitional zone on the south edge of the boreal forest, but is surrounded by aspen parkland. This island of boreal forest in the south means that both boreal animal species (moose, black bear, Canada lynx) and grassland animal species (sharp-tailed grouse, mule deer) live in the region. Nearby landscapes include Beaverhill Lake just to the east, and the North Saskatchewan River to the north. Human history Indigenous peoples and fur trade history As a well-wooded and watered area near to more open grasslands, the Beaver Hills were an important camping place for nomadic peoples making a seasonal migration between the plains and the hills. It was a place that Indigenous people "could replenish and recoup after spending extended periods on the plains, a place where they could hunt, fish, and gather other needed resources". Because the hills were not ploughed under, unlike the rest of region, much archaeological evidence remains here, including 227 Indigenous sites recorded by Parks Canada in Elk Island Park alone. The Sarcee are the first ethnic group known to have inhabited the region in the period after European contact (and thus the beginning of a written historical record). Sometime before 1800 Cree people migrating from the east displaced the Sarcee from the hills onto the plains. In Cree the region is called , which literally means "beaver hills" and is the origin of the region's later names in French, and then English. The Cree pursued an economy based around trapping and trading with Euro-Canadian fur companies as well as the more traditional forms of hunting gathering, and fishing. The Cree also adopted buffalo hunting techniques from plains peoples to the south, including the use of buffalo pounds. The beaver and other game species in the area eventually became trapped out, and they largely abandoned the area as a permanent home, though continued to travel through the area. 
Two major Indigenous and fur trade trails border the hills, the Victoria Trail to the north and the Battle River Trail to the south. The Beaver Hills are mentioned in Euro-Canadian records as early as Peter Fidler's sketches of 1793. David Thompson's map of 1814 mentions the hills prominently as place of refuge for both the Sarcee and Cree. They are also reported on by the Palliser Expedition of the 1850s and by Joseph B. Tyrell of the Geological Survey of Canada in the 1880s. Initial reserve development This is one of oldest protected areas in Canada, having originally been a forest reserve set aside by the federal Department of the Interior in 1892, during the homesteading era. It was formalized as the Cooking Lake Forest Reserve in 1899, the first such reserve in Canada. A part of the reserve was given further protection in 1906 as Elk Park, later to become Elk Island National Park. In 1930, Crown lands in Alberta passed from the federal government to the provincial government, Elk Park became formalized as a national park while the rest of the Cooking Lake Forest Reserve became a provincial responsibility. Later development In 2002 the Beaver Hills Initiative was created to coordinate land-use planning in the municipalities in the area surrounding the protected parks. This resulted in a scheme of tradable development credits. In 2006 the area became recognized as a dark sky preserve by the Royal Astronomical Society of Canada. In 2016 it was named a UNESCO Biosphere Reserve. See also List of glacial moraines Terminal moraine References Further reading Hills of Alberta Dark-sky preserves in Canada Moraines of Canada Taiga and boreal forests Great Plains Edmonton Metropolitan Region Biosphere reserves of Canada Forests of Alberta Wetlands of Alberta
Beaver Hills (Alberta)
Astronomy
1,033
9,467,708
https://en.wikipedia.org/wiki/Room%20box
A room box is a display box used for three-dimensional miniature scale environments, or scale models. Although the name would suggest room boxes generally only represent typical rooms such as those found in houses or other buildings (bedrooms, kitchens, offices, etc.), room boxes are used for all sorts of environments – exterior views as well as interior ones, realistic ones as well as fantastical ones. While some miniaturists concentrate their efforts specifically on room boxes, many use them to take a break from larger projects, such as dollhouses or miniature villages, to create a smaller environment on a different theme. A room box can be tailored to one’s interests or mirror an important step in life - for example, a bakery or restaurant scene might be created by or for a baker or cook, and a wedding dress storefront might be created for a bride to be or as a reminiscence of one's wedding. Making a room box is often a first step to learning new techniques in miniature making; such projects are popular at miniaturists' events where attendees have only 1–2 days to make and finish a project. Once techniques are perfected in these smaller settings, craftspersons and hobbyists often reapply them to larger projects. Room boxes are a cost- and time-effective way to make miniature settings without attempting larger setups such as a dollhouse or train set. Commercially bought room boxes tend to be made of wood, pressed wood products or plywood, with the top and front window made of removable clear acrylic that lets in light and enables access and viewing from two perspectives. Dimensions usually meet standard dollhouse proportions ("1:12 scale" in dollhouse speak means that 1" in the dollhouse world represents 1' in the real world), but anyone can make a room box from a leftover shoebox, orange crate, etc. and adapt an idea to suit the box's scale. Since any material can be used, whether leftover or new, people of all economic classes express themselves through this craft. One elaborate example of 1:12 scale miniature rooms are the 68 miniature Thorne Rooms, each with a different theme. They were designed by Narcissa Niblack Thorne and furniture for them was created by craftsmen in the 1930s and 1940s. They are now at the Art Institute of Chicago, Phoenix Art Museum. As evidenced in the recent increase in craft book and magazine publishing on different types of miniatures, interest in making room-boxes for miniature settings has steadily grown since the 1990s. Room boxes have even found a place during prime-time television: the winter 2007 season of CSI: Crime Scene Investigation included a clever storyline recurring throughout the season, where a murderer named The Miniature Killer leaves clues for investigators in the form of intricately made 3-D room boxes showing scenes of the crimes she committed, reproduced in scale miniature. See also Model Scale model Dollhouse Model building References Scale modeling Dollhouses
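As a quick illustration of the 1:12 scale arithmetic described above (1 inch on the model standing for 1 foot, i.e. 12 inches, in the real world), the sketch below simply divides a real-world measurement by the scale denominator. The function name and the example dimensions are hypothetical and are not taken from the article.

```python
# Illustrative scale-conversion helper for miniature work: at 1:12
# "dollhouse scale", real-world lengths are divided by 12 to get the
# model length; other boxes may dictate a different denominator.
def to_scale(real_inches: float, scale_denominator: int = 12) -> float:
    """Convert a real-world length in inches to its model length in inches."""
    return real_inches / scale_denominator

# Example: an 8-foot (96 in) real wall becomes an 8-inch wall in a 1:12 room box.
print(to_scale(96))        # 8.0 inches at 1:12
print(to_scale(96, 24))    # 4.0 inches if the box suits 1:24 instead
```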
Room box
Physics
600
7,757,474
https://en.wikipedia.org/wiki/Pyrolytic%20coating
Pyrolytic coating is a thin film coating applied at high temperatures and sprayed onto the glass surface during the float glass process. Advantages Relatively durable coating. Can be tempered after coating application. Can be used in single glazing applications. Applications Pyrolytic coating can be used as a protective or decorative coating on equipment parts, energy-insulator on window glasses, anti-friction agent in moulding applications. See also Pyrolytic chromium carbide coating References External links Welsh Insulating Glass EnduroShield Coating Information Pyrolytic Chromium Carbide Coating Glass coating and surface modification
Pyrolytic coating
Chemistry
129
22,999,259
https://en.wikipedia.org/wiki/Kawamata%E2%80%93Viehweg%20vanishing%20theorem
In algebraic geometry, the Kawamata–Viehweg vanishing theorem is an extension of the Kodaira vanishing theorem, on the vanishing of coherent cohomology groups, to logarithmic pairs, proved independently by Viehweg and Kawamata in 1982. The theorem states that if L is a big nef line bundle (for example, an ample line bundle) on a complex projective manifold with canonical line bundle K, then the coherent cohomology groups Hi(L⊗K) vanish for all positive i. References Theorems in algebraic geometry
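The vanishing statement above can be written compactly in standard notation; the symbol X for the complex projective manifold is introduced here only for readability and is not named in the article text.

```latex
% Kawamata–Viehweg vanishing as described above:
% X a complex projective manifold, K_X its canonical line bundle,
% L a big and nef line bundle on X.
H^{i}\bigl(X,\, L \otimes K_X\bigr) = 0 \qquad \text{for all } i > 0 .
```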
Kawamata–Viehweg vanishing theorem
Mathematics
117
221,530
https://en.wikipedia.org/wiki/De%20Rham%20cohomology
In mathematics, de Rham cohomology (named after Georges de Rham) is a tool belonging both to algebraic topology and to differential topology, capable of expressing basic topological information about smooth manifolds in a form particularly adapted to computation and the concrete representation of cohomology classes. It is a cohomology theory based on the existence of differential forms with prescribed properties. On any smooth manifold, every exact form is closed, but the converse may fail to hold. Roughly speaking, this failure is related to the possible existence of "holes" in the manifold, and the de Rham cohomology groups comprise a set of topological invariants of smooth manifolds that precisely quantify this relationship. Definition The de Rham complex is the cochain complex of differential forms on some smooth manifold , with the exterior derivative as the differential: where is the space of smooth functions on , is the space of -forms, and so forth. Forms that are the image of other forms under the exterior derivative, plus the constant function in , are called exact and forms whose exterior derivative is are called closed (see Closed and exact differential forms); the relationship then says that exact forms are closed. In contrast, closed forms are not necessarily exact. An illustrative case is a circle as a manifold, and the -form corresponding to the derivative of angle from a reference point at its centre, typically written as (described at Closed and exact differential forms). There is no function defined on the whole circle such that is its derivative; the increase of in going once around the circle in the positive direction implies a multivalued function . Removing one point of the circle obviates this, at the same time changing the topology of the manifold. One prominent example when all closed forms are exact is when the underlying space is contractible to a point or, more generally, if it is simply connected (no-holes condition). In this case the exterior derivative restricted to closed forms has a local inverse called a homotopy operator. Since it is also nilpotent, it forms a dual chain complex with the arrows reversed compared to the de Rham complex. This is the situation described in the Poincaré lemma. The idea behind de Rham cohomology is to define equivalence classes of closed forms on a manifold. One classifies two closed forms as cohomologous if they differ by an exact form, that is, if is exact. This classification induces an equivalence relation on the space of closed forms in . One then defines the -th de Rham cohomology group to be the set of equivalence classes, that is, the set of closed forms in modulo the exact forms. Note that, for any manifold composed of disconnected components, each of which is connected, we have that This follows from the fact that any smooth function on with zero derivative everywhere is separately constant on each of the connected components of . de Rham cohomology computed One may often find the general de Rham cohomologies of a manifold using the above fact about the zero cohomology and a Mayer–Vietoris sequence. Another useful fact is that the de Rham cohomology is a homotopy invariant. While the computation is not given, the following are the computed de Rham cohomologies for some common topological objects: The -sphere For the -sphere, , and also when taken together with a product of open intervals, we have the following. Let , and be an open real interval. Then The -torus The -torus is the Cartesian product: . 
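Several of the inline formulas this passage refers to appear to have been lost in extraction. The block below restores, in standard textbook notation, the expressions they most plausibly denoted (the de Rham complex, the cohomology groups, the zeroth cohomology of a disconnected manifold, and the sphere computation); these are conventional statements rather than wording recovered from the original page.

```latex
% De Rham complex of a smooth manifold M, with exterior derivative d:
0 \longrightarrow \Omega^{0}(M) \xrightarrow{\;d\;} \Omega^{1}(M)
  \xrightarrow{\;d\;} \Omega^{2}(M) \xrightarrow{\;d\;} \cdots
% k-th de Rham cohomology group: closed k-forms modulo exact k-forms
H^{k}_{\mathrm{dR}}(M) \;=\;
  \frac{\ker\bigl(d\colon \Omega^{k}(M)\to\Omega^{k+1}(M)\bigr)}
       {\operatorname{im}\bigl(d\colon \Omega^{k-1}(M)\to\Omega^{k}(M)\bigr)}
% For M with m connected components:
H^{0}_{\mathrm{dR}}(M) \cong \mathbb{R}^{m}
% For the n-sphere, n > 0:
H^{k}_{\mathrm{dR}}(S^{n}) \cong
\begin{cases}
  \mathbb{R}, & k = 0 \text{ or } k = n,\\
  0, & \text{otherwise.}
\end{cases}
```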
Similarly, allowing here, we obtain We can also find explicit generators for the de Rham cohomology of the torus directly using differential forms. Given a quotient manifold and a differential form we can say that is -invariant if given any diffeomorphism induced by , we have . In particular, the pullback of any form on is -invariant. Also, the pullback is an injective morphism. In our case of the differential forms are -invariant since . But, notice that for is not an invariant -form. This with injectivity implies that Since the cohomology ring of a torus is generated by , taking the exterior products of these forms gives all of the explicit representatives for the de Rham cohomology of a torus. Punctured Euclidean space Punctured Euclidean space is simply with the origin removed. The Möbius strip We may deduce from the fact that the Möbius strip, , can be deformation retracted to the -sphere (i.e. the real unit circle), that: de Rham theorem Stokes' theorem is an expression of duality between de Rham cohomology and the homology of chains. It says that the pairing of differential forms and chains, via integration, gives a homomorphism from de Rham cohomology to singular cohomology groups de Rham's theorem, proved by Georges de Rham in 1931, states that for a smooth manifold , this map is in fact an isomorphism. More precisely, consider the map defined as follows: for any , let be the element of that acts as follows: The theorem of de Rham asserts that this is an isomorphism between de Rham cohomology and singular cohomology. The exterior product endows the direct sum of these groups with a ring structure. A further result of the theorem is that the two cohomology rings are isomorphic (as graded rings), where the analogous product on singular cohomology is the cup product. Sheaf-theoretic de Rham isomorphism For any smooth manifold M, let be the constant sheaf on M associated to the abelian group ; in other words, is the sheaf of locally constant real-valued functions on M. Then we have a natural isomorphism between the de Rham cohomology and the sheaf cohomology of . (Note that this shows that de Rham cohomology may also be computed in terms of Čech cohomology; indeed, since every smooth manifold is paracompact Hausdorff we have that sheaf cohomology is isomorphic to the Čech cohomology for any good cover of M.) Proof The standard proof proceeds by showing that the de Rham complex, when viewed as a complex of sheaves, is an acyclic resolution of . In more detail, let m be the dimension of M and let denote the sheaf of germs of -forms on M (with the sheaf of functions on M). By the Poincaré lemma, the following sequence of sheaves is exact (in the abelian category of sheaves): This long exact sequence now breaks up into short exact sequences of sheaves where by exactness we have isomorphisms for all k. Each of these induces a long exact sequence in cohomology. Since the sheaf of functions on M admits partitions of unity, any -module is a fine sheaf; in particular, the sheaves are all fine. Therefore, the sheaf cohomology groups vanish for since all fine sheaves on paracompact spaces are acyclic. So the long exact cohomology sequences themselves ultimately separate into a chain of isomorphisms. At one end of the chain is the sheaf cohomology of and at the other lies the de Rham cohomology. Related ideas The de Rham cohomology has inspired many mathematical ideas, including Dolbeault cohomology, Hodge theory, and the Atiyah–Singer index theorem. 
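As in the previous block, the key formulas referred to in this passage are restated here in standard notation, on the assumption that the inline math was stripped during extraction: the torus computation and the integration pairing underlying de Rham's theorem.

```latex
% De Rham cohomology of the n-torus T^n:
H^{k}_{\mathrm{dR}}(T^{n}) \cong \mathbb{R}^{\binom{n}{k}}
% De Rham's theorem: integrating a closed k-form over a k-cycle,
%   I(\omega)([c]) = \int_{c} \omega ,
% induces an isomorphism with real singular cohomology:
H^{k}_{\mathrm{dR}}(M) \;\xrightarrow{\;\cong\;}\; H^{k}(M;\mathbb{R})
```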
However, even in more classical contexts, the theorem has inspired a number of developments. Firstly, the Hodge theory proves that there is an isomorphism between the cohomology consisting of harmonic forms and the de Rham cohomology consisting of closed forms modulo exact forms. This relies on an appropriate definition of harmonic forms and of the Hodge theorem. For further details see Hodge theory. Harmonic forms If is a compact Riemannian manifold, then each equivalence class in contains exactly one harmonic form. That is, every member of a given equivalence class of closed forms can be written as where is exact and is harmonic: . Any harmonic function on a compact connected Riemannian manifold is a constant. Thus, this particular representative element can be understood to be an extremum (a minimum) of all cohomologously equivalent forms on the manifold. For example, on a -torus, one may envision a constant -form as one where all of the "hair" is combed neatly in the same direction (and all of the "hair" having the same length). In this case, there are two cohomologically distinct combings; all of the others are linear combinations. In particular, this implies that the 1st Betti number of a -torus is two. More generally, on an -dimensional torus , one can consider the various combings of -forms on the torus. There are choose such combings that can be used to form the basis vectors for ; the -th Betti number for the de Rham cohomology group for the -torus is thus choose . More precisely, for a differential manifold , one may equip it with some auxiliary Riemannian metric. Then the Laplacian is defined by with the exterior derivative and the codifferential. The Laplacian is a homogeneous (in grading) linear differential operator acting upon the exterior algebra of differential forms: we can look at its action on each component of degree separately. If is compact and oriented, the dimension of the kernel of the Laplacian acting upon the space of -forms is then equal (by Hodge theory) to that of the de Rham cohomology group in degree : the Laplacian picks out a unique harmonic form in each cohomology class of closed forms. In particular, the space of all harmonic -forms on is isomorphic to The dimension of each such space is finite, and is given by the -th Betti number. Hodge decomposition Let be a compact oriented Riemannian manifold. The Hodge decomposition states that any -form on uniquely splits into the sum of three components: where is exact, is co-exact, and is harmonic. One says that a form is co-closed if and co-exact if for some form , and that is harmonic if the Laplacian is zero, . This follows by noting that exact and co-exact forms are orthogonal; the orthogonal complement then consists of forms that are both closed and co-closed: that is, of harmonic forms. Here, orthogonality is defined with respect to the inner product on : By use of Sobolev spaces or distributions, the decomposition can be extended for example to a complete (oriented or not) Riemannian manifold. See also Hodge theory Integration along fibers (for de Rham cohomology, the pushforward is given by integration) Sheaf theory -lemma for a refinement of exact differential forms in the case of compact Kähler manifolds. Citations References External links Idea of the De Rham Cohomology in Mathifold Project Cohomology theories Differential forms
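The Hodge-theoretic statements in this passage rest on a few standard formulas, reproduced below in conventional notation as a hedged reconstruction of the missing inline math.

```latex
% Hodge Laplacian built from the exterior derivative d and the codifferential \delta:
\Delta = d\,\delta + \delta\,d
% On a compact oriented Riemannian manifold, harmonic k-forms represent cohomology:
\mathcal{H}^{k}(M) = \{\,\omega \in \Omega^{k}(M) : \Delta\omega = 0\,\}
  \;\cong\; H^{k}_{\mathrm{dR}}(M)
% Hodge decomposition of an arbitrary k-form into exact, co-exact and harmonic parts:
\omega = d\alpha + \delta\beta + \gamma, \qquad \Delta\gamma = 0 .
```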
De Rham cohomology
Engineering
2,290
3,173,222
https://en.wikipedia.org/wiki/Exemestane
Exemestane, sold under the brand name Aromasin among others, is a medication used to treat breast cancer. It is a member of the class of antiestrogens known as aromatase inhibitors. Some breast cancers require estrogen to grow. Those cancers have estrogen receptors (ERs), and are called ER-positive. They may also be called estrogen-responsive, hormonally-responsive, or hormone-receptor-positive. Aromatase is an enzyme that synthesizes estrogen. Aromatase inhibitors block the synthesis of estrogen. This lowers the estrogen level, and slows the growth of cancers. Medical uses Exemestane is indicated for the adjuvant treatment of postmenopausal women with estrogen-receptor positive early breast cancer who have received two to three years of tamoxifen and are switched to it for completion of a total of five consecutive years of adjuvant hormonal therapy. US FDA approval was in October 1999. Exemestane is also indicated for the treatment of advanced breast cancer in postmenopausal women whose disease has progressed following tamoxifen therapy. For premenopausal women with hormone-receptor–positive breast cancer, adjuvant treatment with ovarian suppression plus the aromatase inhibitor exemestane, as compared with ovarian suppression plus tamoxifen, provides a new treatment option that reduces the risk of recurrence. The TEXT and SOFT trials demonstrated improved disease free survival in patients treated with exemestane and ovarian suppression compared to the tamoxifen and ovarian suppression group. Premenopausal women who receive ovarian suppression may now benefit from an aromatase inhibitor, a class of drugs that until now has been recommended only for postmenopausal women. Contraindications The drug is contraindicated in premenopausal women, which of course includes pregnant and lactating women. Side effects The most common side effects (more than 10% of patients) are hot flashes and sweating, which are typical of estrogen deficiency as caused by exemestane, and also insomnia, headache, and joint pain. Nausea and fatigue are mainly observed in patients with advanced breast cancer. An occasional decrease in lymphocytes has been observed in approximately 20% of patients receiving Aromasin, particularly in patients with pre-existing lymphopenia. Exemestane has androgenic properties similarly to formestane and can produce androgenic side effects such as acne and weight gain, although these are generally associated with supratherapeutic dosages of the drug. Overdose Single doses of up to at least 32-fold (800 mg), as well as continuous therapy with 24-fold (600 mg) the usual daily dose are well tolerated. No life-threatening overdosing is known in humans, but only in animal studies with 2000- to 4000-fold doses (adjusted to body surface area). Interactions Exemestane is metabolized by the liver enzyme CYP3A4. While the CYP3A4 inhibitor ketoconazole had no significant effect on exemestane levels in a clinical trial, the strong CYP3A4 inductor rifampicin significantly cut exemenstane levels about in half (AUC −54%, Cmax −41% for a single dose), potentially compromising its effectiveness. Other 3A4 inductors such as carbamazepine and St John's Wort are expected to have similar effects. The clinical relevance of this effect has not been investigated. Estrogens probably reduce exemestane effectiveness: It would usually be counter-productive to reduce the body's estrogen synthesis with exemestane and then substitute estrogen with pharmaceuticals. 
Pharmacology Pharmacodynamics Exemestane is an oral steroidal aromatase inhibitor that is used in ER-positive breast cancer in addition to surgery and/or radiation in post-menopausal women. The main source of estrogen is the ovaries in premenopausal women, while in post-menopausal women most of the body's estrogen is produced via the conversion of androgens into estrogen by the aromatase enzyme in the peripheral tissues (i.e. adipose tissue like that of the breast) and a number of sites in the brain. Estrogen is produced locally via the actions of the aromatase enzyme in these peripheral tissues where it acts locally. Any circulating estrogen in post-menopausal women as well as men is the result of estrogen escaping local metabolism and entering the circulatory system. Exemestane is an irreversible, steroidal aromatase inactivator of type I, structurally related to the natural substrate 4-androstenedione. It acts as a false substrate for the aromatase enzyme, and is processed to an intermediate that binds irreversibly to the active site of the enzyme causing its inactivation, an effect also known as "suicide inhibition." By being structurally similar to enzyme targets, exemestane permanently binds to the enzymes, preventing them from converting androgen into estrogen. Type II aromatase inhibitors such as anastrozole and letrozole, by contrast, are not steroids and work by interfering with the aromatase's heme. A study conducted on young adult males found that the estrogen suppression rate for exemestane varied from 35% for estradiol (E2) to 70% for estrone (E1). Pharmacokinetics Exemestane is quickly absorbed from the gut, but undergoes a strong first-pass effect in the liver. Highest blood plasma concentrations are reached after 1.2 hours in breast cancer patients and after 2.9 hours in healthy subjects. Maximal aromatase inhibition occurs after two to three days. 90% of the absorbed substance are bound to plasma proteins. The liver enzyme CYP3A4 oxidizes the methylidene group in position 6, and the 17-keto group (on the five-membered ring) is reduced by aldo-keto reductases to an alcohol. Of the resulting metabolites, 40% are excreted via the urine and 40% via the feces within a week. The original substance accounts for only 1% of excretion in the urine. The terminal half-life is 24 hours. Chemistry Exemestane is known chemically as 6-methylideneandrosta-1,4-diene-3,17-dione. Like the aromatase inhibitors formestane and atamestane, exemestane is a steroid that is structurally similar to 4-androstenedione, the natural substrate of aromatase. It is distinguished from the natural substance only by the methylidene group in position 6 and an additional double bond in position 1. Pure exemestane is a white to off-white powder that is soluble in DMSO to at least 20 mg/mL. Optical rotation [α]D is +250 to 300° (per g/100 cm3 and decimetre at 589 nm wavelength). Society and culture Performance enhancement Exemestane has been used in doping to raise luteinizing hormone (LH) and follicle stimulating hormone (FSH) levels, which in turn increases the ratio of male over female sexual hormones and so improves performance. The drug also counteracts gynecomastia as well as fat and water retention following excessive aromatase production due to testosterone doping. It is also used by steroid users to lower female sexual horomone levels following a cycle of steroids, often called a "post-cycle therapy", it is also used alongside Selective estrogen receptor modulators in this. 
Rarely, it is used recreationally by teenagers to delay epiphyseal plate closure and increase adult height, particularly among members of the Looksmaxxing and incel communities, where its use is documented on their forums. However, its effectiveness for this purpose is debatable. Along with other aromatase inhibitors, exemestane is on the World Anti-Doping Agency's list of prohibited substances. Research Oral exemestane 25 mg/day for 2–3 years of adjuvant therapy was generally more effective than 5 years of continuous adjuvant tamoxifen in the treatment of postmenopausal women with early-stage estrogen receptor-positive/unknown receptor status breast in a large well-designed trial. Preliminary data from the open-label TEAM trial comparing exemestane with tamoxifen indicated in 2009 that exemestane 25 mg/day is also effective in the primary adjuvant treatment of early-stage breast cancer in postmenopausal women. Interim phase III trial results in 2011 showed that adding everolimus to exemestane therapy against advanced breast cancer can significantly improve progression-free survival compared with exemestane therapy alone. A Phase III trial was reported in 2011, concluding that the use of exemestane in postmenopausal women at an increased risk for breast cancer reduced the incidence of invasive breast cancer. In 4,560 women, after 35 months, the administration of exemestane at a dose of 25 mg/day resulted in a 65% reduction in the risk of breast cancer compared with placebo; annual incidence rates were 0.19% and 0.55%, respectively (hazard ratio: 0.35; 95% CI [0.18-0.70]; p = 0.002). References External links Aromasin official website Aromasin prescribing information Anabolic–androgenic steroids Androstanes Aromatase inhibitors Diketones Enantiopure drugs Hormonal antineoplastic drugs Drugs developed by Pfizer
Exemestane
Chemistry
2,064
45,464,923
https://en.wikipedia.org/wiki/Penicillium%20emmonsii
Penicillium emmonsii is a species of fungus in the genus Penicillium. See also List of Penicillium species References emmonsii Fungi described in 1979 Fungus species
Penicillium emmonsii
Biology
40
63,088,283
https://en.wikipedia.org/wiki/PANSAT
PANSAT (Petite Amateur Navy Satellite, also known as OSCAR 34) was an amateur radio satellite. It was launched by Space Shuttle Discovery during the STS-95 mission as part of the third International Extreme Ultraviolet Hitchhiker (IEH-3) mission, on 30 October 1998 from Kennedy Space Center, Florida. The satellite was built by students from the Naval Postgraduate School in Monterey, California. It offered the possibility of packet radio transmission in BPSK or Direct-Sequence Spread Spectrum in the 70 cm band. The satellite was configured in a sphere-like shape, featuring 26 sides used for solar cell and antenna placement. The spacecraft supplied direct-sequence, spread-spectrum modulation with an operating center frequency of 436.5 MHz, a bit rate of 9600 bit/s and 9 MB of message storage. References Satellites orbiting Earth Amateur radio satellites Spacecraft launched by the Space Shuttle Spacecraft launched in 1998
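A quick back-of-the-envelope check of the figures quoted above (a 9600 bit/s link and 9 MB of message storage): the sketch below estimates how long the downlink would take to move the full message store. The assumption that 1 MB means 10^6 bytes is mine, not the article's.

```python
# Time to transfer PANSAT's quoted 9 MB message store at 9600 bit/s.
# Assumption (not stated in the article): 1 MB = 10**6 bytes.
STORAGE_BYTES = 9 * 10**6
BIT_RATE = 9600  # bits per second

seconds = STORAGE_BYTES * 8 / BIT_RATE
print(f"{seconds:.0f} s  (~{seconds / 3600:.1f} hours)")  # 7500 s, roughly 2.1 hours
```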
PANSAT
Astronomy
182
25,911,698
https://en.wikipedia.org/wiki/Pluteus%20brunneidiscus
Pluteus brunneidiscus is a species of agaric fungus in the family Pluteaceae. It was first described scientifically by American mycologist William Alphonso Murrill in 1917. It is found in Europe (Spain) and North America. Description Pileus and stipe without blue-green tinges. Specimens are small to medium-sized and have a brown pileus which is usually darker at the center. Habitat and distribution Solitary, on wood of broad-leaved trees. Found in the U.S. and in Spain from June to November. Chemistry These mushrooms contain psilocybin. See also List of Pluteus species References External links Fungi described in 1917 Fungi of Europe Fungi of North America brunneidiscus Psychoactive fungi Psychedelic tryptamine carriers Fungus species
Pluteus brunneidiscus
Biology
169
5,742,067
https://en.wikipedia.org/wiki/Biohydrogen
Biohydrogen is H2 that is produced biologically. Interest is high in this technology because H2 is a clean fuel and can be readily produced from certain kinds of biomass, including biological waste. Furthermore, some photosynthetic microorganisms are capable of producing H2 directly from water splitting, using light as the energy source. Besides the promising possibilities of biological hydrogen production, many challenges characterize this technology. The first challenges are those intrinsic to H2, such as storage and transportation of an explosive noncondensible gas. Additionally, hydrogen-producing organisms are poisoned by O2, and yields of H2 are often low. Biochemical principles The main reactions driving hydrogen formation involve the oxidation of substrates to obtain electrons. These electrons are then transferred to free protons to form molecular hydrogen. This proton reduction reaction is normally performed by an enzyme family known as hydrogenases. In heterotrophic organisms, electrons are produced during the fermentation of sugars. Hydrogen gas is produced in many types of fermentation as a way to regenerate NAD+ from NADH. Electrons are transferred to ferredoxin, or can be directly accepted from NADH by a hydrogenase, producing H2. Because of this, most of the reactions start with glucose, which is converted to acetic acid. C6H12O6 + 2 H2O -> 2 CH3COOH + 2 CO2 + 4 H2 A related reaction gives formate instead of carbon dioxide: C6H12O6 + 2 H2O -> 2 CH3COOH + 2 HCOOH + 2 H2 These reactions are exergonic by 216 and 209 kcal/mol, respectively. It has been estimated that 99% of all organisms utilize or produce dihydrogen (H2). Most of these species are microbes, and their ability to use or produce H2 as a metabolite arises from the expression of H2 metalloenzymes known as hydrogenases. Enzymes within this widely diverse family are commonly sub-classified into three different types based on the active-site metal content: [FeFe]-hydrogenases (iron-iron), [NiFe]-hydrogenases (nickel-iron), and [Fe]-hydrogenases (iron-only). Many organisms express these enzymes. Notable examples are members of the genera Clostridium, Desulfovibrio and Ralstonia, or the pathogen Helicobacter, most of them strict anaerobes or facultative microorganisms. Other microorganisms, such as green algae, also express highly active hydrogenases, as is the case for members of the genus Chlamydomonas. Due to the extreme diversity of hydrogenase enzymes, ongoing efforts are focused on screening for novel enzymes with improved features, as well as on engineering already characterized hydrogenases to confer more desirable characteristics on them. Production by algae Biological hydrogen production with algae is a method of photobiological water splitting which is carried out in a closed photobioreactor and is based on the production of hydrogen as a solar fuel by algae. Algae produce hydrogen under certain conditions. In 2000 it was discovered that if C. reinhardtii algae are deprived of sulfur they will switch from the production of oxygen, as in normal photosynthesis, to the production of hydrogen. Green algae express [FeFe] hydrogenases, some of which are considered the most efficient hydrogenases, with turnover rates above 10^4 s−1. This remarkable catalytic efficiency is nonetheless overshadowed by their extreme sensitivity to oxygen, as they are irreversibly inactivated by O2.
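The acetate fermentation written above fixes the maximum hydrogen yield per mole of glucose; the short sketch below turns that stoichiometry into a mass yield. The molar masses are standard values, and the result is an idealised stoichiometric ceiling rather than an experimental figure from the article.

```python
# Stoichiometry check based on the acetate fermentation quoted above:
# C6H12O6 + 2 H2O -> 2 CH3COOH + 2 CO2 + 4 H2  (4 mol H2 per mol glucose;
# the formate route yields 2 mol H2 instead).
M_GLUCOSE = 180.16   # g/mol
M_H2 = 2.016         # g/mol
H2_PER_GLUCOSE = 4   # from the equation above

def h2_yield_per_kg_glucose(mol_h2_per_mol_glucose: int = H2_PER_GLUCOSE) -> float:
    """Theoretical grams of H2 obtainable from 1 kg of glucose."""
    mol_glucose = 1000.0 / M_GLUCOSE
    return mol_glucose * mol_h2_per_mol_glucose * M_H2

print(f"{h2_yield_per_kg_glucose():.1f} g H2 per kg glucose")  # about 44.8 g
```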
When the cells are deprived of sulfur, oxygen evolution stops due to photo-damage of photosystem II; in this state the cells start consuming O2 and provide the ideal anaerobic environment for the native [FeFe] hydrogenases to catalyze H2 production. Photosynthesis Photosynthesis in cyanobacteria and green algae splits water into hydrogen ions and electrons. The electrons are transported over ferredoxins. [FeFe]-hydrogenases (enzymes) combine them into hydrogen gas. In Chlamydomonas reinhardtii, Photosystem II produces, by direct conversion of sunlight, 80% of the electrons that end up in the hydrogen gas. In 2020, scientists reported the development of an algal-cell-based micro-emulsion for multicellular spheroid microbial reactors capable of producing hydrogen alongside either oxygen or CO2 via photosynthesis in daylight under air. Enclosing the microreactors with synergistic bacteria was shown to increase levels of hydrogen production via reduction of O2 concentrations. Improving production by light harvesting antenna reduction The chlorophyll (Chl) antenna size in green algae is minimized, or truncated, to maximize photobiological solar conversion efficiency and H2 production. It has been shown that the photosystem II light-harvesting protein LHCBM9 promotes efficient light energy dissipation. The truncated Chl antenna size minimizes absorption and wasteful dissipation of sunlight by individual cells, resulting in better light utilization efficiency and greater photosynthetic efficiency when the green algae are grown as a mass culture in bioreactors. Economics With current reports for algae-based biohydrogen, it would take about 25,000 square kilometres of algal farming to produce biohydrogen equivalent to the energy provided by gasoline in the US alone. This area represents approximately 10% of the area devoted to growing soya in the US. Bioreactor design issues Restriction of photosynthetic hydrogen production by accumulation of a proton gradient. Competitive inhibition of photosynthetic hydrogen production by carbon dioxide. Requirement for bicarbonate binding at photosystem II (PSII) for efficient photosynthetic activity. Competitive drainage of electrons by oxygen in algal hydrogen production. The economics must reach a price competitive with other sources of energy, and they depend on several parameters. A major technical obstacle is the efficiency of converting solar energy into chemical energy stored in molecular hydrogen. Attempts are in progress to solve these problems via bioengineering. Production by cyanobacteria Biological hydrogen production is also observed in nitrogen-fixing cyanobacteria. These microorganisms can grow to form filaments. Under nitrogen-limited conditions, some cells can specialize and form heterocysts, which ensure an anaerobic intracellular space to ease N2 fixation by the nitrogenase enzyme, which is also expressed inside them. Under nitrogen-fixation conditions, the nitrogenase enzyme accepts electrons and consumes ATP to break the triple dinitrogen bond and reduce it to ammonia. During the catalytic cycle of the nitrogenase enzyme, molecular hydrogen is also produced. N2 + 8 H+ + 8 NAD(P)H + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi + 8 NAD(P)+ Nevertheless, since the production of H2 is an important loss of energy for the cells, most nitrogen-fixing cyanobacteria also feature at least one uptake hydrogenase.
Uptake hydrogenases exhibit a catalytic bias towards hydrogen oxidation, and can thus assimilate the produced H2 as a way to recover part of the energy invested during the nitrogen fixation process. History In 1933, Marjory Stephenson and her student Stickland reported that cell suspensions catalysed the reduction of methylene blue with H2. Six years later, Hans Gaffron observed that the green photosynthetic alga Chlamydomonas reinhardtii would sometimes produce hydrogen. In the late 1990s Anastasios Melis discovered that deprivation of sulfur induces the alga to switch from the production of oxygen (normal photosynthesis) to the production of hydrogen. He found that the enzyme responsible for this reaction is hydrogenase, but that the hydrogenase lost this function in the presence of oxygen. Melis also discovered that depleting the amount of sulfur available to the algae interrupted their internal oxygen flow, allowing the hydrogenase an environment in which it can react, causing the algae to produce hydrogen. Chlamydomonas moewusii is also a promising strain for the production of hydrogen. Industrial hydrogen Competing with biohydrogen, at least for commercial applications, are many mature industrial processes. Steam reforming of natural gas - sometimes referred to as steam methane reforming (SMR) - is the most common method of producing bulk hydrogen, accounting for about 95% of world production. CH4 + H2O <-> CO + 3 H2 See also References External links DOE - A Prospectus for Biological Production of Hydrogen FAO Maximizing Light Utilization Efficiency and Hydrogen Production in Microalgal Cultures DIY Algae/Hydrogen Bioreactor 2004 EERE-CYCLIC PHOTOBIOLOGICAL ALGAL H2-PRODUCTION Anaerobic digestion Biodegradable waste management Biodegradation Biofuels Biotechnology products Fuel gas Fuels Hydrogen Hydrogen biology Hydrogen economy Hydrogen production Waste management
Biohydrogen
Chemistry,Engineering,Biology
1,843
46,477,850
https://en.wikipedia.org/wiki/Abell%202162
Abell 2162 is a galaxy cluster in the Abell catalogue located in the constellation Corona Borealis. It is a member of the Hercules Superclusters, the redshifts of the member galaxies of which lie between 0.0304 and 0.0414. The cluster hosts a massive Type-cD galaxy called NGC 6086. See also Abell catalogue List of Abell clusters X-ray astronomy References Further reading 2162 Galaxy clusters Corona Borealis Hercules Superclusters
Abell 2162
Astronomy
102
19,874,353
https://en.wikipedia.org/wiki/Surface-extended%20X-ray%20absorption%20fine%20structure
Surface-extended X-ray absorption fine structure (SEXAFS) is the surface-sensitive equivalent of the EXAFS technique. This technique involves the illumination of the sample by high-intensity X-ray beams from a synchrotron and monitoring their photoabsorption by detecting in the intensity of Auger electrons as a function of the incident photon energy. Surface sensitivity is achieved by the interpretation of data depending on the intensity of the Auger electrons (which have an escape depth of ~1–2 nm) instead of looking at the relative absorption of the X-rays as in the parent method, EXAFS. The photon energies are tuned through the characteristic energy for the onset of core level excitation for surface atoms. The core holes thus created can then be filled by nonradiative decay of a higher-lying electron and communication of energy to yet another electron, which can then escape from the surface (Auger emission). The photoabsorption can therefore be monitored by direct detection of these Auger electrons to the total photoelectron yield. The absorption coefficient versus incident photon energy contains oscillations which are due to the interference of the backscattered Auger electrons with the outward propagating waves. The period of this oscillations depends on the type of the backscattering atom and its distance from the central atom. Thus, this technique enables the investigation of interatomic distances for adsorbates and their coordination chemistry. This technique benefits from long range order not being required, which sometimes becomes a limitation in the other conventional techniques like LEED (about 10 nm). This method also largely eliminates the background from the signal. It also benefits because it can probe different species in the sample by just tuning the X-ray photon energy to the absorption edge of that species. Joachim Stöhr played a major role in the initial development of this technique. Experimental setup Synchrotron radiation sources Normally, the SEXAFS work is done using synchrotron radiation as it has highly collimated, plane-polarized and precisely pulsed X-ray sources, with fluxes of 1012 to 1014 photons/sec/mrad/mA and greatly improves the signal-to-noise ratio over that obtainable from conventional sources. A bright source X-ray source is illuminating the sample and the transmission is being measured as the absorption coefficient as where I is the transmitted and Io is the incident intensity of the X-rays. Then it is plotted against the energy of the incoming X-ray photon energy. Electron detectors In SEXAFS, an electron detector and a high-vacuum chamber is required to calculate the Auger yields instead of the intensity of the transmitted X-ray waves. The detector can be either an energy analyzer, as in the case of Auger measurements, or an electron multiplier, as in the case of total or partial secondary electron yield. The energy analyzer gives rise to better resolution while the electron multiplier has larger solid angle acceptance. 
Signal-to-noise ratio The equation governing the signal-to-noise ratio is where μA is the absorption coefficient; In is the nonradiative contribution in electron counts/sec; Ib is the background contribution in electron counts/sec; μA is the absorption by the SEXAFS-producing element; μT is the total absorption by all the elements; Io is the incident intensity; n is the attenuation length; Ω/(4π) is the solid angle acceptance for the detector; εn is the nonradiative yield which is the probability that the electron will not decay radiatively and will actually get emitted as an Auger electron. Physics Basics The absorption of an X-ray photon by the atom excites a core level electron, thus generating a core hole. This generates a spherical electron wave with the excited atom as the center. The wave propagates outwards and get scattered off from the neighbouring atoms and is turned back towards the central ionized atom. The oscillatory component of the photoabsorption originates from the coupling of this reflected wave to the initial state via the dipole operator Mfs as in (1). The Fourier transform of the oscillations gives the information about the spacing of the neighboring atoms and their chemical environment. This phase information is carried over to the oscillations in the Auger signal because the transition time in Auger emission is of the same order of magnitude as the average time for a photoelectron in the energy range of interest. Thus, with a proper choice of the absorption edge and characteristic Auger transition, measurement of the variation of the intensity in a particular Auger line as a function of incident photon energy would be a measure of the photoabsorption cross section. This excitation also triggers various decay mechanisms. These can be of radiative (fluorescence) or nonradiative (Auger and Coster–Kronig) nature. The intensity ratio between the Auger electron and X-ray emissions depends on the atomic number Z. The yield of the Auger electrons decreases with increasing Z. Theory of EXAFS The cross section of photoabsorption is given by Fermi's golden rule, which, in the dipole approximation, is given as where the initial state, i with energy Ei, consists of the atomic core and the Fermi sea, and the incident radiation field, the final state, ƒ with energy Eƒ (larger than the Fermi level), consists of a core hole and an excited electron. ε is the polarization vector of the electric field, e the electron charge, and ħω the x-ray photon energy. The photoabsorption signal contains a peak when the core level excitation is neared. It is followed by an oscillatory component which originates from the coupling of that part of the electron wave which upon scattering by the medium is turned back towards the central ionized atom, where it couples to the initial state via the dipole operator, Mi. 
Assuming single-scattering and small-atom approximation for kRj >> 1, where Rj is the distance from the central excited atom to the jth shell of neighbors and k is the photoelectrons wave vector, where ħωT is the absorption edge energy and Vo is the inner potential of the solid associated with exchange and correlation, the following expression for the oscillatory component of the photoabsorption cross section (for K-shell excitation) is obtained: where the atomic scattering factor in a partial wave expansion with partial wave phase-shifts δl is given by Pl(x) is the lth Legendre polynomial, γ is an attenuation coefficient, exp(−2σi2k2) is a Debye–Waller factor and weight Wj is given in terms of the number of atoms in the jth shell and their distance as The above equation for the χ(k) forms the basis of a direct, Fourier transform, method of analysis which has been successfully applied to the analysis of the EXAFS data. Incorporation of EXAFS-Auger The number of electrons arriving at the detector with an energy of the characteristic WαXY Auger line (where Wα is the absorption edge core-level of element α, to which the incident x-ray line has been tuned) can be written as where NB(ħω) is the background signal and is the Auger signal we are interested in, where where is the probability that an excited atom will decay via WαXY Auger transition, ρα(z) is the atomic concentration of the element α at depth z, λ(WαXY) is the mean free path for an WαXY Auger electron, θ is the angle that the escaping Auger electron makes with the surface normal and κ is the photon emission probability which is dictated the atomic number. As the photoabsorption probability, is the only term that is dependent on the photon energy, the oscillations in it as a function of energy would give rise to similar oscillations in the . Notes References Stöhr, J. (1988) "SEXAFS: Everything you always wanted to know about SEXAFS but were afraid to ask" , in X-Ray Absorption: Principles, Applications, Techniques of EXAFS, SEXAFS and XANES, Edits. D. Koningsberger and R. Prins, Wiley, 1988 External links Details about SEXAFS X-ray absorption spectroscopy
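The oscillatory signal described in this section is usually evaluated numerically. The sketch below implements a generic single-scattering EXAFS-type expression using the ingredients named above (shell weights W_j = N_j/R_j^2, a Debye–Waller damping factor, an attenuation term, and a sin(2kR_j + phase) interference factor). The exact sign and phase conventions vary between treatments, and all numerical values here are illustrative assumptions rather than parameters from the article.

```python
import numpy as np

# Generic single-scattering chi(k) in the textbook form sketched in the text:
# each coordination shell j contributes a damped sine wave whose period in k
# encodes the shell distance R_j and whose amplitude encodes N_j and disorder.
def chi(k, shells, mean_free_path=5.0):
    """k in 1/Angstrom; shells = list of (N_j, R_j, sigma_j, amplitude_j, phase_j)."""
    total = np.zeros_like(k)
    for N, R, sigma, amp, phase in shells:
        W = N / R**2                                     # shell weight W_j
        damping = np.exp(-2.0 * sigma**2 * k**2)         # Debye–Waller factor
        attenuation = np.exp(-2.0 * R / mean_free_path)  # inelastic-loss term
        total += W * amp * damping * attenuation * np.sin(2.0 * k * R + phase) / k
    return total

k = np.linspace(3.0, 12.0, 400)                    # photoelectron wave vector grid
example = chi(k, shells=[(6, 2.1, 0.07, 0.8, 1.0)])  # one hypothetical shell
```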
Surface-extended X-ray absorption fine structure
Chemistry,Materials_science,Engineering
1,757
12,296,703
https://en.wikipedia.org/wiki/HIPASS
The H I Parkes All Sky Survey (HIPASS) is a large survey for neutral atomic hydrogen (H I). Most of the data was taken between 1997 and 2002 using CSIRO's 64 m Parkes Telescope. HIPASS covered 71% of the sky and identified more than 5000 galaxies; the major galaxy catalogs are: the "HIPASS Bright Galaxy Catalog" (HIPASS BGC), the southern HIPASS catalog (HICAT), and the northern HIPASS catalog (NHICAT) Discoveries include over 5000 galaxies (incl. several new galaxies), the Leading Arm of the Magellanic Stream and a few gas clouds devoid of stars. Survey HIPASS covers a velocity range of −1,280 to 12,700 km/s. It was the first blind HI survey to cover the entire southern sky and the northern sky up to +25°. Technical overview, calibration and imaging (Barnes et al. 2001). Southern Sky observations Observations of the southern sky started in February 1997, and were completed in March 2000, consisting of 23,020 eight-degree scans of each of 9 minutes duration. HIPASS scanned the entire southern sky five times. The southern HIPASS galaxy catalog (HICAT) contains 4315 HI sources. Northern Sky observations Northern HIPASS extended the survey into the northern sky. The entire Virgo Cluster region was observed in Northern HIPASS. NHICAT, the catalogue of the northern extension of HIPASS contains 1,002 H I sources. CHIPASS Archival data from HIPASS and the HI Zone of Avoidance (HIZOA) survey were reprocessed to make a new 20cm confusion-limited continuum map of the sky south of declination +25°. Its relatively high sensitivity and resolution (compared to other single-dish surveys) and low level of artefacts has made this survey invaluable, particularly for merging with interferometric data such as WALLABY to improve the coverage of extended structure. Multibeam Receiver Observations for HIPASS were taken using the Parkes 21-cm Multibeam Receiver. The instrument consists of a focal-plane array of 13 individual receivers arranged in a hexagonal pattern. Built in a collaboration between numerous institutions, it was funded by the Australian Research Council (ARC) and the Australia Telescope National Facility (ATNF) to undertake the HIPASS and ZOA surveys. Discoveries Leading arm of Magellanic Stream HIPASS discovered the Leading Arm of the Magellanic Stream. This is an extension of the Magellanic Stream beyond the Magellanic clouds. The existence of the Leading Arm is predicted by models of a tidal interaction between the Magellanic Clouds and the Milky Way. HIPASS J0731-69 HIPASS J0731-69 is a cloud of gas devoid of any stars. It is associated with the asymmetric spiral galaxy NGC 2442. It is likely that HIPASS J0731-69 was torn loose from NGC 2442 by a companion. HIPASS J1712-64 HIPASS J1712-64 is an isolated extragalactic cloud of neutral hydrogen with no associated stars. The cloud is a binary system and is not dense enough to form stars. HIPASS J1712-64 was probably ejected during an interaction between the Magellanic clouds and the Milky way. New galaxies in the Centaurus A/M83 Group Ten new galaxies were identified in the Centaurus A/M83 Group, bringing the total (at the time) to 31 galaxies. See also HIJASS, the H I Jodrell All Sky Survey References Astronomical imaging Astronomical surveys
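For orientation, the velocity limits quoted above translate into approximate redshifts via the low-velocity relation z ≈ v/c. This conversion is a standard approximation added here for illustration, not a statement from the survey papers.

```python
# Convert the HIPASS velocity limits quoted in the text into rough redshifts.
C_KM_S = 299_792.458  # speed of light in km/s

for v in (-1_280.0, 12_700.0):      # km/s, survey limits from the text
    print(f"v = {v:>8.0f} km/s  ->  z = {v / C_KM_S:+.4f}")
# the upper limit comes out near z = +0.042
```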
HIPASS
Astronomy
740
649,356
https://en.wikipedia.org/wiki/Putty
Putty is a material with high plasticity, similar in texture to clay or dough, typically used in domestic construction and repair as a sealant or filler. Although some types of putty (typically those using linseed oil) slowly polymerise and become stiff, many putties can be reworked indefinitely, in contrast to other types of filler which typically set solid relatively rapidly. Chemical composition Putty, or lime putty, is made from a mixture of calcium oxide (CaO) and water (H2O) in proportions of 38% and 62% by weight respectively; as a result, the mixture forms hydrated lime (Ca(OH)2), which makes up about half of the weight. Another putty mixture may be calcium carbonate-based (CaCO3, 750-850 parts) with an admixture of CaO (ash calcium, 120-180 parts), white cement (40-60 parts), and much smaller (fractional) amounts of talc powder. Applications Use in construction Putty has been used extensively in glazing for fixing and sealing panes of glass into wooden frames (or sashes), although its use is decreasing with the prevalence of PVC and metal window frames which use synthetic sealants such as silicone. Glazing putty is traditionally made by mixing a base of whiting (finely ground chalk) with linseed oil in various proportions. Historically, white lead was sometimes mixed with the whiting. There are a number of synthetic alternatives such as polybutene-based putties, where the polybutene is a low molecular weight oligomer replacing the linseed oil. Butyl rubber is also added to the mixture to provide some strength and flexibility. Painter's putty is typically a linseed oil-based product used for filling holes, minor cracks, and defacements in wood only. Putties can also be made intumescent, in which case they are used for firestopping as well as for padding of electrical outlet boxes in fire-resistance rated drywall assemblies. In the latter case, hydrates in the putty produce an endothermic reaction to mitigate heat transfer to the unexposed side. In woodworking, water-based putties are more commonly used, as these emit very little odour, are more easily cleaned up and are compatible with water-based and latex sealers. Two-part hardening putties Polyester putty and epoxy putty are thermosetting polymers that can be molded by hand, but become permanently rigid after curing. Pratley Putty is an epoxy putty used primarily for steel bonding. Milliput is another popular multipurpose epoxy putty. Bondo is a polyester-based automotive body filler, which is commonly used in collision repair. Plumber's putty Plumber's putty is the common name encompassing a variety of products of completely different compositions, all used for making watertight seals in plumbing. It is a pliable substance used to make watertight seals around faucets and drains. The putty is a basic component of a plumber's toolkit and is often used when replacing plumbing fixtures. Plumber's putty formulations vary but commonly include powdered clay and linseed oil. Other formulas use limestone, talc, or fish oil. RTV silicone or epoxy sealants may be used in place of putty. Plumber's putty contains mineral oils and/or vegetable oils so it can stain porous materials such as marble or some plastics. The oils can also react chemically with some plastics, slowly making them brittle. Other uses Certain types of putty also have use in the field of terminal ballistics, where the putty can accurately represent the average density of the human body.
As such it can be used, for instance, to test the penetrative power of projectiles, or the stopping power of body armour. Modeling clay and play putty, such as Plasticine and Silly Putty, are common toys. See also Blu Tack Caulk Grain filler Rope caulk Spackling paste Wood putty Whitewash References Putty & Mastic at wiki.DIY FAQ.org.uk Materials Passive fire protection Firestops
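A quick stoichiometric check of the lime putty composition described in the chemical composition section above (CaO + H2O -> Ca(OH)2); the molar masses are standard values and the 100 g basis is only an illustrative assumption:

# Hydration of quicklime in lime putty: CaO + H2O -> Ca(OH)2
M_CaO, M_H2O, M_CaOH2 = 56.08, 18.02, 74.09   # g/mol, standard molar masses
basis = 100.0                                 # g of putty, an illustrative basis
m_cao = 0.38 * basis                          # 38% CaO by weight
m_h2o = 0.62 * basis                          # 62% water by weight
water_needed = m_cao * M_H2O / M_CaO          # ~12 g, so water is in excess and CaO is limiting
m_hydrated_lime = m_cao * M_CaOH2 / M_CaO
print(f"Ca(OH)2 formed ~ {m_hydrated_lime:.1f} g per {basis:.0f} g of putty")
print(f"water consumed ~ {water_needed:.1f} g of the {m_h2o:.0f} g present")
# ~50 g of hydrated lime, i.e. roughly half the weight of the putty, consistent
# with the composition described above; the remainder is mostly excess water.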
Putty
Physics
899
28,182,720
https://en.wikipedia.org/wiki/Twist%20%28software%29
Twist is a test automation and functional testing solution built by Thoughtworks Studios, the software division of ThoughtWorks. It uses Behavior Driven Development (BDD) and Test-driven development (TDD) for functional testing of the application. It is a part of the Adaptive ALM solution consisting of Twist for Agile testing by ThoughtWorks Studios, Mingle for Agile project management and Go for Agile release management. Twist is no longer supported by ThoughtWorks. Features Twist allows test specifications to be written in English or any UTF-8 supported language. Test implementation is done using Java or Groovy. Twist's IDE supports manual, automated and hybrid testing. Twist can be used with any Java based driver. It provides support for: Selenium and Sahi for testing web-based applications; SWTBot for testing Eclipse/SWT applications; Frankenstein for testing Java Swing applications; and Calabash for testing Android and iOS applications. Other listed features include: Fast Script Development; Consolidation of redundant code (refactoring as "Concepts"); Type Ahead and Suggestion; Team coding; Version control, organization, and searching in Confluence; Shared script libraries; Tagging (Test/Production, Categories, etc.) with Filters; Filter scripts based on Tags; Run groups of tests based on Tags. References External links Twist Community Automation software Graphical user interface testing 2008 software
Twist (software)
Engineering
260
42,627,439
https://en.wikipedia.org/wiki/Gas%E2%80%93liquid%20contactor
A gas–liquid contactor is a type of chemical equipment used to carry out mass and heat transfer between a gas phase and a liquid phase. Gas–liquid contactors can be used in separation processes (e.g. distillation, absorption), as gas–liquid reactors, or to achieve both purposes within the same device (e.g. reactive distillation). Typologies They are divided into two main categories: differential gas–liquid contactors: the mass transfer happens along the entire length of the contactor and vapor–liquid equilibrium is not reached at any point of the equipment; stagewise gas–liquid contactors: vapor–liquid equilibrium is reached within each stage of the equipment and mass transfer happens in only a part of the volume of each stage. Examples of differential gas–liquid contactors are: the falling-film column, the packed column, the bubble column, the spray tower and the gas–liquid agitated vessel. Examples of stagewise gas–liquid contactors are: the plate column, the rotating disc contactor and the Venturi tube. Pros and cons Some important factors to take into account when choosing the type of gas–liquid contactor most suitable for a particular application are the liquid hold-up and the surface area of the gas–liquid interface. In particular, the heat and mass transfer rate is higher for equipment with a larger gas–liquid interfacial area, so gas–liquid contactors with a high surface area (e.g. packed column, spray tower) are often preferred when it is important to lower the cost of the equipment. Liquid hold-up is also an important factor for the economy of the process, because for low values of liquid hold-up a larger piece of equipment is needed to achieve the same heat and mass transfer rate. For this reason, gas–liquid contactors with a low liquid hold-up (e.g. the falling-film column) are generally not used at industrial scale. Notes Bibliography Robert Perry, Don W. Green, Perry's Chemical Engineers' Handbook, 8th ed., McGraw-Hill, 2007. Chemical equipment
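A small illustrative sketch of the point made above about interfacial area, using the standard two-film expression for the absorption rate, N = kL * a * V * (C* - C); this relation is textbook material rather than something stated in the article, and the coefficient, areas, volume and concentrations are invented illustrative numbers:

# Standard two-film estimate of the absorption rate in a gas-liquid contactor:
#   N = kL * a * V * (C_star - C_bulk)
# kL: liquid-side mass transfer coefficient, a: interfacial area per unit volume,
# V: working volume, C_star / C_bulk: equilibrium and bulk liquid concentrations.
kL = 1.0e-4                      # m/s (assumed)
V = 2.0                          # m^3 (assumed)
C_star, C_bulk = 50.0, 10.0      # mol/m^3 (assumed)

for name, a in [("contactor A (high interfacial area)", 200.0),
                ("contactor B (low interfacial area)", 50.0)]:   # a in m^2/m^3, assumed
    N = kL * a * V * (C_star - C_bulk)                           # mol/s absorbed
    print(f"{name}: a = {a:.0f} m^2/m^3 -> N ~ {N:.2f} mol/s")
# For the same volume, the design with the larger specific interfacial area transfers
# proportionally more, which is why high-area contactors can be built smaller for a
# given duty.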
Gas–liquid contactor
Chemistry,Engineering
415
33,823,495
https://en.wikipedia.org/wiki/Brunel%20Award
The Brunel Awards are given to railway companies to encourage outstanding visual design in railway architecture, graphics, industrial design and art, technical infrastructure and environmental integration, and rolling stock. The awards are named in honour of Isambard Kingdom Brunel, founder of the Great Western Railway and designer of the giant ship SS Great Eastern. History The Brunel Awards were first awarded in 1985, during the celebrations marking the 150th anniversary of the Great Western Railway. Her Majesty Queen Elizabeth II of the United Kingdom presented the inaugural awards at a ceremony in Bristol, England. Categories Beginning with the 2011 award ceremony, there have been five categories of award; the third category is new. Category 1: rail stations Category 2: technical infrastructure Category 3: freight and railroad support buildings Category 4: industrial design, corporate branding, graphics, furnishing Category 5: rolling stock See also List of architecture prizes References External links The Watford Group website, Brunel Awards page The ninth Brunel Awards (2005) on the website of the organiser, the DSB. The tenth Brunel Awards (2008) on the website of the organiser, the ÖBB. 1985 establishments in the United Kingdom Awards established in 1985 British awards Architecture awards Design awards Rail infrastructure in the United Kingdom Rail transport industry awards Isambard Kingdom Brunel Awards disestablished in 2014
Brunel Award
Engineering
276
22,040,831
https://en.wikipedia.org/wiki/Plucking%20post
A plucking post is a raised structure such as a tree stump which is used regularly by a bird of prey to dismember its prey, removing feathers and various other inedible parts before eating it. Purpose The elevated nature of the post allows for a safer landing with the heavy load of the prey, as well as providing a good vantage point from which to scan for other predators while the bird is vulnerable, involved in the relatively complex process of plucking and feeding on its prey. Many owls use plucking posts for prey that has been caught on the ground. Barred owls often use old nests for the purpose. Plucking posts are used by barn owls, which hunt by flying low and slowly over an area of open ground, hovering over spots that conceal potential prey. The barn owl feeds primarily on small vertebrates, particularly rodents. The common buzzard is another user of plucking posts and has an even more varied diet than the barn owl. The sparrowhawk flies low over the ground, skimming hedges and fences, but staying close to cover so that it can rapidly pounce on its victims. In woodland its agility enables it to fly swiftly between the trunks and branches. In New Zealand the New Zealand falcon takes its catch to a plucking post to dislocate the bird's neck using the notch on its bill that all falcons have. It then plucks the feathers before eating the entire bird. Plucking posts are ideal places for setting up bird hides, thus allowing close observation of bird of prey feeding behaviour. Function The post provides a firm surface for an effective grip by the bird's talons and sometimes crevices for helping with the mechanical separation of the prey. Natural tree stumps and man-made structures such as straining posts and fence posts may be used. Boulders may also be used, especially if they have a carpet-like covering of moss or are cracked or ribbed. Bird pellets are often found on or around plucking posts, composed of the indigestible items that were consumed by the predator. Plucking posts, surrounded by feathers and fur, may indicate that a raptor nesting site is nearby, and these may be mainly used during the breeding season. Scientists can use the evidence of plucking posts to provide information about the feeding behaviour of the relevant raptors. It has also been suggested that faeces marks and plucking may represent a widespread method of signalling current breeding activity and territory occupancy to conspecifics. In secure or difficult surroundings the plucking post may be at ground level. Notes References Lynch, Wayne. (2007). Owls of the United States and Canada: a complete guide to their biology and behavior. JHU Press. External links Plucking post images Sparrowhawks and plucking posts Birds of prey True hawks Bird feeding Birds
Plucking post
Biology
574
14,800,628
https://en.wikipedia.org/wiki/Ectodysplasin%20A%20receptor
Ectodysplasin A receptor (EDAR) is a protein that in humans is encoded by the EDAR gene. EDAR is a cell surface receptor for ectodysplasin A, which plays an important role in the development of ectodermal tissues such as the skin. It is structurally related to members of the TNF receptor superfamily. Function EDAR and other genes provide instructions for making proteins that work together during embryonic development. These proteins form part of a signaling pathway that is critical for the interaction between two cell layers, the ectoderm and the mesoderm. In the early embryo, these cell layers form the basis for many of the body's organs and tissues. Ectoderm-mesoderm interactions are essential for the proper formation of several structures that arise from the ectoderm, including the skin, hair, nails, teeth, and sweat glands. Clinical significance Mutations in this gene have been associated with hypohidrotic ectodermal dysplasia, a disorder characterized by a lower density of sweat glands. Derived EDAR allele A derived G-allele point mutation (SNP) with pleiotropic effects in EDAR, 370A or rs3827760, is found in ancient and modern East Asians, North Asians, Southeast Asians, Nepalese, and Native Americans but is not common in African or European populations. Experimental research in mice has linked the derived allele to a number of traits, including greater hair shaft diameter, more numerous sweat glands, smaller mammary fat pad, and increased mammary gland density. A 2008 study stated that EDAR is a genetic determinant of hair thickness, and also contributed to variations in hair thickness among Asian populations. Derived variants of EDAR are associated with multiple facial and dental characteristics, such as shovel-shaped incisors. This mutation is also implicated in ear morphology differences and reduced chin protrusion. A 2013 study suggested that the EDAR variant (370A) arose about 35,000 years ago in central China, a period during which the region was quite warm and humid. A subsequent study from 2021, based on ancient DNA samples, has suggested that the derived variant became dominant among Ancient Northern East Asians shortly after the Last Glacial Maximum in Northeast Asia, around 19,000 years ago. Ancient remains from Northern East Asia, such as the Tianyuan Man (40,000 years old) and the AR33K (33,000 years old) specimen, lacked the derived EDAR allele, while ancient East Asian remains after the LGM carry the derived EDAR allele. It has been hypothesized that natural selection favored this allele during the last ice age in a population of people living in isolation in Beringia, as it may play a role in the synthesis of Vitamin D-rich breast milk in dark environments. One study suggested that because the EDAR mutation arose in a cool and dry environment, it may have been adaptive by increasing skin lubrication, thus reducing dryness in exposed facial structures. The frequency of 370A is most highly elevated in modern North Asian and East Asian populations, followed by Native American populations, but is virtually absent in other populations around the world. In a study of 222 Korean and 265 Japanese subjects, the 370A mutation was found in 86.9% of Korean (Busan) and 77.5% of Japanese (Tokyo) subjects. Many Native Americans today have significant European admixture and Europeans lack this EDAR variant entirely, so it is likely that the occurrence of 370A among Native Americans was originally much higher prior to the European colonization of the Americas.
The derived G-allele is a variant of the ancestral A-allele, the version carried by earlier hominids and found in most modern non-East Asian and non-Native American populations; the derived allele has been found in 100% of the Native American skeletal remains studied, across all Native American haplogroups, dating from before contact with populations from Africa, Europe, or Asia. The derived allele was present in both the Tibeto-Burman (Magar and Newar) and Indo-European (Brahmin) populations of Nepal. The highest 1540C allele frequency was observed in Magar (71%), followed by Newar (30%) and Brahmin (20%). Half of the ancient DNA samples (7,900-7,500 BP) from Motala, Sweden, two samples (3300–3000 BC) from the Afanasevo culture and one Scythian sample (400–200 BC) were found to carry the rs3827760 mutation. According to a 2018 study, several ancient DNA samples from the Americas, including USR1 from the Upward Sun River site, Anzick-1, and the 9,600 BP individual from Lapa do Santo, were found not to carry the derived allele. This suggests that the increased frequency of the derived allele occurred independently in both East Asia and the Americas. A 2021 study analyzed the DNA of 6 Jomon remains from Japan and found that none of them carried the derived EDAR allele that is fixed in modern East Asian populations. See also Ectodysplasin A2 receptor References Further reading External links GeneReview/NIH/UW entry on Hypohidrotic Ectodermal Dysplasia Proteins Cell-surface receptors Ectoderm TNF receptor family
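A small sketch of how carrier percentages such as those quoted above can be related to an allele frequency under Hardy-Weinberg assumptions; treating the 86.9% and 77.5% figures as the fraction of subjects carrying at least one copy of 370A is an assumption made here purely for illustration, not a statement from the cited study:

import math

# Under Hardy-Weinberg equilibrium, the fraction of subjects carrying at least one
# copy of an allele with frequency q is 1 - (1 - q)^2, so q = 1 - sqrt(1 - carriers).
def allele_freq_from_carriers(carriers: float) -> float:
    return 1.0 - math.sqrt(1.0 - carriers)

for label, carriers in [("Korean (Busan)", 0.869), ("Japanese (Tokyo)", 0.775)]:
    q = allele_freq_from_carriers(carriers)
    print(f"{label}: carrier fraction {carriers:.1%} -> allele frequency ~ {q:.2f}")
# Prints roughly 0.64 and 0.53; if the quoted percentages are instead allele
# frequencies or genotype counts, the calculation would of course differ.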
Ectodysplasin A receptor
Chemistry
1,124
4,682,452
https://en.wikipedia.org/wiki/Motivational%20therapy
Motivational therapy (or MT) is a combination of humanistic treatment and enhanced cognitive-behavioral strategies, designed to treat substance use disorders. It is similar to motivational interviewing and motivational enhancement therapy. Method The focus of motivational therapy is to encourage a patient to develop a negative view of their substance use (contemplation), along with a desire to change their behavior (determination to change). A motivational therapist does not explicitly advocate change and tends to avoid directly contradicting their patient, but instead expresses empathy, rolls with resistance, and supports self-efficacy. Relapses in addictive behaviors are part of the treatment and are not considered a step back or a failure to advance in treatment. Often, a methadone or similar program is used in conjunction with motivational therapy. Some suggest that the success of motivational therapy is highly dependent on the quality of the therapist involved and that, like all therapies, it has no guaranteed result. Others explain the frequent successes of motivational therapy by noting that the patient is the ultimate source of change, choosing to reduce their dependency on drugs. Motivational therapies are focused specifically on a person's needs, or on what their problems may be. Sessions are usually short the first time a patient is seen, but session length can vary over the next few sessions. During these sessions the therapist may use a range of methods and techniques. Techniques consist of: brief solution-focused therapy, cognitive behavioural therapy, schema-focused therapy, interpersonal therapy, compassion-focused therapy, compassionate mind training, and hypnosis. History First publicized by Miller and Rollnick in 1991, motivational therapy is now seen as a highly effective treatment strategy for substance use disorders, especially in the case of opiate and euphoric-enhancement drugs, where users tend to resist traditional negative reinforcement strategies. Motivational Therapy was brought to public awareness by William Miller in a 1983 article published in Behavioural Psychotherapy. In 1991, Miller and Stephen Rollnick expanded on the fundamental approaches and concepts, while giving more detailed descriptions of procedures in the clinical setting. Miller later defined it as a directive, client-centered counseling style for eliciting behavior change by helping clients to explore and resolve ambivalence. Compared with non-directive counseling, Motivational Therapy is more focused and goal-directed. The examination and resolution of ambivalence is its central purpose, and the counselor is intentionally directive in pursuing this goal. Since Miller and Rollnick, other psychologists have introduced models and techniques intended to be used within motivational therapy to help with substance use. Carlo DiClemente introduced models that linked motivation with change, proposing the Stages of Change Model and using it to explain relapse and to frame the struggle of addiction as a matter of behavior change.
The model states seven different stages of change, with a brief description of each stage: Precontemplation: not ready to change; Contemplation: thinking about change; Preparation: getting ready to make a change, planning and commitment; Action: making the change, implementing the plan, taking the action; Maintenance: sustaining behavior change until integrated into lifestyle, maintaining, integrating; Relapse/recycling: slipping back to previous behavior and re-entering the cycle of change; Termination: leaving the cycle of change. The models, along with the techniques formulated by Rollnick and Miller, have helped create a client-driven form of therapy that has been known to help clients with substance use, as well as athletes of different calibers, achieve success. Motivational Therapy was designed to be less confrontational than other therapies that encourage clients to realize that they have a problem that they need to confront in order to change. MT is different from those therapies that: argue that the person has a problem and needs to change; offer direct advice or prescribe solutions to the problem without the person's permission or without actively encouraging the person to make his or her own choices; use an authoritative/expert stance leaving the client in a passive role; do most of the talking, or function as a unidirectional information delivery system; impose a diagnostic label; or behave in a punitive or coercive manner. The aforementioned therapy techniques are known to violate the essential spirit of motivational therapy. MT is designed to be an interpersonal style of therapy that is not restricted to formal counseling settings. It focuses on the understanding of what initiates change while utilizing a guiding philosophy, and fosters a balance of components that are both directed and client-centered. Intervention Motivational intervention is described as a directive, patient-centered counseling style that enhances motivation for change by helping patients clarify and resolve ambivalence about behavior change. This type of therapy helps patients refocus on their goals in life and restructure the important things in their life. Motivational problems are increasing in addiction treatment settings, as more patients are identified by early interventions, and are court-ordered, ambivalent, and unmotivated. The earlier the intervention occurs, the less the motivation. Early intervention allows people to set realistic goals for their recovery. Recovery can take a while, so it is ideal that the patient receives therapy as soon as possible; the sooner the better, because it allows the patient to have confidence in the recovery process and the help that they are receiving. Among the most motivating and evidence-based interventions for change are those based on the principles of the Transtheoretical Model of Prochaska & DiClemente (1983). Substance use disorders Motivational therapy is helpful not only to the person using substances but also to their family. There has been an equally growing understanding and concern not only for people who use substances but also for their family and friends. Current literature assessments have consistently identified three main findings: (1) involvement of family members during the pre-treatment phase significantly improves engagement of people who use substances in treatment; (2) involvement of the family also improves retention in treatment; and (3) long-term outcomes are more positive when families and/or social networks are components of the treatment approach.
Within motivational therapy, specific models have been introduced relating to various reasons for treatment. The Systematic Motivational Therapy (SMT) Model is used for treatment of substance use. The emphasis of this model is the focus on family relationships. This model shows not only the happiness and appreciation of the family in these relationships but also the complications and ambivalent relationships that come with substance use. There are two distinct versions of the SMT model. Version one of the model includes the family approach towards substance use, emphasizing four different principles: assessment, detoxification, relapse prevention, and rehabilitation. When being addressed, the entire family is present and attentive. Version 2 of the SMT model uses motivational interviewing approaches and combines these with family systems by using five basic principles that are critical in shaping therapist behavior: expressing empathy about the patient's condition(s), developing discrepancies regarding the patient's beliefs about his or her behavior, avoiding arguments about continued substance use, rolling with resistance to change, and supporting patient self-efficacy regarding decisions about behavior change. Differences from other therapies Although very often used in similar contexts, motivational therapy, motivational interviewing and motivational enhancement theory/therapy have their differences. Motivational interviewing (MI) is similar to motivational therapy in the sense that it attempts not to create change within an individual but to give foundation and support to the change the individual finds within themselves. As a treatment for individuals with all types of substance use disorders, motivational interviewing therapists focus on trying to erase any type of ambivalence the individual may have towards their use. Similar to MET, motivational interviewing finds 'change talk' very important, and the clinician interacts with the patient through open-ended questions, affirmations, reflections, and summaries. There are three key elements that build the foundation of motivational interviewing: collaboration, evocation and autonomy. Evocation is expressed through the clinician's responsibility to "draw out" the opinions and commitment to change of the client, rather than suggesting or imposing ideas. The client and the therapist, through collaboration, work together to build a trusting relationship, as opposed to the therapist taking the expert or higher role between the two. While Motivational Therapy is a method to treat substance use, Motivational Enhancement Therapy (MET) is also a very common way to treat alcohol use disorder. MET is very focused on the individual or patient taking responsibility for their use and speaking about the actions needed to evoke change in their life. Through this therapy, patients learn alternative routes to deal with such a huge change in their lifestyle. Similar to MT, therapists attempt throughout MET to evoke a feeling of optimism within patients, but unlike motivational therapy, therapists are very clear on their advice and suggestions for change. Rather than taking a back seat and just listening to their patients' thoughts, therapists of MET are more vocal in their feedback towards patient improvement. Like MT, there are five stages which set the stage for successful MET (in order, from beginning to end): pre-contemplation, contemplation, determination, action, maintenance.
If the change is not permanently successful, there is a sixth stage to work through: relapse. References American Psychological Associates 2003. Authors: Burke, Brian L.; Arkowitz Hal; and Menchola, Marisa. The Efficacy of Motivational Interviewing: A Meta-Analysis of Controlled Clinical Trials. Retrieved April 9, 2006. Advances in Psychiatric Treatment, Volume 9 (pp. 280–288). Author Luty, Jason. What works in drug addiction. Retrieved October 24, 2020. American Psychological Associates 2004. Authors: Miller, William R.; Yahne, Carolina E.; Moyers, Theresa B.; Martinez James; and Pirritano, Matthew. A Randomised Trial of Methods to Help Clinicians Learn Motivational Interviewing. Retrieved April 9, 2006. DiClemente C. Motivational enhancement therapy. Program and abstracts of the American Society of Addiction Medicine 2003 The State of the Art in Addiction Medicine; October 30 – November 1, 2003; Washington, DC. Session I. Miller WR, Rollnick S. What Is Motivational Interviewing? Behavioural and Cognitive Psychotherapy. 1995; 23, 325–334. Miller WR, Rollnick S. Motivational Interviewing: Preparing People for Change, 2nd edition. New York: Guilford Press; 2002. https://web.archive.org/web/20120305174039/http://www.stephenrollnick.com/index.php/all-commentary/64-what-is-motivational-interviewing Advances in Psychiatric Treatment, Volume 9 (pp. 280–288). Author Luty, Jason. What works in drug addiction? Retrieved April 9, 2006. Elizabeth Howell, MD. (2004). Motivation Therapy. Medscape Today. William R. Miller, PhD (2009). An Overview of Motivational Interviewing. MI. Edwards and Steinglass, 1995; Miller et al., 1999; O'Farrell and Fals-Stewart, 2003; Rowe and Liddle, 2003; Stanton and Heath, 2005; Thomas and Corcoran, 2001. Motivation Psychotherapy by type Substance-related disorders
Motivational therapy
Biology
2,407
70,219,542
https://en.wikipedia.org/wiki/Edgelord
An edgelord is someone, typically on the Internet, who tries to impress or shock by posting exaggeratedly nihilistic or extremist opinions. According to the Merriam-Webster.com Dictionary, the first known usage with this meaning was in 2015, and the word was added to Webster's, with a usage example, in September 2023. Edgelords were characterised by author and journalist Rachel Monroe in her account of criminal behaviour, Savage Appetites. The term is frequently associated with the forum site 4chan. The renegade rhetoric of the edgelord is often intentionally employed by the far-right to troll leftist targets. See also Épater la bourgeoisie Schadenfreude Sealioning Shock jock Shock site References Internet terminology Internet trolling Pejorative terms for people
Edgelord
Technology
160
45,831
https://en.wikipedia.org/wiki/Tetanus
Tetanus (), also known as lockjaw, is a bacterial infection caused by Clostridium tetani and characterized by muscle spasms. In the most common type, the spasms begin in the jaw and then progress to the rest of the body. Each spasm usually lasts for a few minutes. Spasms occur frequently for three to four weeks. Some spasms may be severe enough to fracture bones. Other symptoms of tetanus may include fever, sweating, headache, trouble swallowing, high blood pressure, and a fast heart rate. The onset of symptoms is typically 3 to 21 days following infection. Recovery may take months; about 10% of cases prove to be fatal. C. tetani is commonly found in soil, saliva, dust, and manure. The bacteria generally enter through a break in the skin, such as a cut or puncture wound caused by a contaminated object. They produce toxins that interfere with normal muscle contractions. Diagnosis is based on the presenting signs and symptoms. The disease does not spread between people. Tetanus can be prevented by immunization with the tetanus vaccine. In those who have a significant wound and have had fewer than three doses of the vaccine, both vaccination and tetanus immune globulin are recommended. The wound should be cleaned, and any dead tissue should be removed. In those who are infected, tetanus immune globulin, or, if unavailable, intravenous immunoglobulin (IVIG) is used. Muscle relaxants may be used to control spasms. Mechanical ventilation may be required if a person's breathing is affected. Tetanus occurs in all parts of the world but is most frequent in hot and wet climates where the soil has a high organic content. In 2015, there were about 209,000 infections and about 59,000 deaths globally. This is down from 356,000 deaths in 1990. In the US, there are about 30 cases per year, almost all of which were in people who had not been vaccinated. An early description of the disease was made by Hippocrates in the 5th century BC. The cause of the disease was determined in 1884 by Antonio Carle and Giorgio Rattone at the University of Turin, and a vaccine was developed in 1924. Signs and symptoms Tetanus often begins with mild spasms in the jaw muscles—also known as lockjaw. Similar spasms can also be a feature of trismus. The spasms can also affect the facial muscles, resulting in an appearance called risus sardonicus. Chest, neck, back, abdominal muscles, and buttocks may be affected. Back muscle spasms often cause arching, called opisthotonus. Sometimes, the spasms affect muscles utilized during inhalation and exhalation, which can lead to breathing problems. Prolonged muscular action causes sudden, powerful, and painful contractions of muscle groups, called tetany. These episodes can cause fractures and muscle tears. Other symptoms include fever, headache, restlessness, irritability, feeding difficulties, breathing problems, burning sensation during urination, urinary retention, and loss of stool control. Even with treatment, about 10% of people who contract tetanus die. The mortality rate is higher in unvaccinated individuals, and in people over 60 years of age. Incubation period The incubation period of tetanus may be up to several months but is usually about ten days. In general, the farther the injury site is from the central nervous system, the longer the incubation period. However, shorter incubation periods will have more severe symptoms. In trismus nascentium (i.e. neonatal tetanus), symptoms usually appear from 4 to 14 days after birth, averaging about 7 days. 
On the basis of clinical findings, four different forms of tetanus have been described. Generalized tetanus Generalized tetanus is the most common type of tetanus, representing about 80% of cases. The generalized form usually presents with a descending pattern. The first sign is trismus or lockjaw, then facial spasms (called risus sardonicus), followed by stiffness of the neck, difficulty in swallowing, and rigidity of pectoral and calf muscles. Other symptoms include elevated temperature, sweating, elevated blood pressure, and episodic rapid heart rate. Spasms may occur frequently and last for several minutes, with the body shaped into a characteristic form called opisthotonos. Spasms continue for up to four weeks, and complete recovery may take months. Neonatal tetanus Neonatal tetanus (trismus nascentium) is a form of generalized tetanus that occurs in newborns, usually those born to mothers who themselves have not been vaccinated. If the mother has been vaccinated against tetanus, the infants acquire passive immunity, and are thus protected. It usually occurs through infection of the unhealed umbilical stump, particularly when the stump is cut with a non-sterile instrument. As of 1998, neonatal tetanus was common in many developing countries, and was responsible for about 14% (215,000) of all neonatal deaths. In 2010, the worldwide death toll was approximately 58,000 newborns. As the result of a public health campaign, the death toll from neonatal tetanus was reduced by 90% between 1990 and 2010, and by 2013, the disease had been largely eliminated from all but 25 countries. Neonatal tetanus is rare in developed countries. Local tetanus Local tetanus is an uncommon form of the disease, in which people have persistent contraction of muscles in the same anatomic area as the injury. The contractions may persist for many weeks before gradually subsiding. Local tetanus is generally milder; only about 1% of cases are fatal, but it may precede the onset of generalized tetanus. Cephalic tetanus Cephalic tetanus is the rarest form of the disease (0.9–3% of cases), and is limited to muscles and nerves in the head. It usually occurs after trauma to the head area, including: skull fracture, laceration, eye injury, dental extraction, and otitis media, but it has been observed from injuries to other parts of the body. Paralysis of the facial nerve is most frequently implicated, which may cause lockjaw, facial palsy, or ptosis, but other cranial nerves can also be affected. Cephalic tetanus may progress to a more generalized form of the disease. Due to its rarity, clinicians may be unfamiliar with the clinical presentation, and may not suspect tetanus as the illness. Treatment can be complicated, as symptoms may be concurrent with the initial injury that caused the infection. Cephalic tetanus is more likely than other forms of tetanus to be fatal, with the progression to generalized tetanus carrying a 15–30% case fatality rate. Cause Tetanus is caused by the tetanus bacterium, Clostridium tetani. The disease is an international health problem, as C. tetani endospores are ubiquitous. Endospores can be introduced into the body through a puncture wound (penetrating trauma). Due to C. tetani being an anaerobic bacterium, it and its endospores thrive in environments that lack oxygen, such as a puncture wound. With the changes in oxygen levels, the turkey drumstick-shaped endospore can quickly spread. The disease occurs almost exclusively in people who are inadequately immunized. 
It is more common in hot, damp climates with soil rich in organic matter. Manure-treated soils may contain spores, as they are widely distributed in the intestines and feces of many animals, such as horses, sheep, cattle, dogs, cats, rats, guinea pigs, and chickens. In agricultural areas, a significant number of human adults may harbor the organism. The spores can also be found on skin surfaces and in contaminated heroin. Rarely, tetanus can be contracted through surgical procedures, intramuscular injections, compound fractures, and dental infections. Animal bites can transmit tetanus. Tetanus is often associated with rust, especially rusty nails. Although rust itself does not cause tetanus, objects that accumulate rust are often found outdoors or in places that harbor soil bacteria. Additionally, the rough surface of rusty metal provides crevices for dirt containing C. tetani, while a nail affords a means to puncture the skin and deliver endospores deep within the body at the site of the wound. An endospore is a non-metabolizing survival structure that begins to metabolize and cause infection once in an adequate environment. Hence, stepping on a nail (rusty or not) may result in a tetanus infection, as the low-oxygen (anaerobic) environment may exist under the skin, and the puncturing object can deliver endospores to a suitable environment for growth. It is a common misconception that rust itself is the cause; a related misconception is that a puncture from a rust-free nail is not a risk. Pathophysiology Tetanus neurotoxin (TeNT) binds to the presynaptic membrane of the neuromuscular junction, is internalized, and is transported back through the axon until it reaches the central nervous system. Here, it selectively binds to and is transported into inhibitory neurons via endocytosis. It then leaves the vesicle for the neuron cytosol, where it cleaves vesicle associated membrane protein (VAMP) synaptobrevin, which is necessary for membrane fusion of small synaptic vesicles (SSV's). SSV's carry neurotransmitter to the membrane for release, so inhibition of this process blocks neurotransmitter release. Tetanus toxin specifically blocks the release of the neurotransmitters GABA and glycine from inhibitory neurons. These neurotransmitters keep overactive motor neurons from firing and also play a role in the relaxation of muscles after contraction. When inhibitory neurons are unable to release their neurotransmitters, motor neurons fire out of control, and muscles have difficulty relaxing. This causes the muscle spasms and spastic paralysis seen in tetanus infection. The tetanus toxin, tetanospasmin, is made up of a heavy chain and a light chain. There are three domains, each of which contributes to the pathophysiology of the toxin. The heavy chain has two of the domains. The N-terminal side of the heavy chain helps with membrane translocation, and the C-terminal side helps the toxin locate the specific receptor site on the correct neuron. The light chain domain cleaves the VAMP protein once it arrives in the inhibitory neuron cytosol. There are four main steps in tetanus's mechanism of action: binding to the neuron, internalization of the toxin, membrane translocation, and cleavage of the target VAMP. Neurospecific binding The toxin travels from the wound site to the neuromuscular junction through the bloodstream, where it binds to the presynaptic membrane of a motor neuron. 
The heavy chain C-terminal domain aids in binding to the correct site, recognizing and binding to the correct glycoproteins and glycolipids in the presynaptic membrane. The toxin binds to a site that will be taken into the neuron as an endocytic vesicle that will travel down the axon, past the cell body, and down the dendrites to the dendritic terminal at the spine and central nervous system. Here, it will be released into the synaptic cleft, and allowed to bind with the presynaptic membrane of inhibitory neurons in a similar manner seen with the binding to the motor neuron. Internalization Tetanus toxin is then internalized again via endocytosis, this time, in an acidic vesicle. In a mechanism not well understood, depolarization caused by the firing of the inhibitory neuron causes the toxin to be pulled into the neuron inside vesicles. Membrane translocation The toxin then needs a way to get out of the vesicle and into the neuron cytosol for it to act on its target. The low pH of the vesicle lumen causes a conformational change in the toxin, shifting it from a water-soluble form to a hydrophobic form. With the hydrophobic patches exposed, the toxin can slide into the vesicle membrane. The toxin forms an ion channel in the membrane that is nonspecific for Na+, K+, Ca2+, and Cl− ions. There is a consensus among experts that this new channel is involved in the translocation of the toxin's light chain from the inside of the vesicle to the neuron cytosol, but the mechanism is not well understood or agreed upon. It has been proposed that the channel could allow the light chain (unfolded from the low pH environment) to leave through the toxin pore, or that the pore could alter the electrochemical gradient enough, by letting in or out ions, to cause osmotic lysis of the vesicle, spilling the vesicle's contents. Enzymatic target cleavage The light chain of the tetanus toxin is zinc-dependent protease. It shares a common zinc protease motif (His-Glu-Xaa-Xaa-His) that researchers hypothesized was essential for target cleavage until this was more recently confirmed by experiment: when all zinc was removed from the neuron with heavy metal chelators, the toxin was inhibited, only to be reactivated when the zinc was added back in. The light chain binds to VAMP, and cleaves it between Gln76 and Phe77. Without VAMP, vesicles holding the neurotransmitters needed for motor neuron regulation (GABA and glycine) cannot be released, causing the above-mentioned deregulation of motor neurons and muscle tension. Diagnosis There are currently no blood tests for diagnosing tetanus. The diagnosis is based on the presentation of tetanus symptoms and does not depend upon isolation of the bacterium, which is recovered from the wound in only 30% of cases and can be isolated from people without tetanus. Laboratory identification of C. tetani can be demonstrated only by the production of tetanospasmin in mice. Having recently experienced head trauma may indicate cephalic tetanus if no other diagnosis has been made. The "spatula test" is a clinical test for tetanus that involves touching the posterior pharyngeal wall with a soft-tipped instrument and observing the effect. A positive test result is the involuntary contraction of the jaw (biting down on the "spatula"), and a negative test result would normally be a gag reflex attempting to expel the foreign object. 
A short report in The American Journal of Tropical Medicine and Hygiene states that, in a research study of affected subjects, the spatula test had a high specificity (zero false-positive test results) and a high sensitivity (94% of infected people produced a positive test). Prevention Unlike many infectious diseases, recovery from naturally acquired tetanus does not usually result in immunity. This is due to the extreme potency of the tetanospasmin toxin. Tetanospasmin will likely be lethal before it can provoke an immune response. Tetanus can be prevented by vaccination with tetanus toxoid. The CDC recommends that adults receive a booster vaccine every ten years, and standard care practice in many places is to give the booster to any person with a puncture wound who is uncertain of when they were last vaccinated, or if they have had fewer than three lifetime doses of the vaccine. The booster may not prevent a potentially fatal case of tetanus from the current wound, however, as it can take up to two weeks for tetanus antibodies to form. In children under the age of seven, the tetanus vaccine is often administered as a combined vaccine, DPT/DTaP vaccine, which also includes vaccines against diphtheria and pertussis. For adults and children over seven, the Td vaccine (tetanus and diphtheria) or Tdap (tetanus, diphtheria, and acellular pertussis) is commonly used. The World Health Organization certifies countries as having eliminated maternal or neonatal tetanus. Certification requires at least two years of rates of less than 1 case per 1,000 live births. In 1998 in Uganda, 3,433 tetanus cases were recorded in newborn babies; of these, 2,403 died. After a major public health effort, Uganda was certified as having eliminated maternal and neonatal tetanus in 2011. Post-exposure prophylaxis Tetanus toxoid can be given in case of suspected exposure to tetanus. In such cases, it can be given with or without tetanus immunoglobulin (also called tetanus antibodies or tetanus antitoxin). It can be given as intravenous therapy or by intramuscular injection. The guidelines for such events in the United States for people at least 11 years old (and not pregnant) are, in outline, as follows: for clean, minor wounds, a tetanus toxoid-containing vaccine is given if the person has received fewer than three prior doses (or the history is unknown) or if more than ten years have passed since the last dose; for all other wounds, the vaccine is given if more than five years have passed since the last dose, and tetanus immune globulin is added when the person has received fewer than three prior doses or has an unknown vaccination history. Treatment Mild tetanus Mild cases of tetanus can be treated with: Tetanus immunoglobulin (TIG), also called tetanus antibodies or tetanus antitoxin. It can be given as intravenous therapy or by intramuscular injection. Antibiotic therapy to reduce toxin production. Metronidazole intravenous (IV) is a preferred treatment. Benzodiazepines can be used to control muscle spasms. Options include diazepam and lorazepam, oral or IV. Severe tetanus Severe cases will require admission to intensive care. In addition to the measures listed above for mild tetanus: Human tetanus immunoglobulin injected intrathecally (which increases clinical improvement from 4% to 35%). Tracheotomy and mechanical ventilation for 3 to 4 weeks. Tracheotomy is recommended for securing the airway, because the presence of an endotracheal tube is a stimulus for spasm. Magnesium sulfate, as an intravenous infusion, to control spasm and autonomic dysfunction. Diazepam as a continuous IV infusion. The autonomic effects of tetanus can be difficult to manage (alternating hyper- and hypotension, hyperpyrexia/hypothermia), and may require IV labetalol, magnesium, clonidine, or nifedipine. Drugs, such as diazepam or other muscle relaxants, can be given to control the muscle spasms.
In extreme cases, it may be necessary to paralyze the person with curare-like drugs, and use a mechanical ventilator. To survive a tetanus infection, the maintenance of an airway and proper nutrition are required. A high caloric intake and at least 150 g of protein per day are often given in liquid form through a tube directly into the stomach (percutaneous endoscopic gastrostomy), or through a drip into a vein (parenteral nutrition). This high-caloric diet maintenance is required because of the increased metabolic strain brought on by the increased muscle activity. Full recovery takes 4 to 6 weeks because the body must regenerate destroyed nerve axon terminals. The antibiotic of choice is metronidazole. It can be given intravenously, by mouth, or by rectum. Penicillin is similarly effective, but some raise the concern that it may provoke spasms because it inhibits the GABA receptor, which is already affected by tetanospasmin. Epidemiology In 2013, tetanus caused about 59,000 deaths, down from 356,000 in 1990. Tetanus, notably the neonatal form, remains a significant public health problem in non-industrialized countries, with 59,000 newborns dying worldwide in 2008 as a result of neonatal tetanus. In the United States, from 2000 through 2007, an average of 31 cases were reported per year. Nearly all of the cases in the United States occur in unimmunized individuals, or individuals who have allowed their inoculations to lapse. In animals Tetanus is found primarily in goats and sheep. The following are clinical symptoms found in affected goats and sheep: extended head and neck, tail rigors (tail becomes rigid and straight), abnormal gait (walking becomes stiff and abnormal), arched back, stiffness of the jaw muscles, lockjaw, twitching of eyes, drooping eyelids, difficulty swallowing, difficulty or inability to eat and drink, abdominal bloat, and spasms (uncontrolled muscular contractions) before death. Death sometimes is due to asphyxiation, secondary to respiratory paralysis. History Tetanus was well known to ancient civilizations, who recognized the relationship between wounds and fatal muscle spasms. In 1884, Arthur Nicolaier isolated the strychnine-like toxin of tetanus from free-living, anaerobic soil bacteria. The etiology of the disease was further elucidated in 1884 by Antonio Carle and Giorgio Rattone, two pathologists of the University of Turin, who demonstrated the transmissibility of tetanus for the first time. They produced tetanus in rabbits by injecting pus from a person with fatal tetanus into their sciatic nerves, and testing their reactions while tetanus was spreading. In 1891, C. tetani was isolated from a human victim by Kitasato Shibasaburō, who later showed that the organism could produce disease when injected into animals and that the toxin could be neutralized by specific antibodies. In 1897, Edmond Nocard showed that tetanus antitoxin induced passive immunity in humans, and could be used for prophylaxis and treatment. Tetanus toxoid vaccine was developed by P. Descombey in 1924, and was widely used to prevent tetanus induced by battle wounds during World War II. Etymology The word tetanus comes from the Ancient Greek tetanos, meaning "taut" or "stretched", which is in turn derived from the Ancient Greek teinein, "to stretch". Research There is insufficient evidence that tetanus can be treated or prevented by vitamin C. This is at least partially because the historical trials that looked for a possible connection between vitamin C and improvement in tetanus patients were of poor quality.
See also Renshaw cell Tetanized state References External links Tetanus Information from Medline Plus Tetanus Surveillance -- United States, 1998-2000 (Data and Analysis) Bacterial diseases Vaccine-preventable diseases
Tetanus
Biology
4,807
63,598,831
https://en.wikipedia.org/wiki/Bebaru%20virus
Bebaru virus is an RNA virus in the genus Alphavirus. References External links Wikispecies https://wwwn.cdc.gov/arbocat/VirusDetails.aspx?ID=56&SID=9 https://www.atcc.org/products/all/VR-1240AF.aspx https://www.genome.jp/virushostdb/59305 Alphaviruses RNA viruses
Bebaru virus
Biology
100
37,433,049
https://en.wikipedia.org/wiki/Intraoperative%20MRI
Intraoperative magnetic resonance imaging (iMRI) is an operating room configuration that enables surgeons to image the patient via an MRI scanner while the patient is undergoing surgery, particularly brain surgery. iMRI reduces the risk of damaging critical parts of the brain and helps confirm that the surgery was successful or if additional resection is needed before the patient's head is closed and the surgery completed. Equipment and operating suite configuration Compared to other imaging types, high-field iMRI requires the additional cost of specialized operating suites, instrumentation and longer anesthesia and operating room time; however, published studies show use of iMRI increases physicians’ ability to detect residual tumor leading toward an improved rate of procedural success. iMRI is available in a range of strengths. Low-field units, less than 1 Tesla (T), have the advantage of small size, simpler operating theater preparation and portability but are disadvantaged by relatively poor image resolution. Higher field strengths, currently available in 1.5 and 3T options, provide better spatial and contrast resolution enabling surgeons to more accurately evaluate the findings on an image. High-field iMRI operating suites are configured in one of two ways. Both require that the MRI magnet be stored in an adjacent room. One configuration requires that the patient be moved to the magnet to obtain an image. The second configuration (only offered by IMRIS, Inc.) moves the MRI magnet to the patient via ceiling-mounted rails to obtain the image. The latter approach has the advantage of not moving the patient from the operating theater during the surgery and enhances workflow and safety in terms of airway control, monitoring and head fixation. Applications The most prevalent application for iMRI is neurosurgery, especially for the removal of brain tumors. The system is also used for interventional neurovascular procedures. By providing iMRI during neurosurgery, clinicians can distinguish between tumor tissue and normal tissue, minimize disturbance of healthy tissue or critical areas of the brain, evaluate and confirm their results and make adjustments during a procedure without moving the patient (in the case of the rail-mounted configuration). Published clinical evidence shows the higher percentage of tumor removed the better the outcome. Use of an iMRI suite makes it more likely that surgeons will remove the entire tumor than if surgery is performed in a conventional operating room where iMRI is not used. References External links An updated history of intraoperative MRI and outcomes Magnetic resonance imaging Surgery Neurosurgery
Intraoperative MRI
Chemistry
500
34,556,315
https://en.wikipedia.org/wiki/Whispering-gallery%20wave
Whispering-gallery waves, or whispering-gallery modes, are a type of wave that can travel around a concave surface. Originally discovered for sound waves in the whispering gallery of St Paul's Cathedral, they can exist for light and for other waves, with important applications in nondestructive testing, lasing, cooling and sensing, as well as in astronomy. Introduction Whispering-gallery waves were first explained for the case of St Paul's Cathedral circa 1878 by Lord Rayleigh, who revised a previous misconception that whispers could be heard across the dome but not at any intermediate position. He explained the phenomenon of travelling whispers with a series of specularly reflected sound rays making up chords of the circular gallery. Clinging to the walls, the sound should decay in intensity only as the inverse of the distance, rather than as the inverse square as in the case of a point source of sound radiating in all directions. This accounts for the whispers being audible all round the gallery. Rayleigh developed wave theories for St Paul's in 1910 and 1914. Fitting sound waves inside a cavity involves the physics of resonance based on wave interference; the sound can exist only at certain pitches, as in the case of organ pipes. The sound forms patterns called modes. Many other monuments have been shown to exhibit whispering-gallery waves, such as the Gol Gumbaz in Bijapur and the Temple of Heaven in Beijing. Acoustic waves Whispering-gallery waves for sound exist in a wide variety of systems. Examples include the vibrations of the whole Earth or stars. Such acoustic whispering-gallery waves can be used in nondestructive testing in the form of waves that creep around holes filled with liquid, for example. They have also been detected in solid cylinders and spheres, with applications in sensing, and visualized in motion on microscopic discs. Whispering gallery waves are more efficiently guided in spheres than in cylinders because the effects of acoustic diffraction (lateral wave spreading) are then completely compensated. Electromagnetic waves Whispering-gallery waves exist for light waves. They have been produced in microscopic glass spheres or tori and in soap bubbles, for example, with applications as optical resonators for lasing, optomechanical cooling, frequency comb generation and optical sensing. The light waves are guided around almost perfectly by total internal reflection, leading to Q factors in excess of 10^10 being achieved. This is far greater than the best values, about 10^4, that can be obtained in acoustics. Optical modes in a whispering gallery resonator experience some loss due to a mechanism similar to quantum tunneling, even in theoretically ideal conditions. This loss has been known from research on optical waveguide theory and is dubbed tunneling ray attenuation in the field of fiber optics. The Q factor is proportional to the decay time of the waves, which in turn is inversely proportional to both the surface scattering rate and the wave absorption in the medium making up the gallery. Whispering-gallery waves for light have been investigated in chaotic galleries, whose cross-sections deviate from a circle. Such waves have been used in quantum information applications. Whispering-gallery waves have also been demonstrated for other electromagnetic waves such as radio waves, microwaves, terahertz radiation, infrared radiation, ultraviolet waves and x-rays.
More recently, with the rapid development of microfluidic technologies, many integrated whispering-gallery-mode sensors, combining the portability of lab-on-chip devices with the high sensitivity of whispering-gallery-mode resonators, have emerged. The capabilities of efficient sample handling and multiplexed analyte detection offered by these systems have led to many biological and chemical sensing applications, especially for the detection of single particles or biomolecules. Other systems Whispering-gallery waves have been seen in the form of matter waves for neutrons and electrons, and they have been proposed as an explanation for vibrations of a single nucleus. Whispering gallery waves have also been observed in the vibrations of soap films as well as in the vibrations of thin plates. Analogies of whispering-gallery waves also exist for gravitational waves at the event horizon of black holes. A hybrid of waves of light and electrons known as surface plasmons has been demonstrated in the form of whispering-gallery waves, and likewise for exciton-polaritons in semiconductors. Galleries simultaneously containing both acoustic and optical whispering-gallery waves have also been made, exhibiting very strong mode coupling and coherent effects. Hybrid solid-fluid-optical whispering-gallery structures have been observed as well. See also Whispering gallery Optical ring resonator Resonator Architectural acoustics References External links Investigations of Whisper Gallery Mirrors for EUV and Soft X-Rays, T.Y. Hung and P.L. Hagelstein Acoustics Waves
Whispering-gallery wave
Physics
958
77,388,442
https://en.wikipedia.org/wiki/Friday%20the%2013th%3A%20Church%20of%20the%20Divine%20Psychopath
Friday the 13th: Church of the Divine Psychopath is a 2005 British horror novel written by Scott S. Phillips and published by Black Flame. A tie-in to the Friday the 13th series of American horror films, it is the first in a series of five Friday the 13th novels published by Black Flame and revolves around government operatives coming into conflict with a cult that worships undead killer Jason Voorhees. Plot Camp Crystal Lake, the hunting ground of undead killer Jason Voorhees, has been leased to the Ministry of the Heavenly Vessel, a fringe Christian group led by Father Eric Long. Long has discovered Jason lying dormant in Crystal Lake and plans on reviving him, deludedly believing Jason to be an avenging angel who judges and kills sinners at the behest of God. Long's congregants include Kelly Mills, a troubled twenty-six-year-old with a history of being abused both physically and sexually, including being gangraped as a child, and her friend Meredith Host, a closeted teenage lesbian who has a crush on Kelly. A few days after the Ministry moves into the camp, a group of government Operators set up nearby, having been assigned to locate and kill Jason. Walter Hobb, a member of the unit living in disgrace since his involvement in a meth lab raid that went awry, is convinced the mission is a Snipe hunt. Long uses electricity to resuscitate Jason, who murders several of Long's disciples, with Long dismissing the victims as sinners rightfully punished by Jason. Kelly flees the church and seeks aid from the Operators. Jason begins picking the Operators off one by one, assisted in his rampage by Long, who has ordered his followers to kill the Operators. One of the slain Operators is the group's leader and Hobb's best friend, Jeff Townsend. Meredith, distraught over Long's increasingly megalomaniacal behavior, the lecherous advances of Long's second-in-command, a disabled Marine named Curtis Rickles, and her belief her sexuality was the reason Jason murdered her parents, runs away from the camp in search of Jason, but is found and snapped back to her senses by Hobb and Kelly. After Long refuses to surrender, the remaining Operators lay siege to Camp Crystal Lake. Jason joins the fray, killing combatants on both sides; during the battle, Rickles sexually assaults Meredith and is shot by Hobb, who is unable to save Meredith from Jason. In the aftermath, the only ones left alive are Hobb, Kelly, Jason, and Long. Long, having missed the conflict due to passing out after abandoning his three wives and engaging in frenzied self-flagellation, denounces his cultists before supplicating himself before Jason to be "judged" by him; he is killed while declaring, "Praise God in all His wisdom." Hobb arms himself with a pair of grenade launchers, while Kelly, in a bid to lure Jason out into the open, strips to her underwear and prances through the ruins of Camp Crystal Lake. Jason takes the bait and chases Kelly into the cafeteria, where he is ambushed by Hobb. During their fight, Hobb knocks Jason into a pit where the Ministry had been dumping the dead, including Townsend. Hobb blows Jason and the mass grave up, recovers Jason's body parts and hockey mask to place in government custody, and drives off with Kelly. Publication Author Scott S. Phillips has stated he had "a great time" writing the book and that he was "pretty much left alone" while authoring it; the only parameter Black Flame had given him to follow was "to make it R-rated." 
However, Phillips has also declared, "After a truly unpleasant experience with the editor of my novel Friday the 13th: Church of the Divine Psychopath, I decided to take a stab at self-publishing, and I've never looked back." Black Flame "goofed up" and did not credit Phillips with the "S" initial he used to avoid being confused with another author named Scott Phillips. Phillips celebrated the book's release with a signing at the Dark Delicacies bookstore in Burbank, California, on August 20, 2005. Reception Nat Brehmer of Wicked Horror felt the novel was "pretty decent" with an intriguing premise and a "great" villain in the form of Father Eric Long. In a review written for Rue Morgue, Joel Harley praised the book, opining that it added "a new dimension to the franchise in a way that the movies could never have" and was "one of the franchise's most vibrant and exciting entries to date." Brehmer, in an expanded review written for Medium, reiterated that Church of the Divine Psychopath was "a crude, hyper-violent, exceptional splatterpunk horror novel" that, despite being "gleefully mean-spirited" with a "jet-black" sense of humor, did not shy away from serious and traumatic topics, which, in Brehmer's opinion, contributed to it being the best of the five Friday the 13th novels published by Black Flame. References External links 2005 British novels 2005 debut novels 2005 LGBTQ-related literary works 2000s horror novels 2000s LGBTQ novels Action novels Black Flame books British horror novels British LGBTQ novels Debut horror novels Domestic violence in fiction Fiction about casual sex Fiction about Christianity Fiction about child murder Fiction about gang rape Fiction about masturbation Fiction about mother–daughter relationships Fiction about polyamory Fiction about self-harm Friday the 13th (franchise) mass media Grief in fiction Juvenile delinquency in fiction Juvenile sexuality in books Lakes in fiction LGBTQ-related horror literature Novels about child abuse Novels about child sexual abuse Novels about cults Novels about disability Novels about drugs Novels about dysfunctional families Novels about friendship Novels about mass murder Novels about moving Novels about orphans Novels about rape Novels about revenge Novels about serial killers Novels about suicide Novels about the United States Marine Corps Novels based on films Novels set in abandoned buildings and structures Novels set in bookstores Novels set in churches Novels set in forests Novels set in New Jersey Novels set in Ohio Novels set in summer camps Novels set in the 2000s Novels set in Virginia Novels set in West Virginia Novels with lesbian themes Novels with multiple narrators Splatterpunk novels Supernatural novels Third-person narrative novels Works about atonement Works about LGBTQ and Christianity Works about single parent families Works about stalking Works about the illegal drug trade Works about veterans Works about widowhood Zombie novels
Friday the 13th: Church of the Divine Psychopath
Biology
1,332
73,404,288
https://en.wikipedia.org/wiki/Bego%C3%B1a%20Vitoriano
Begoña Vitoriano Villanueva (born 1967) is a Spanish applied mathematician and operations researcher whose work concerns the logistics of humanitarian aid and disaster relief. She is an associate professor in the Department of Statistics and Operational Research at the Complutense University of Madrid, and the president of the Spanish Statistics and Operations Research Society. Education and career Vitoriano, who is Spanish, was born in 1967. She studied mathematics and operations research at the Complutense University of Madrid. Despite difficulties caused by the death of her father in the first year of her studies, the need to support herself through private tutoring, and the birth of two children during her studies, she earned a bachelor's degree there in 1990 and completed her Ph.D. in 1994. She was an assistant professor in the Department of Statistics and Operational Research at the Complutense University of Madrid from 1990 to 1997. In 1995, she traveled to El Salvador as part of an international collaboration to set up a master's program there, and witnessed the devastation and poverty caused in part by the recently ended Salvadoran Civil War. From 1997 to 2006 she worked as an assistant and then associate professor in the Department of Industrial Organisation and Institute for Technological Research at Comillas Pontifical University in Madrid, a private Jesuit school that conflicted with her belief in public education but whose emphasis on social justice fit well with her research agenda. It was during this time that she changed her research focus from the management of electrical grids to disaster relief. She returned to Complutense University as an untenured associate professor in 2006, and was granted tenure in 2009. In 2021, she was elected president of the Spanish Statistics and Operations Research Society for a three-year term, beginning in 2022. Selected publications References External links Home page 1967 births Living people Spanish mathematicians Spanish women mathematicians Applied mathematicians Academic staff of the Complutense University of Madrid Academic staff of Comillas Pontifical University Complutense University of Madrid alumni
Begoña Vitoriano
Mathematics
407
39,928,510
https://en.wikipedia.org/wiki/LinguaSys
LinguaSys, Inc. was a company headquartered in Boca Raton, Florida. LinguaSys provided multilingual human language software and services to financial, banking, hospitality, customer relationship management, technology, forensics and telecommunications blue chip enterprises, and the government and military. History LinguaSys was co-founded by chief executive officer Brian Garr in Boca Raton, Florida, USA; Chief Technology Officer Vadim Berman in Melbourne, Australia; and Vice President of Development and Architecture Can Unal in Darmstadt, Germany, in 2010. CEO Brian Garr was formerly CTO of Globalink from 1995 to 1998 and is a recipient of the Smithsonian Institution's "Heroes in Technology" award for his work in machine translation. Billionaire Mark Cuban began investing in LinguaSys, Inc., in 2012. Also in 2012, LinguaSys partnered with Salesforce.com, adding multilingual text analytics abilities to the company's social marketing services. In 2014, LinguaSys made their technology available in a public cloud. In 2015, LinguaSys added NLUI Server, which enabled building Siri-like natural language applications rapidly in a variety of languages, to the products available in the public cloud. In August 2015, LinguaSys was acquired by Aspect Software. Products and services LinguaSys used interlingual natural language processing software to provide multilingual text, sentiment, relevance and conceptual understanding and analysis. LinguaSys trademarked its proprietary interlingual technology, called Carabao Linguistic Virtual Machine. LinguaSys' multilingual software solutions were customized by clients and used via SaaS and behind the firewall. LinguaSys was an IBM Business Partner. LinguaSys' multilingual technology was used on enterprise servers and consumer smartphones. LinguaSys developed an app, TGPhoto, which allowed the user to snap a photo of some text and view a translation in one of fifty languages. The software worked on Android and Blackberry smartphones. References External links 2010 establishments in Florida Companies based in Boca Raton, Florida Language software Software companies based in Florida Technology companies established in 2010 Defunct software companies of the United States
LinguaSys
Technology
450
14,763,333
https://en.wikipedia.org/wiki/BRD2
Bromodomain-containing protein 2 is a protein that in humans is encoded by the BRD2 gene. BRD2 is part of the Bromodomain and Extra-Terminal motif (BET) protein family that also contains BRD3, BRD4, and BRDT in mammals. Early descriptions demonstrated that the BRD2 gene product is a mitogen-activated kinase which localizes to the nucleus. The gene maps to the major histocompatibility complex (MHC) class II region on chromosome 6p21.3, but sequence comparison suggests that the protein is not involved in the immune response. Homology to the Drosophila gene female sterile homeotic suggests that this human gene may be part of a signal transduction pathway involved in growth control. Functions BRD2 has been implicated in cancer. BRD2 loss in mice causes obesity without diabetes for unknown reasons. BRD2 may have functional overlap with its close homolog BRD3. BRD2 function is blocked by BET inhibitors. Interactions BRD2 has been shown to interact with E2F2 and many transcription factors, including GATA1. References External links Further reading
BRD2
Chemistry
235
7,668,612
https://en.wikipedia.org/wiki/Marlborough%20gems
The Marlborough gems were a large collection of jewels (cameos and intaglios) assembled by several Dukes of Marlborough. The collection was composed of more than 730 carved gemstones, including garnets, sapphires, emeralds and many cameos. The most famous cameo, and the Duke's favourite, was 'The Marriage of Cupid and Psyche'. A comprehensive catalogue was published in 1870 by Nevil Story Maskelyne. He made impressions and electrotypes, now in the Beazley Archive in Oxford, which have been published. The Marlborough gems were sold by the 7th Duke of Marlborough at auction in 1875 to raise money for the maintenance of Blenheim Palace, the ancestral home. References Gemstones
Marlborough gems
Physics
149
14,795,437
https://en.wikipedia.org/wiki/Discoidin%20domain-containing%20receptor%202
Discoidin domain-containing receptor 2, also known as CD167b (cluster of differentiation 167b), is a protein that in humans is encoded by the DDR2 gene. Discoidin domain-containing receptor 2 is a receptor tyrosine kinase (RTK). Function RTKs play a key role in the communication of cells with their microenvironment. These molecules are involved in the regulation of cell growth, differentiation, and metabolism. In several cases the biochemical mechanism by which RTKs transduce signals across the membrane has been shown to be ligand-induced receptor oligomerization and subsequent intracellular phosphorylation. In the case of DDR2, the ligand is collagen, which binds to its extracellular discoidin domain and triggers autophosphorylation of the receptor. This autophosphorylation leads to phosphorylation of cytosolic targets as well as association with other molecules, which are involved in pleiotropic effects of signal transduction. DDR2 has been associated with a number of diseases including fibrosis and cancer. Structure RTKs have a tripartite structure with extracellular, transmembrane, and cytoplasmic regions. This gene encodes a member of a novel subclass of RTKs and contains a distinct extracellular region encompassing a factor VIII-like domain. Gene Alternative splicing in the 5' UTR of the DDR2 gene results in multiple transcript variants encoding the same protein. Interactions DDR2 has been shown to interact with SHC1 and phosphorylate Shp2. DDR2 also interacts with the integrins α1β1 and α2β1, promoting their adhesion to collagen. References Further reading Clusters of differentiation Tyrosine kinase receptors
Discoidin domain-containing receptor 2
Chemistry
369
6,033,314
https://en.wikipedia.org/wiki/Claviceps%20purpurea
Claviceps purpurea is an ergot fungus that grows on the ears of rye and related cereal and forage plants. Consumption of grains or seeds contaminated with the survival structure of this fungus, the ergot sclerotium, can cause ergotism in humans and other mammals. C. purpurea most commonly affects outcrossing species such as rye (its most common host), as well as triticale, wheat and barley. It affects oats only rarely. Life cycle An ergot kernel called Sclerotium clavus develops when a floret of flowering grass or cereal is infected by an ascospore of C. purpurea. The infection process mimics a pollen grain growing into an ovary during fertilization. Because infection requires access of the fungal spore to the stigma, plants infected by C. purpurea are mainly outcrossing species with open flowers, such as rye (Secale cereale) and Alopecurus. The proliferating fungal mycelium then destroys the plant ovary and connects with the vascular bundle originally intended for feeding the developing seed. The first stage of ergot infection manifests itself as a white soft tissue (known as Sphacelia segetum) producing sugary honeydew, which often drops out of the infected grass florets. This honeydew contains millions of asexual spores (conidia) which are dispersed to other florets by insects or rain. Later, the Sphacelia segetum converts into a hard, dry Sclerotium clavus inside the husk of the floret. At this stage, alkaloids and lipids (e.g. ricinoleic acid) accumulate in the Sclerotium. When a mature Sclerotium drops to the ground, the fungus remains dormant until proper conditions trigger its fruiting phase (onset of spring, rain period, a need for cool temperatures during winter, etc.). It germinates, forming one or several fruiting bodies with head and stipe, variously colored (resembling a tiny mushroom). In the head, threadlike sexual spores (ascospores) are formed in perithecia, which are ejected simultaneously when suitable grass hosts are flowering. Ergot infection causes a reduction in the yield and quality of grain and hay produced, and if infected grain or hay is fed to livestock it may cause a disease called ergotism. Polistes dorsalis, a species of social wasp, has been recorded as a vector of the spread of this particular fungus. During their foraging behavior, particles of the fungal conidia get bound to parts of this wasp's body. As P. dorsalis travels from source to source, it leaves the fungal infection behind. Insects, including flies and moths, have also been shown to carry conidia of Claviceps species, but whether insects play a role in spreading the fungus from infected to healthy plants is unknown. Intraspecific variations Early scientists observed Claviceps purpurea on Poaceae other than Secale cereale. In 1855, Grandclement described ergot on Triticum aestivum. For more than a century, scientists aimed to describe specialized species or specialized varieties within the species Claviceps purpurea. Claviceps microcephala Tul. (1853) Claviceps wilsonii Cooke (1884) Later scientists tried to determine host varieties such as Claviceps purpurea var. agropyri Claviceps purpurea var. purpurea Claviceps purpurea var. spartinae Claviceps purpurea var. wilsonii. Molecular biology has not confirmed this hypothesis but has distinguished three groups differing in their ecological specificity. G1—land grasses of open meadows and fields; G2—grasses from moist, forest, and mountain habitats; G3 (C. purpurea var. spartinae)—salt marsh grasses (Spartina, Distichlis).
Morphological criteria to distinguish different groups: The shape and the size of sclerotia are not good indicators because they strongly depend on the size and shape of the host floret. The size of the conidia can be a weak indication, but it must be borne in mind that, due to osmotic pressure, it varies significantly depending on whether the spores are observed in honeydew or in water. The sclerotial density can also be used, as the sclerotia of groups G2 and G3 float in water. The alkaloid composition is also used to differentiate the strains. Host range Pooideae Agrostis canina, Alopecurus myosuroides (G2), Alopecurus arundinaceus (G2), Alopecurus pratensis, Bromus arvensis, Bromus commutatus, Bromus hordeaceus (G2), Bromus inermis, Bromus marginatus, Elymus tsukushiense, Festuca arundinacea, Elymus repens (G1), Nardus stricta, Poa annua (G2), Phleum pratense, Phalaris arundinacea (G2), Poa pratensis (G1), Stipa. Arundinoideae Danthonia, Molinia caerulea. Chloridoideae Spartina, Distichlis (G3) Panicoideae Setaria Epidemiology Claviceps purpurea has been known to humankind for a long time, and its appearance has been linked to extremely cold winters that were followed by rainy springs. The sclerotial stage of C. purpurea, conspicuous on the heads of rye and other such grains, is known as ergot. Sclerotia germinate in spring after a period of low temperature. A temperature of 0-5 °C for at least 25 days is required. Water before the cold period is also necessary. Favorable temperatures for stroma production are in the range of 10-25 °C. Favorable temperatures for mycelial growth are in the range of 20-30 °C with an optimum at 25 °C. Sunlight has a chromogenic effect on the mycelium, producing intense coloration. Effects The disease cycle of the ergot fungus was first described in 1853, but the connection between ergot and epidemics among people and animals had already been reported in a scientific text in 1676. The ergot sclerotium contains high concentrations (up to 2% of dry mass) of the alkaloid ergotamine, a complex molecule consisting of a tripeptide-derived cyclol-lactam ring connected via amide linkage to a lysergic acid (ergoline) moiety, and other alkaloids of the ergoline group that are biosynthesized by the fungus. Ergot alkaloids have a wide range of biological activities including effects on circulation and neurotransmission. Ergotism is the name for sometimes severe pathological syndromes affecting humans or animals that have ingested ergot alkaloid-containing plant material, such as ergot-contaminated grains. Monks of the order of St. Anthony the Great specialized in treating ergotism victims with balms containing tranquilizing and circulation-stimulating plant extracts; they were also skilled in amputations. The common name for ergotism is "St. Anthony's Fire", in reference to the monks who cared for victims, as well as to symptoms such as severe burning sensations in the limbs. These are caused by effects of ergot alkaloids on the vascular system due to vasoconstriction of blood vessels, sometimes leading to gangrene and loss of limbs due to severely restricted blood circulation. The neurotropic activities of the ergot alkaloids may also cause hallucinations and attendant irrational behaviour, convulsions, and even death. Other symptoms include strong uterine contractions, nausea, seizures, and unconsciousness. Since the Middle Ages, controlled doses of ergot were used to induce abortions and to stop maternal bleeding after childbirth.
Ergot alkaloids are also used in products such as Cafergot (containing caffeine and ergotamine or ergoline) to treat migraine headaches. Ergot extract is no longer used as a pharmaceutical preparation. Ergot contains no lysergic acid diethylamide (LSD) but rather ergotamine, which is used to synthesize lysergic acid, an analog of and precursor for synthesis of LSD. Moreover, ergot sclerotia naturally contain some amounts of lysergic acid. Culture Potato dextrose agar, wheat seeds or oat flour are suitable substrates for growth of the fungus in the laboratory. Agricultural production of Claviceps purpurea on rye is used to produce ergot alkaloids. Biological production of ergot alkaloids is also carried out by saprophytic cultivations. Speculations During the Middle Ages, human poisoning due to the consumption of rye bread made from ergot-infected grain was common in Europe. These epidemics were known as Saint Anthony's fire, or ignis sacer. Gordon Wasson proposed that the psychedelic effects were the explanation behind the festival of Demeter at the Eleusinian Mysteries, where the initiates drank kykeon. Linnda R. Caporael posited in 1976 that the hysterical symptoms of young women that had spurred the Salem witch trials had been the result of consuming ergot-tainted rye. However, her conclusions were later disputed by Nicholas P. Spanos and Jack Gottlieb, after a review of the historical and medical evidence. Other authors have likewise cast doubt on ergotism having been the cause of the Salem witch trials. The Great Fear in France during the Revolution has also been linked by some historians to the influence of ergot. British author John Grigsby claims that the presence of ergot in the stomachs of some of the so-called 'bog-bodies' (Iron Age human remains from peat bogs of northeastern Europe, such as Tollund Man) reveals that ergot was once a ritual drink in a prehistoric fertility cult akin to the Eleusinian Mysteries cult of ancient Greece. In his book Beowulf and Grendel he argues that the Anglo-Saxon poem Beowulf is based on a memory of the quelling of this fertility cult by followers of Odin. He states that Beowulf, which he translates as barley-wolf, suggests a connection to ergot which in German was known as the 'tooth of the wolf'. An outbreak of violent hallucinations among hundreds of residents of Pont St. Esprit in 1951 in the south of France has also been attributed to ergotism. Shortly after the event, at least four people were declared dead, although some claim the total number of deaths to be five or seven. See also Ergot Smut (fungus) References External links Claviceps purpurea - Ergot Alkaloid Ergot article from North Dakota State University, 2002 PBS Secrets of the Dead: "The Witches Curse" (concerning the Salem trials and ergot) New England Journal of Medicine - Dopamine Agonists and the Risk of Cardiac-Valve Regurgitation Linnda Caporael's article "Ergotism: The Satan Loosed in Salem?" Clavicipitaceae Food microbiology Fungi of Europe Psychoactive fungi Natural sources of lysergamides Medicinal fungi Parasitic fungi Fungal plant pathogens and diseases Cereal diseases Barley diseases Rye diseases Triticale diseases Wheat diseases Abortifacients Soma (drink) Fungi described in 1823 Taxa named by Elias Magnus Fries Fungus species
Claviceps purpurea
Biology
2,489
18,787,145
https://en.wikipedia.org/wiki/Visual%20appearance
The visual appearance of objects is given by the way in which they reflect and transmit light. The color of objects is determined by the parts of the spectrum of (incident white) light that are reflected or transmitted without being absorbed. Additional appearance attributes are based on the directional distribution of reflected (BRDF) or transmitted light (BTDF) described by attributes like glossy, shiny versus dull, matte, clear, turbid, distinct, etc. Since "visual appearance" is a general concept that includes also various other visual phenomena, such as color, visual texture, visual perception of shape, size, etc., the specific aspects related to how humans see different spatial distributions of light (absorbed, transmitted and reflected, either regularly or diffusely) have been given the name cesia. It marks a difference (but also a relationship) with color, which could be defined as the sensation arising from different spectral compositions or distributions of light. Appearance of reflective objects The appearance of reflecting objects is determined by the way the surface reflects incident light. The reflective properties of the surface can be characterized by a closer look at the (micro)-topography of that surface. Structures on the surface and the texture of the surface are determined by typical dimensions between some 10 mm and 0.1 mm (the detection limit of the human eye is at ~0.07 mm). Smaller structures and features of the surface cannot be directly detected by the unaided eye, but their effect becomes apparent in objects or images reflected in the surface. Structures at and below 0.1 mm reduce the distinctness of image (DOI), structures in the range of 0.01 mm induce haze and even smaller structures affect the gloss of the surface. Definition diffusion, scattering: process by which the spatial distribution of a beam of radiation is changed in many directions when it is deviated by a surface or by a medium, without change of frequency of its monochromatic components. Basic types of light reflection Appearance of transmissive objects Terminology Reflective objects Reflectance factor, R Gloss reflectance factor, Rs Gloss (at least six types of gloss may be observed depending upon the character of the surface and the spatial (directional) distribution of the reflected light.) Specular gloss Distinctness of image gloss Sheen Reflection haze, H (for a specified specular angle), the ratio of (light) flux reflected at a specified angle (or angles) from the specular direction to the flux similarly reflected at the specular angle by a specified gloss standard. Transmissive objects Transmittance, T Haze (turbidity) Clarity See also Shading References R. S. Hunter, R. W. Harold: The Measurement of Appearance, 2nd Edition, Wiley-IEEE (1987) CIE No 38-1977: Radiometric and photometric characteristics of materials and their measurement CIE No 44-1979: Absolute methods for reflection measurements BRDF F. E. Nicodemus, et al., Geometric Considerations and Nomenclature for Reflectance, U.S. Dept. of Commerce, NBS Monograph 160 (1977) John C. Stover, Optical Scattering, Measurement and Analysis, SPIE Press (1995) Optics
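As a small illustration of the reflection haze ratio H defined in the terminology list above, the following Python sketch expresses it as a simple quotient. The function and variable names are invented for clarity and the example fluxes are arbitrary, so treat this as a reading aid rather than a standardized measurement procedure.

    def reflection_haze(off_specular_flux, standard_specular_flux):
        # H: flux reflected at a specified angle (or angles) away from the
        # specular direction, divided by the flux reflected at the specular
        # angle by a specified gloss standard.
        return off_specular_flux / standard_specular_flux

    # Arbitrary example readings from a gloss meter (same units for both fluxes).
    print(reflection_haze(off_specular_flux=2.5, standard_specular_flux=100.0))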
Visual appearance
Physics,Chemistry
641
1,585,406
https://en.wikipedia.org/wiki/Boarding%20pass
A boarding pass or boarding card is a document provided by an airline during airport check-in, giving a passenger permission to enter the restricted area of an airport (also known as the airside portion of the airport) and to board the airplane for a particular flight. At a minimum, it identifies the passenger, the flight number, the date, and scheduled time for departure. A boarding pass may also indicate details of the perks a passenger is entitled to (e.g., lounge access, priority boarding) and is thus presented at the entrance of such facilities to show eligibility. In some cases, flyers can check in online and print the boarding passes themselves. There are also codes that can be saved to an electronic device or displayed in the airline's app and scanned during boarding. A boarding pass may be required for a passenger to enter a secure area of an airport. Generally, a passenger with an electronic ticket will only need a boarding pass. If a passenger has a paper airline ticket, that ticket (or flight coupon) may be required to be attached to the boarding pass for the passenger to board the aircraft. For "connecting flights", a boarding pass is required for each new leg (distinguished by a different flight number), regardless of whether a different aircraft is boarded or not. The paper boarding pass (and ticket, if any), or portions thereof, are sometimes collected and counted for cross-check of passenger counts by gate agents, but more frequently are scanned (via barcode or magnetic strip) and returned to the passengers in their entirety. The standards for bar codes and magnetic stripes on boarding passes are published by the IATA. The bar code standard (Bar Coded Boarding Pass) defines the 2D bar code printed on paper boarding passes or sent to mobile phones for electronic boarding passes. The magnetic stripe standard (ATB2) expired in 2010. Most airports and airlines have automatic readers that will verify the validity of the boarding pass at the jetway door or boarding gate. This also automatically updates the airline's database to show the passenger has boarded and the seat is used, and that the checked baggage for that passenger may stay aboard. This speeds up the paperwork process at the gate. During security screenings, the personnel will also scan the boarding pass to authenticate the passenger. Once an airline has scanned all boarding passes presented at the gate for a particular flight and knows which passengers actually boarded the aircraft, its database system can compile the passenger manifest for that flight. Bar-codes BCBP (bar-coded boarding pass) is the name of the standard used by more than 200 airlines. BCBP defines the 2-dimensional (2D) bar code printed on a boarding pass or sent to a mobile phone for electronic boarding passes. BCBP was part of the IATA Simplifying the Business program, which issued an industry mandate for all boarding passes to be barcoded. This was achieved in 2010. Airlines and third parties use a barcode reader to read the bar codes and capture the data. Reading the bar code usually takes place in the boarding process but can also happen when entering the airport security checkpoints, while paying for items at the check-out tills of airport stores or trying to access airline lounges. The standard was originally published in 2005 by IATA and updated in 2008 to include symbologies for mobile phones and in 2009 to include a field for a digital signature in the mobile bar codes.
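To make the structure of the BCBP data concrete, the sketch below slices the fixed-width mandatory items of a single-leg bar-coded boarding pass in Python. The field widths follow the commonly cited layout of IATA Resolution 792 and the sample payload is entirely invented, so this should be read as an illustrative approximation of the standard rather than a reference implementation.

    def parse_bcbp_mandatory(data: str) -> dict:
        # Fixed-width slice of the mandatory items of a single-leg BCBP string.
        # Widths follow the commonly cited layout of IATA Resolution 792;
        # consult the published standard before relying on them.
        return {
            "format_code":       data[0],           # 'M' for the 2D bar code format
            "legs_encoded":      data[1],
            "passenger_name":    data[2:22].rstrip(),
            "eticket_indicator": data[22],
            "pnr_code":          data[23:30].strip(),
            "from_airport":      data[30:33],
            "to_airport":        data[33:36],
            "carrier":           data[36:39].strip(),
            "flight_number":     data[39:44].strip(),
            "julian_date":       data[44:47],        # day of the year of the flight
            "compartment":       data[47],
            "seat":              data[48:52].strip(),
            "sequence_number":   data[52:57].strip(),
            "passenger_status":  data[57],
            "variable_size":     data[58:60],        # hex size of the variable field
        }

    # Invented sample payload, built field by field so the widths line up.
    sample = (
        "M1"                    # format code + number of legs
        + "DOE/JANE".ljust(20)  # passenger name
        + "E"                   # electronic ticket indicator
        + "ABC123".ljust(7)     # booking reference (PNR code)
        + "LHR" + "JFK"         # origin and destination airports
        + "BA".ljust(3)         # operating carrier
        + "0117".ljust(5)       # flight number
        + "123"                 # Julian date (day 123 of the year)
        + "Y"                   # compartment code
        + "012A"                # seat number
        + "0001".ljust(5)       # check-in sequence number
        + "1"                   # passenger status
        + "00"                  # size of the variable-size field (hex)
    )
    print(parse_bcbp_mandatory(sample))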
Future developments of the standard will include a near field communication format. Security concerns In recent years concerns have been raised about the security of boarding pass bar-codes, the data they contain, and the PNR (Passenger Name Record) data that they link to. Some airline barcodes can be scanned by mobile phone applications to reveal names, dates of birth, source and destination airports and the PNR locator code, a 6-digit alphanumeric code also sometimes referred to as a booking reference number. This code plus the surname of the traveller can be used to log in to the airline's website, and access information on the traveller. In 2020, a photograph of a boarding pass posted by former Australian Prime Minister Tony Abbott on Instagram provided sufficient information to log in to Qantas's website. While not in and of itself problematic as the flight had happened in the past, the website (through its source code) unintentionally leaked private data not intended to be displayed directly, such as Abbott's passport number and Qantas's internal PNR remarks. Paper boarding passes Paper boarding passes are issued either by agents at a check-in counter, self-service kiosks, or by the airline's web check-in site. BCBP can be printed at the airport by an ATB (Automated Ticket & Boarding Pass) printer or a direct thermal printer, or by a personal inkjet or laser printer. The symbology for paper boarding passes is PDF417. IATA's Board of Governors' mandate stated that all the IATA member airlines would be capable of issuing BCBP by the end of 2008, and all boarding passes would contain the 2D bar code by the end of 2010. The BCBP standard was published in 2005. It has been progressively adopted by airlines: By the end of 2005, 9 airlines were BCBP capable; 32 by the end of 2006; 101 by the end of 2007; and 200 by the end of 2008. Mobile boarding passes Electronic boarding passes were 'the industry's next major technological innovation after e-ticketing'. According to SITA's Airline IT Trend Survey 2009, mobile BCBP accounted for 2.1% of use (vs. paper boarding passes), a figure forecast to rise to 11.6% in 2012. Overview Many airlines have moved to issuing electronic boarding passes, whereby the passenger checks in either online or via a mobile device, and the boarding pass is then sent to the mobile device as an SMS or e-mail. Upon completing an online reservation, the passenger can tick a box offering a mobile boarding pass. Most carriers offer two ways to get it: have one sent to a mobile device (via e-mail or text message) when checking in online, or use an airline app to check in, and the boarding pass will appear within the application. In many cases, a passenger with a smartphone can add their boarding pass to their primary digital wallet app, such as Google Wallet, Samsung Wallet, or Apple Wallet. This way the passenger does not need to open the airline's dedicated app, and shortly before the flight the boarding pass appears on their device's home screen. Furthermore, mobile boarding cards can be loaded into smart watches through the phones they are paired with. The mobile pass is equipped with the same bar code as a standard paper boarding pass, and it is completely machine readable. The gate attendant simply scans the code displayed on the phone. IATA's BCBP standard defines the three symbologies accepted for mobile phones: Aztec code, Datamatrix and QR code.
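As a rough illustration of how such a symbology is produced, the widely used third-party Python package qrcode can render a boarding-pass payload as a QR image in a couple of lines. The payload string and output filename below are hypothetical; an airline's production system would sign and format the data according to the BCBP standard rather than use a made-up string.

    import qrcode  # third-party package: pip install qrcode[pil]

    # Hypothetical BCBP payload; a real pass would come from the airline's system.
    payload = "M1DOE/JANE            EABC123 LHRJFKBA 0117 123Y012A0001 100"

    img = qrcode.make(payload)        # returns a PIL image of the QR symbol
    img.save("boarding_pass_qr.png")  # hypothetical output filename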
The United Nations International Telecommunication Union expected mobile phone subscribers to hit the 4 billion mark by the end of 2008. Airlines using mobile boarding passes In September 2006, All Nippon Airways first began offering mobile boarding passes in Japan. Today, most major carriers offer mobile boarding passes at many airports. Airlines that issue electronic boarding passes include: In Europe, Lufthansa was one of the first airlines to launch Mobile BCBP in April 2008. In the US, the Transportation Security Administration runs a pilot program of a Boarding Pass Scanning System, using the IATA BCBP standard. On October 15, 2008, the TSA announced that scanners would be deployed within a year and that scanning mobile BCBP would make it possible to better track wait times. The TSA keeps adding new pilot airports: Cleveland on October 23, 2008. On October 14, 2008, Alaska Airlines started piloting mobile boarding passes at Seattle Seatac Airport. On November 3, 2008, Air New Zealand launched the mpass, a boarding pass received on the mobile phone. On November 10, 2008, Qatar Airways launched their online check-in: passengers can have their boarding passes sent directly to their mobile phones. On November 13, 2008, American Airlines started offering mobile boarding passes at Chicago O'Hare Airport. On December 18, 2008, Cathay Pacific launched its mobile Check-in service, including the delivery of the barcode to the mobile phone. On February 24, 2009, Austrian Airlines began offering paperless boarding passes to customers on selected routes. On April 16, 2009, SAS joined the mobile boarding pass bandwagon. On May 26, 2009, Air China offered its customers the option of receiving a two-dimensional bar-code e-boarding pass on their mobile phone, with which they can go through security procedures at any channel at Beijing Airport Terminal 3, enabling a completely paperless check-in service. On October 1, 2009, Swiss introduced mobile boarding passes to its customers. On November 12, 2009, Finnair explained that "The mobile boarding pass system cuts passengers’ carbon footprint by removing the need for passengers to print out and keep track of a paper boarding pass". On March 15, 2010, United began to offer mobile boarding passes to customers equipped with smartphones. In July/August 2014, Ryanair became the latest airline to offer mobile boarding passes to customers equipped with smartphones. Benefits Practical: Travelers don’t always have access to a printer, while not all airlines automatically print boarding passes during check-in, so choosing a mobile boarding pass eliminates the hassle of stopping at a kiosk at the airport. Ecological: Issuing electronic boarding passes is much more environmentally friendly than constantly using paper for boarding passes. Drawbacks Using a mobile boarding pass is risky if one's phone battery runs out (rendering the boarding pass inaccessible) or if there are any problems reading the e-boarding pass. Using a mobile boarding pass can also be a challenge when traveling with multiple passengers on one reservation, because not all airline apps handle multiple mobile boarding passes. (However, some airlines, like Alaska Airlines, do allow users to switch between multiple boarding passes within their apps.) Some airlines (and even a few government authorities) may still require some paper portions of the boarding cards to be retained by staff. This is obviously not possible with a mobile boarding card.
Some airlines need to stamp a boarding card after performing document verification checks on some passengers (e.g. Ryanair). Some airport authorities (e.g. Philippine immigration officers) also stamp the boarding card with the departure date. Passengers in turn have to present their stamped boarding card to staff at the gate to be allowed to board. As such, airlines may not extend the mobile boarding card feature to all of their passengers on certain flights. Print-at-home boarding passes A print-at-home boarding pass is a document that a traveller can print at home, at their office, or anywhere with an Internet connection and printer, giving them permission to board an airplane for a particular flight. British Airways CitiExpress, the first to pioneer this self-service initiative, piloted it in 1999 on its London City Airport routes to minimize queues at check-in desks. The CAA (Civil Aviation Authority) approved the introduction of the 3D boarding pass in February 2000. Early adoption by passengers was slow, except among business travellers. However, the advent of low-cost carriers that charged for not using print-at-home boarding passes was the catalyst to shift consumers away from traditional at-airport check-in functions. This paved the way for British Airways to become the first global airline to deploy self-service boarding passes using this now ubiquitous technology. Many airlines encourage travellers to check in online up to a month before their flight and obtain their boarding pass before arriving at the airport. Some carriers offer incentives for doing so (e.g., in 2015, US Airways offered 1000 bonus miles to anyone checking in online), while others charge fees for checking in or printing one's boarding pass at the airport. Benefits Cost efficient for the airline – Passengers who print their own boarding pass reduce airline and airport staffing and infrastructure costs for check-in Passengers without baggage to drop do not have to drop by the check-in desk or self-service check-in machines at the airport and can go straight to security checks. Exceptions for this may be international passengers that require document verification (e.g. those that require a visa for their destination). Problems Passengers have to remember to check in in advance of their flight. Passengers need to have access to a printer and provide the paper and ink themselves or find printing points that already have them, to avoid being charged to print their boarding passes at the airport. Affordable access to printers equipped with paper and ink one can use to print one's boarding pass can be difficult to find while travelling away from home or the office, although some airlines have responded by allowing passengers to check in further in advance. Additionally, some hotels have computer terminals that allow passengers to access their airlines' website to print out boarding cards, or passengers can email the boarding cards to the hotel's reception, which can print them out for them. Some kinds of printers such as older dot matrix printers may not print the QR barcode portion legibly enough to be read accurately by the scanners. Some budget airlines which have moved towards passengers printing their boarding passes in advance may charge an unexpected hidden fee to print the boarding pass at the airport, often in excess of the cost of the flight itself. This, along with other such hidden costs, has led to allegations of false advertising and drip pricing being levelled towards the budget airlines in question.
Print-at-home boarding pass advertising In a bid to boost ancillary revenue from other sources of in-flight advertising, many airlines have turned to targeted advertising technologies aimed at passengers from their departure city to their destination. Print-at-home boarding passes display adverts chosen specifically for given travellers based on their anonymised passenger information, which does not contain any personally identifiable data. Advertisers are able to target specific demographic information (age range, gender, nationality) and route information (origin and destination of flight). The same technology can also be used to serve advertising on airline booking confirmation emails, itinerary emails, and pre-departure reminders. Advantages of print-at-home boarding pass advertising Ability to use targeted advertising technologies to target messaging to relevant demographics and routes – providing travellers with offers that are likely to be relevant and useful High engagement level – research by the Global Passenger Survey has shown that on average, travellers look at their boarding pass over four times across 12 keytouch points in their journey The revenues airlines gain from advertising can help to offset operating costs and reduce ticket price rises for passengers Concerns of print-at-home boarding pass advertising Some passengers find the advertising intrusive The advertising uses additional quantities of the passenger's ink when printing at home See also Airline ticket Auto check-in Secondary Security Screening Selection (SSSS rating) References Bibliography Qantas boosts mobile device check-in options Northwest Airlines offer E-Boarding Pass functionality for their passengers Vueling: Now You Can Use Your Mobile as a Boarding Pass! Lufthansa offers mobile boarding pass worldwide Bar Coded Boarding Passes – Secure, Mobile and On the way Qatar launch mobile boarding pass service Mobile Boarding Pass Innovation Takes off with Qatar TSA Expands Paperless Boarding Pass Pilot Program to Additional Airports and Airlines Mobile boarding passes come to Barcelona Airport Spanair extend their mobile boarding pass service External links History of paper boarding passes from CNN (with photos) The Latest Development of paperless boarding pass technology International Air Transport Association (IATA) Airline tickets Civil aviation Encodings Automatic identification and data capture
Boarding pass
Technology
3,195
65,578,994
https://en.wikipedia.org/wiki/CYP18%20family
Cytochrome P450, family 18, also known as CYP18, is an animal cytochrome P450 family found in insect genomes. It is involved in insecticide resistance. The first member gene identified was CYP18A1, from a Drosophila melanogaster fly, acting as a dimethylnitrosamine demethylase. References Insect genes 18 Protein families
CYP18 family
Biology
85
36,862,950
https://en.wikipedia.org/wiki/20%20Vulpeculae
20 Vulpeculae is a single star located around 1,170 light years away in the northern constellation of Vulpecula. It is visible to the naked eye as a dim, blue-white hued star with an apparent visual magnitude of 5.91. The object is moving closer to the Earth with a heliocentric radial velocity of −22 km/s. This is a Be star with a stellar classification of B7 Ve. It is spinning rapidly with a projected rotational velocity of 236 km/s (compared to a critical velocity of 332 km/s) and has an estimated polar inclination of 71.1°. The star has four times the mass of the Sun and is radiating around 460 times the Sun's luminosity from its photosphere at an effective temperature of 12,050 K. References External links B-type main-sequence stars Be stars Vulpecula Durchmusterung objects Vulpeculae, 20 192044 099531 7719
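The luminosity and effective temperature quoted above imply a stellar radius through the Stefan-Boltzmann law, L = 4πR²σT⁴. The Python sketch below is only an illustrative consistency check using the figures in this article together with nominal solar values; the resulting radius is a derived estimate, not a value taken from the cited sources.

    import math

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    L_SUN = 3.828e26         # nominal solar luminosity, W
    R_SUN = 6.957e8          # nominal solar radius, m

    def radius_from_luminosity(lum_solar, t_eff_kelvin):
        # Invert L = 4 * pi * R^2 * sigma * T^4 to obtain the radius.
        lum_watts = lum_solar * L_SUN
        r_metres = math.sqrt(lum_watts / (4 * math.pi * SIGMA * t_eff_kelvin**4))
        return r_metres / R_SUN

    # Values quoted above: about 460 L_Sun at an effective temperature of 12,050 K,
    # giving a radius of roughly 5 solar radii.
    print(f"Implied radius = {radius_from_luminosity(460, 12050):.1f} R_Sun")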
20 Vulpeculae
Astronomy
205
2,445,607
https://en.wikipedia.org/wiki/Formulario%20mathematico
Formulario Mathematico (Latino sine flexione: Formulary for Mathematics) is a book by Giuseppe Peano which expresses fundamental theorems of mathematics in a symbolic language developed by Peano. The author was assisted by Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti. The Formulario was first published in 1894. The fifth and last edition was published in 1908. Nicolas Bourbaki described Peano's notation in the Formulario as "following current mathematical usage, and introducing many well-chosen abbreviating symbols, his language succeeded moreover in being fairly readable, ..." Hubert Kennedy wrote "the development and use of mathematical logic is the guiding motif of the project". He also explains the variety of Peano's publication under the title: the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario! Peano believed that students needed only precise statement of their lessons. He wrote: Each professor will be able to adopt this Formulario as a textbook, for it ought to contain all theorems and all methods. His teaching will be reduced to showing how to read the formulas, and to indicating to the students the theorems that he wishes to explain in his course. Such a dismissal of the oral tradition in lectures at universities was the undoing of Peano's own teaching career. Notes References Ivor Grattan-Guinness (2000) The Search for Mathematical Roots 1870-1940. Princeton University Press. 1895 non-fiction books 1908 non-fiction books Mathematics books Mathematical terminology Mathematical logic Mathematical symbols
Formulario mathematico
Mathematics
454
17,842,616
https://en.wikipedia.org/wiki/Keith%20Campbell%20%28biologist%29
Keith Henry Stockman Campbell (23 May 1954 – 5 October 2012) was a British biologist who was a member of the team at Roslin Institute that in 1996 first cloned a mammal, a Finnish Dorset lamb named Dolly, from fully differentiated adult mammary cells. He was Professor of Animal Development at the University of Nottingham. In 2008, he received the Shaw Prize for Medicine and Life Sciences jointly with Ian Wilmut and Shinya Yamanaka for "their works on the cell differentiation in mammals". Education Campbell was born in Birmingham, England, to an English mother and Scottish father. He started his education in Perth, Scotland, but, when he was eight years old, his family returned to Birmingham, where he attended King Edward VI Camp Hill School for Boys. He obtained his Bachelor of Science degree in microbiology from the Queen Elizabeth College, University of London (now part of King's College London). In 1983 Campbell was awarded the Marie Curie Research Scholarship, which led to postgraduate studies and later his PhD from the University of Sussex (Brighton, England, UK). Research and career Campbell's interest in cloning mammals was inspired by work done by Karl Illmensee and John Gurdon. Working at the Roslin Institute since 1991, Campbell became involved with the cloning efforts led by Ian Wilmut. In July 1995 Keith Campbell and Bill Ritchie succeeded in producing a pair of lambs, Megan and Morag from embryonic cells, which had differentiated in culture. In 1996, a team led by Ian Wilmut with Keith Campbell as the main contributor, used the same technique and shocked the world by successfully cloning a sheep from adult mammary cells. Dolly, a Finn Dorset sheep named after the singer Dolly Parton, was born in 1996 and lived to be six years old (dying from a viral infection and not old age, as has been suggested). Campbell had a key role in the creation of Dolly, as he had the crucial idea of co-ordinating the stages of the "cell cycle" of the donor somatic cells and the recipient eggs and using diploid quiescent or "G0" arrested somatic cells as nuclear donors. In 2006, Ian Wilmut admitted that Campbell deserved "66 per cent" of the credit. In 1997, Ritchie and Campbell in collaboration with PPL (Pharmaceutical Proteins Limited) created another sheep named "Polly", created from genetically altered skin cells containing a human gene. In 2000, after joining PPL Ltd, Campbell and his PPL team (based in North America) were successful in producing the world's first piglets by Somatic-cell nuclear transfer (SCNT), the so-called cloning technique. Furthermore, the PPL teams based in Roslin, Scotland and Blacksburg (USA) used the technique to produce the first gene targeted domestic animals as well as a range of animals producing human therapeutic proteins in their milk. From November 1999, Campbell held the post of Professor of Animal Development, Division of Animal Physiology, School of Biosciences at the University of Nottingham where he continued to study embryo growth and differentiation. He supported the use of SCNT for the production of personalised stem cell therapies and for the study of human diseases and the use of cybrid embryo production to overcome the lack of human eggs available for research. Stem cells can be isolated from embryonic, fetal and adult derived material and more recently by overexpression of certain genes for the production of "induced pluripotent cells". 
Campbell believed all potential stem cell populations should be used for both basic and applied research which may provide basic scientific knowledge and lead to the development of cell therapies. Awards and honours In 2008, he received the Shaw Prize for Medicine and Life Sciences jointly with Ian Wilmut and Shinya Yamanaka. He was awarded the Pioneer Award from the International Embryo Transfer Society posthumously in 2015. Personal life Campbell died on 5 October 2012, aged 58, after accidentally hanging himself in his bedroom at his Ingleby, Derbyshire home, whilst heavily intoxicated. It was determined at the inquest that he had been behaving erratically at the time and had no actual intention to kill himself; the verdict was a death by misadventure. He was buried at Bretby Crematorium, Derbyshire. He is survived by his wife, Kathy, and two daughters, Claire and Lauren. References 1954 births 2012 deaths 20th-century British biologists 20th-century British inventors 21st-century British biologists Academics of the University of Nottingham Accidental deaths in England Alcohol-related deaths in England Alumni of the University of London Alumni of the University of Sussex Cloning Deaths by hanging People from Perth, Scotland Scientists from Birmingham, West Midlands
Keith Campbell (biologist)
Engineering,Biology
963
5,629,262
https://en.wikipedia.org/wiki/Cyclin%20E
Cyclin E is a member of the cyclin family. Cyclin E binds to G1 phase Cdk2, which is required for the transition from G1 to S phase of the cell cycle that determines initiation of DNA duplication. The Cyclin E/CDK2 complex phosphorylates p27Kip1 (an inhibitor of Cyclin D), tagging it for degradation, thus promoting expression of Cyclin A, allowing progression to S phase. Functions of Cyclin E Like all cyclin family members, cyclin E forms complexes with cyclin-dependent kinases. In particular, Cyclin E binds with CDK2. Cyclin E/CDK2 regulates multiple cellular processes by phosphorylating numerous downstream proteins. Cyclin E/CDK2 plays a critical role in the G1 phase and in the G1-S phase transition. Cyclin E/CDK2 phosphorylates retinoblastoma protein (Rb) to promote G1 progression. Hyper-phosphorylated Rb no longer interacts with the E2F transcription factor, thus releasing it to promote expression of genes that drive cells through G1 phase into S phase. Cyclin E/CDK2 also phosphorylates p27 and p21 during G1 and S phases, respectively. Smad3, a key mediator of the TGF-β pathway, which inhibits cell cycle progression, can be phosphorylated by cyclin E/CDK2. The phosphorylation of Smad3 by cyclin E/CDK2 inhibits its transcriptional activity and ultimately facilitates cell cycle progression. CBP/p300 and E2F-5 are also substrates of cyclin E/CDK2. Phosphorylation of these two proteins stimulates the transcriptional events during cell cycle progression. Cyclin E/CDK2 can phosphorylate p220(NPAT) to promote histone gene transcription during cell cycle progression. Apart from the function in cell cycle progression, cyclin E/CDK2 plays a role in the centrosome cycle. This function is performed by phosphorylating nucleophosmin (NPM). Then NPM is released from binding to an unduplicated centrosome, thereby triggering duplication. CP110 is another cyclin E/CDK2 substrate, which is involved in centriole duplication and centrosome separation. Cyclin E/CDK2 has also been shown to regulate the apoptotic response to DNA damage via phosphorylation of FOXO1. Cyclin E and Cancer Over-expression of cyclin E correlates with tumorigenesis. It is involved in various types of cancers, including breast, colon, bladder, skin and lung cancer. DNA copy-number amplification of cyclin E1 is involved in brain cancer. Besides that, dysregulated cyclin E activity causes cell lineage-specific abnormalities, such as impaired maturation due to increased cell proliferation and apoptosis or senescence. Several mechanisms lead to the deregulated expression of cyclin E. In most cases, gene amplification causes the overexpression. Defective proteasome-mediated degradation is another mechanism. Loss-of-function mutations of FBXW7 were found in several cancer cells. FBXW7 encodes F-box proteins which target cyclin E for ubiquitination. Cyclin E overexpression can lead to G1 shortening, decrease in cell size or loss of serum requirement for proliferation. Dysregulation of cyclin E occurs in 18-22% of breast cancers. Cyclin E is a prognostic marker in breast cancer; its altered expression increases with the stage and grade of the tumor. Low molecular weight cyclin E isoforms have been shown to be of great pathogenetic and prognostic importance for breast cancer. These isoforms are resistant to CKIs, bind CDK2 more efficiently and stimulate cell cycle progression more effectively. They have proved to be a remarkable prognostic marker for early-stage, node-negative breast cancer.
Importantly, recent research indicates that cyclin E overexpression is a mechanism of trastuzumab resistance in HER2+ breast cancer patients; co-treatment of trastuzumab with CDK2 inhibitors may therefore be a valid strategy. Cyclin E overexpression is implicated in carcinomas at various sites along the gastrointestinal tract. Among these carcinomas, cyclin E appears to be most important in stomach and colon cancer. Cyclin E overexpression has been found in 50–60% of gastric adenomas and adenocarcinomas. In ~10% of colorectal carcinomas, cyclin E gene amplification is found, sometimes together with CDK2 gene amplification. Cyclin E is also a useful prognostic marker for lung cancer: there is a significant association between cyclin E over-expression and prognosis, with increased expression of cyclin E believed to correlate with poorer prognosis. References External links Cell cycle regulators Proteins
Cyclin E
Chemistry
1,126
387,703
https://en.wikipedia.org/wiki/Orphanage
An orphanage is a residential institution, total institution or group home, devoted to the care of orphans and children who, for various reasons, cannot be cared for by their biological families. The parents may be deceased, absent, or abusive. There may be substance abuse or mental illness in the biological home, or the parent may simply be unwilling to care for the child. The legal responsibility for the support of abandoned children differs from country to country, and within countries. Government-run orphanages have been phased out in most developed countries during the latter half of the 20th century but continue to operate in many other regions internationally. It is now generally accepted that orphanages are detrimental to the emotional wellbeing of children, and government support goes instead towards supporting the family unit. A few large international charities continue to fund orphanages, but most are still commonly founded by smaller charities and religious groups. Especially in developing countries, orphanages may prey on vulnerable families at risk of breakdown and actively recruit children to ensure continued funding. Orphanages in developing countries are rarely run by the state. However, not all orphanages that are state-run are less corrupted; the Romanian orphanages, like those in Bucharest, were founded due to the soaring population numbers catalyzed by dictator Nicolae Ceaușescu, who banned abortion and birth control and incentivized procreation in order to increase the Romanian workforce. Today's residential institutions for children, also described as congregate care, include group homes, residential child care communities, children's homes, refuges, rehabilitation centers, night shelters, and youth treatment centers. History The Romans formed their first orphanages around 400 AD. Jewish law prescribed care for the widow and the orphan, and Athenian law supported all orphans of those killed in military service until the age of eighteen. Plato (Laws, 927) says: "Orphans should be placed under the care of public guardians. Men should have a fear of the loneliness of orphans and of the souls of their departed parents. A man should love the unfortunate orphan of whom he is guardian as if he were his own child. He should be as careful and as diligent in the management of the orphan's property as of his own or even more careful still." The care of orphans was referred to bishops and, during the Middle Ages, to monasteries. As soon as they were old enough, children were often given as apprentices to households to ensure their support and to learn an occupation. In medieval Europe, care for orphans tended to reside with the Church. The Elizabethan Poor Laws were enacted at the time of the Reformation and placed public responsibility on individual parishes to care for the indigent poor. Foundling Hospitals The growth of sentimental philanthropy in the 18th century led to the establishment of the first charitable institutions that would cater to orphans. The Foundling Hospital was founded in 1741 by the philanthropic sea captain Thomas Coram in London, England, as a children's home for the "education and maintenance of exposed and deserted young children." The first children were admitted into a temporary house located in Hatton Garden. At first, no questions were asked about child or parent, but a distinguishing token was put on each child by the parent. 
On reception, children were sent to wet nurses in the countryside, where they stayed until they were about four or five years old. At sixteen, girls were generally apprenticed as servants for four years; at fourteen, boys were apprenticed into a variety of occupations, typically for seven years. There was a small benevolent fund for adults. In 1756, the House of Commons resolved that all children offered should be received, that local receiving places should be appointed all over the country, and that the funds should be publicly guaranteed. A basket was accordingly hung outside the hospital; the maximum age for admission was raised from two months to twelve, and a flood of children poured in from country workhouses. Parliament soon came to the conclusion that the indiscriminate admission should be discontinued. The hospital adopted a system of receiving children only with considerable sums. This practice was finally stopped in 1801, and it henceforth became a fundamental rule that no money was to be received. 19th century By the early nineteenth century, the problem of abandoned children in urban areas, especially London, began to reach alarming proportions. The workhouse system, instituted in 1834, although often brutal, was an attempt at the time to house orphans as well as other vulnerable people in society who could not support themselves in exchange for work. Conditions, especially for the women and children, were so bad as to cause an outcry among the social reform–minded middle-class; some of Charles Dickens' most famous novels, including Oliver Twist, highlighted the plight of the vulnerable and the often abusive conditions that were prevalent in the London orphanages. Clamour for change led to the birth of the orphanage movement. In England, the movement really took off in the mid-19th century although orphanages such as the Orphan Working Home in 1758 and the Bristol Asylum for Poor Orphan Girls in 1795, had been set up earlier. Private orphanages were founded by private benefactors; these often received royal patronage and government oversight. Ragged schools, founded by John Pounds and the Lord Shaftesbury were also set up to provide pauper children with basic education. Orphanages were also set up in the United States from the early 19th century; for example, in 1806, the first private orphanage in New York (the Orphan Asylum Society, now Graham Windham) was co-founded by Elizabeth Schuyler Hamilton, widow of Alexander Hamilton, one of the Founding Fathers of the United States. Under the influence of Charles Loring Brace, foster care became a popular alternative from the mid-19th century. Later, the Social Security Act of 1935 improved conditions by authorizing Aid to Families with Dependent Children as a form of social security. A very influential philanthropist of the era was Thomas John Barnardo, the founder of the charity Barnardos. Becoming aware of the great numbers of homeless and destitute children adrift in the cities of England and encouraged by the 7th Earl of Shaftesbury and the 1st Earl Cairns, he opened the first of the "Dr. Barnardo’s Homes" in 1870. By his death in 1905, he had established 112 district homes, which searched for and received waifs and strays, to feed, clothe and educate them. 
The system under which the institution was carried on is broad as follows: the infants and younger girls and boys were chiefly "boarded out" in rural districts; girls above fourteen years of age were sent to the industrial training homes, to be taught useful domestic occupations; boys above seventeen years of age were first tested in labor homes and then placed in employment at home, sent to sea, or emigrated; boys of between thirteen and seventeen years of age were trained for the various trades for which they might be mentally or physically fitted. Deinstitutionalization Evidence from a variety of studies supports the vital importance of attachment security and later development of children. Deinstitutionalization of orphanages and children's homes program in the United States began in the 1950s, after a series of scandals involving the coercion of birth parents and abuse of orphans (notably at Georgia Tann's Tennessee Children's Home Society). In Romania, a decree was established that aggressively promoted population growth, banning contraception and abortions for women with fewer than four children, despite the wretched poverty of most families. After Ceausescu was overthrown, he left a society unable and unwilling to take care of its children. Researchers conducted a study to see what the implications of this early childhood neglect were on development. Typically reared Romanian children showed high rates of secure attachment. Whereas the institutionally raised children showed huge rates of disorganized attachment. Many countries accepted the need to de-institutionalize the care of vulnerable children—that is, close down orphanages in favor of foster care and accelerated adoption. Foster care operates by taking in children from their homes due to the lack of care or abuse of their parents, where orphanages take in children with no parents or children whose parents have dropped them off for a better life, typically due to income. Major charities are increasingly focusing their efforts on the re-integration of orphans in order to keep them with their parents or extended family and communities. Orphanages are no longer common in the European Community, and Romania, in particular, has struggled greatly to reduce the visibility of its children's institutions to meet conditions of its entry into the European Union. Some have stated it is important to understand the reasons for child abandonment, then set up targeted alternative services to support vulnerable families at risk of separation such as mother and baby units and day care centres. Comparison to alternatives Research from the Bucharest Early Intervention Project (BEIP) is often cited as demonstrating that residential institutions negatively impact the wellbeing of children. The BEIP selected orphanages in Bucharest, Romania that raised abandoned children in socially and emotionally deprived environments in order to study the changes in development of infants and children after they had been placed with specially trained foster families in the local community. This study demonstrated how the loving attention typically provided to children by their parents or caregivers is pivotal for optimal human development, specifically of the brain; adequate nutrition is not enough. Further research of children who were adopted from institutions in Eastern European countries to the US demonstrated that for every 3.5 months that an infant spent in the institution, they lagged behind their peers in growth by 1 month. 
Further, a meta-analysis of research on the IQs of children in orphanages found lower IQs among the children in many institutions, but this result was not found in the low-income country setting. Worldwide, residential institutions like orphanages can often be detrimental to the psychological development of affected children. In countries where orphanages are no longer in use, the long-term care of unwarded children by the state has been transitioned to a domestic environment, with an emphasis on replicating a family home. Many of these countries, such as the United States, utilize a system of monetary stipends paid to foster parents to incentivize and subsidize the care of state wards in private homes. A distinction must be made between foster care and adoption, as adoption would remove the child from the care of the state and transfer the legal responsibility for that child's care to the adoptive parent completely and irrevocably, whereas, in the case of foster care, the child would remain a ward of the state with the foster parent acting only as a caregiver. Orphanages, especially larger ones, have had some well publicised examples of poor care. In large institutions children, but particularly babies, may not receive enough eye contact, physical contact, and stimulation to promote proper physical, social or cognitive development. In the worst cases, orphanages can be dangerous and unregulated places where children are subject to abuse and neglect. Children living in orphanages for prolonged periods get behind in development goals, and have worse mental health. Orphanage children are not included in statistics making it easy to traffic them or abuse them in other ways. There are campaigns to include orphanage children and street children in progress statistics. Foster care The benefit of foster care over orphanages is disputed. One significant study carried out by Duke University concluded that institutional care in America in the 20th century produced the same health, emotional, intellectual, mental, and physical outcomes as care by relatives, and better than care in the homes of strangers. One explanation for this is the prevalence of permanent temporary foster care. This is the name for a long string of short stays with different foster care families. Permanent temporary foster care is highly disruptive to the child and prevents the child from developing a sense of security or belonging. Placement in the home of a relative maintains and usually improves the child's connection to family members. Experts and child advocates maintain that orphanages are expensive and often harm children's development by separating them from their families and that it would be more effective and cheaper to aid close relatives who want to take in the orphans. Group homes Another alternative is group homes which are used for short-term placements. They may be residential treatment centers, and they frequently specialize in a particular population with psychiatric or behavioral problems, e.g., a group home for children and teens with autism, eating disorders, or substance abuse problems or child soldiers undergoing decommissioning. Kinship care Most children who live in orphanages are not orphans; four out of five children in orphanages have at least one living parent and most having some extended family. 
Developing countries and their governments rely on kinship care to aid in the orphan crisis because it is cheaper to financially help extended families in taking in an orphaned child than it is to institutionalize them. Commercial orphanages While many orphanages are run as not for profit institutions, some orphanages are run as for profit ventures. This has been criticized as incentivizing against the welfare of the orphans. Most of the children living in institutions around the world have a surviving parent or close relative, and they most commonly entered orphanages because of poverty. It is speculated that flush with money, orphanages are increasing and push for children to join even though demographic data show that even the poorest extended families usually take in children whose parents have died. Visitors to developing countries can be taken in by orphanage scams, which can include orphanages set up as a front to get foreigners to pay school fees of orphanage directors' extended families. Alternatively the children whose upkeep is being funded by foreigners may be sent to work, not to school, the exact opposite of what the donor is expecting. The worst even sell children. In Cambodia, from 2005 to 2017, the number of orphanages increased by 75%, with many of these orphanages renting children from poor families for $25/month. Families are promised that their children can get free education and food here, but what really happens is that they are used as props to garner donations. Some are also bought from their parents for very little and passed on to westerners who pay a large fee to adopt them. This also happens in China. In Nepal, orphanages can be used as a way to remove a child from their parents before placing them for adoption overseas, which is equally lucrative to the owners who receive a number of official and unofficial payments and "donations". In other countries, such as Indonesia, orphanages are run as businesses, which will attract donations and make the owners rich; often the conditions orphans are kept in will deliberately be poor to attract more donations. Worldwide Developing nations are lacking in child welfare and their well-being because of a lack of resources. Research that is being collected in the developing world shows that these countries focus purely on survival indicators instead of a combination of their survival and other positive indicators like a developed nation would do. Europe The orphanages and institutions remaining in Europe tend to be in Eastern Europe and are generally state-funded. Albania There are estimated to be about 31,000 orphans (0–14 years old) in Albanian orphanages. (2012 statistics) In most cases they were abandoned by their parents. At 14 they are required, by law, to leave their orphanage and live on their own. There are approximately 10 small orphanages in Albania; each one having only 12-40 children residing there. The larger ones would be state-run. Bosnia and Herzegovina SOS Children's Villages giving support to 240 orphaned children. Bulgaria The Bulgarian government has shown interest in strengthening children's rights. In 2010, Bulgaria adopted a national strategic plan for the period 2010–2025 to improve the living standards of the country's children. Bulgaria is working hard to get all institutions closed within the next few years and find alternative ways to take care of the children. 
"Support is sporadically given to poor families and work during daytime; correspondingly, different kinds of day centers have started up, though the quality of care in these centers is poorly measured and difficult to monitor. A smaller number of children have also been able to be relocated into foster families". There are 7000 children living in Bulgarian orphanages wrongly classified as orphaned. Only 10 percent of these are orphans, with the rest of the children placed in orphanages for temporary periods when the family is in crisis. Estonia As of 2009, there are 35 different orphanages. Hungary A comprehensive national strategy for strengthening the rights of children was adopted by Parliament in 2007 and will run until 2032. Child flow to orphanages has been stopped and children are now protected by social services. Violation of children's rights leads to litigation. Lithuania In Lithuania there are 105 institutions. 41 percent of the institutions each have more than 60 children. Lithuania has the highest number of orphaned children in Northern Europe. Poland Children's rights enjoy relatively strong protection in Poland. Orphaned children are now protected by social services. Social Workers' opportunities have increased by establishing more foster homes and aggressive family members can now be forced away from home, instead of replacing the child/children. Moldova More than 8800 children are being raised in state institutions, but only three percent of them are orphans. Romania The Romanian child welfare system is in the process of being revised and has reduced the flow of infants into orphanages. According to Baroness Emma Nicholson, in some counties Romania now has "a completely new, world class, state of the art, child health development policy." Dickensian orphanages remain in Romania, but Romania seeks to replace institutions by family care services, as children in need will be protected by social services. As of 2018, there were 17,718 children in old-style residential centers, a significant decrease from about 100,000 in 1990. Serbia There are many state orphanages "where several thousand children are kept and which are still part of an outdated child care system". The conditions for them are bad because the government does not pay enough attention in improving the living standards for disabled children in Serbia's orphanages and medical institutions. Slovakia The committee made recommendations, such as proposals for the adoption of a new "national 14" action plan for children for at least the next five years, and the creation of an independent institution for the protection of child rights. Sweden One of the first orphanages in Sweden was the Stora Barnhuset (1633-1922) in Stockholm, which remained the biggest orphanage in Sweden for centuries. In 1785, however, a reform by Gustav III of Sweden stipulated that orphans should first and foremost always be placed in foster homes when that was possible. In Sweden, there are 5,000 children in the care of the state. None of them are currently living in an orphanage, because there is a social service law which requires that the children reside in a family home. United Kingdom During the Victorian era, child abandonment was rampant, and orphanages were set up to reduce infant mortality. Such places were often so full of children that nurses often administered Godfrey's Cordial, a special concoction of opium and treacle, to soothe baby colic. 
Orphaned children were placed in either prisons or the poorhouse/workhouse, as there were so few places in orphanages, or else they were left to fend for themselves on the street. Such openings in orphanages as were available could only be obtained by collecting votes for admission, placing them out of reach of poor families. Known orphanages are: Sub-Saharan Africa The majority of African orphanages (especially in Sub-Saharan Africa) appear to be funded by donors, often from Western nations, rather than by domestic governments. Ethiopia "For example, in the Jerusalem Association Children's Home (JACH), only 160 children remain of the 785 who were in JACH's three orphanages." / "Attitudes regarding the institutional care of children have shifted dramatically in recent years in Ethiopia. There appears to be a general recognition by MOLSA and the NGOs with which Pact is working that such care is, at best, a last resort and that serious problems arise with the social reintegration of children who grow up in institutions, and deinstitutionalization through family reunification and independent living are being emphasized." Ghana A 2007 survey sponsored by Africa (previously Orphan Aid Africa) and carried out by the Department of Social Welfare came up with the figure of 4,800 children in institutional care in 148 orphanages. The government is currently attempting to phase out the use of orphanages in favor of foster care placements and adoption. At least eighty-eight homes have been closed since the passage of the National Plan of Action for Orphans and Vulnerable Children. The website www.ovcghana.org details these reforms. Kenya A 1999 survey of 36,000 orphans found the following number in institutional care: 64 in registered institutions and 164 in unregistered institutions. Malawi There are about 101 orphanages in Malawi. There is a UNICEF/Government driven program on de-institutionalization, but few orphanages are yet involved in the program. Rwanda Out of 400,000 orphans, 5,000 are living in orphanages. The Government of Rwanda are working with Hope and Homes for Children to close the first institution and develop a model for community-based childcare which can be used across the country and ultimately Africa Tanzania "Currently, there are 52 orphanages in Tanzania caring for about 3,000 orphans and vulnerable children." A world bank document on Tanzania showed it was six times more expensive to institutionalize a child there than to help the family become functional and support the child themselves. Nigeria In Nigeria, a rapid assessment of orphans and vulnerable children conducted in 2004 with UNICEF support revealed that there were about seven million orphans in 2003 and that 800,000 more orphans were added during that same year. Out of this total number, about 1.8  million are orphaned by HIV/AIDS. With the spread of HIV/AIDS, the number of orphans is expected to increase rapidly in the coming years to 8.2  million by 2010. South Africa Since 2000, South Africa does not license orphanages any more but they continue to be set up unregulated and potentially more harmful. Theoretically, the policy supports community-based family homes but this is not always the case. One example is the homes operated by Thokomala. Zambia Zimbabwe There are 39 privately run children's charity homes, or orphanages, in the country, and the government operates eight of its own. 
Privately run Orphanages can accommodate an average of 2000 children, though some are very small and located in very remote areas, hence can take in less than 150 children. Statistics on the total number of children in orphanages nationwide are unavailable, but caregivers say their facilities were becoming unmanageably overwhelmed almost on a daily basis. Between 1994 and 1998, the number of orphans in Zimbabwe more than doubled from 200,000 to 543,000, and in five years, the number is expected to reach 900,000. (Unfortunately, there is no room for these children.) Togo In Togo, there were an estimated 280,000 orphans under 18 years of age in 2005, 88,000 of them orphaned by AIDS. Ninety-six thousand orphans in Togo attend school. Sierra Leone Children (0–17 years) orphaned by AIDS, 2005, estimate 31,000 Children (0–17 years) orphaned due to all causes, 2005, estimate 340,000 Orphan school attendance ratio, 1999–2005 71,000 Senegal Children (0–17 years) orphaned by AIDS, 2005, estimate 25,000 Children (0–17 years) orphaned due to all causes, 2005, estimate 560,000 Orphan school attendance ratio, 1999–2005 74,000 South Asia Nepal There are at least 602 child care homes housing 15,095 children in Nepal "Orphanages have turned into a Nepalese industry there is rampant abuse and a great need for intervention." Many do not require adequate checks of their volunteers, leaving children open to abuse. Afghanistan "At Kabul's two main orphanages, Alauddin and Tahia Maskan, the number of children enrolled has increased almost 80 percent since last January, from 700 to over 1,200 children. Almost half of these come from families who have at least one parent, but who can't support their children." The non-governmental organisation Mahboba's promise assists orphans in contemporary Afghanistan. Nowadays the number of orphanages had changed. There are approximately 19 orphanages only in Kabul. Bangladesh "There are no statistics regarding the actual number of children in welfare institutions in Bangladesh. The Department of Social Services, under the Ministry of Social Welfare, has a major program named Child Welfare and Child Development in order to provide access to food, shelter, basic education, health services and other basic opportunities for hapless children." (The following numbers mention capacity only, not actual numbers of orphans at present.) 9,500 – State institutions 250 – babies in three available "baby homes" 400 – Destitute Children's Rehabilitation Centre 100 – Vocational Training Centre for Orphans and Destitute Children 1,400 -Sixty-five Welfare and Rehabilitation Programmes for Children with Disability The private welfare institutions are mostly known as orphanages and madrassahs. The authorities of most of these orphanages put more emphasis on religion and religious studies. One example follows: 400 – Approximately – Nawab Sir Salimullah Muslim Orphanage. Maldives Orphans, Children (0–17 years) orphaned due to all causes, 2010, estimate 51. India India is in the top 10 and also has a very large number of orphans as well as a destitute child population. Orphanages operated by the state are generally known as juvenile homes. In addition, there is a vast number of privately run orphanages running into thousands spread across the country. These are run by various trusts, religious groups, individual citizens, citizens groups, NGO's, etc. 
While some of these places endeavor to place the children for adoption a vast majority just care and educate them till they are of legal majority age and help place them back on their feet. Prominent organizations in this field include BOYS TOWN, SOS children's villages, etc. There have been scandals especially with regard to adoption. Since government rules restrict funds unless there are a certain number of residents, some orphanages make sure the resident numbers remain high at the cost of adoption. Pakistan According to a UNICEF report in 2016, there are around 4.2 million orphaned children in Pakistan. Pakistan has had sizable economic growth from 1950 to 1999 yet they aren't performing well in multiple social indicators like education and health, and this is mainly due to the corrupt and unstable government. Pakistan heavily relies on the nonprofit sector and zakat to finance social issues such as aid for orphans. Zakat is a financial obligation on Muslims which requires one to donate 2.5% of the family's income to charity, and it is specifically mentioned in the Quran to take care of orphans. With the new use of zakat money from donations to investments it has a lot of potential in benefiting the development as well as the ultimate goal of poverty alleviation. The Pakistan government relies on this public sector on taking care of local issues so that they do not have the burden. Furthermore, only 6 percent of cash revenues are contributed to non-profits in Pakistan, and they are heavily favored by the government because it saves them money as non-profits are taking care of issues such as orphan care. East and Southeast Asia Taiwan The number of orphanages and orphans drastically dropped from 15 institutions and 2,216 persons in 1971 to 9 institutions and 638 persons by the end of 2001. Thailand There are still a substantial number of NGOs and informal Orphanages in Thailand, particularly in Northern Thailand near the borders of Laos and Myanmar, e.g. around Chiang Rai. Very few of the children in these establishments are orphans, most have living parents. They attract funding from well-meaning tourists. Often protecting the children from trafficking/abuse is cited but the names and photographs of the children are published in marketing material to attract more funding. The reality is that the safest environment for these children is almost always with their parents or in their villages with familial connections where strangers are rarely seen and immediately recognized. A very few of these orphanages, go so far as to abduct or forcibly remove children from their homes, often across the border in Myanmar. The parents in local hill tribes may be encouraged to "buy a place" in the orphanage for vast sums, being told their child will have a better future. Some children's homes claim to always try to repatriate children with their families, but the local managers & director of the homes know of no such procedures or processes. Vietnam There are approximately 2 million highly vulnerable children in Vietnam with an estimated 500,000 orphaned or abandoned children. There are a number of orphanages present in the country including the Vinh Son Montagnard Orphanage, however these are generally privately funded. There are very few government run institutions. South Korea According to the Los Angeles Times, "There are now 17,000 children in public orphanages throughout the country and untold numbers at private institutions." 
Japan Approximately 39,000 children live in orphanages in Japan out of the 45,000 (2018 statistics) who are not able to live with their birth parents. However, as of 2016, Japanese orphanages are severely underfunded, relying heavily on volunteer work. There are 602 foster homes across Japan, each with 30 to 100 children. A large portion of children in these orphanages are not actually orphans but victims of domestic abuse or neglect. Cambodia As of 2010, 11,945 children lived in 269 residential care facilities in Cambodia. About 44% were placed there by a parent. However it is estimated that there are 553,000 orphans in the country. Most of these children are cared for by their extended family or community. China There are currently over 600,000 abandoned orphans living in China (some would put the figure as high as 1 million). Of these, 98% have special needs. Laos "It is stated that there are 20,000 orphaned children in Laos." However the figure generally remains unknown as about 30% of children are never registered with the government and remain invisible. In Laos nearly 50 per cent of the population lives below the poverty line and many children are involved in child labor. There are six orphanages that are run by SOS Children's Village that help with this problem. Middle East and North Africa Egypt "The [Mosques of Charity] orphanage houses about 120 children in Giza, Menoufiya and Qalyubiya." "We [Dar Al-Iwaa] provide free education and accommodation for over 200 girls and boys." "Dar Al-Mu'assassa Al-Iwaa'iya (Shelter Association), a government association affiliated with the Ministry of Social Affairs, was established in 1992. It houses about 44 children." There are also 192 children at The Awlady, 30 at Sayeda Zeinab orphanage, and 300 at My Children Orphanage. Note: There are about 185 orphanages in Egypt. The above information was taken from the following articles: "Other families" by Amany Abdel-Moneim. Al-Ahram Weekly (5/1999). "Ramadan brings a charity to Egypt's orphans". Shanghai Star (13 December 2001). "A Child by Any Other Name" by Réhab El-Bakry. Egypt Today (11/2001). Orphanage Project in Egypt—www.littlestlamb.org Sudan There is still at least one orphanage in Sudan although the conditions there have been reported as very poor. South Sudan The number of orphans is expected to be 5,000 in 2023 in South Sudan. And in 2018, the UN Children Fund (UNICEF) reported that about 15,000 children in South Sudan had become separated from their families or were missing due to conflict. Bahrain The "Royal Charity Organization" is a Bahraini governmental charity organization founded in 2001 by King Hamad ibn Isa Al Khalifah to sponsor all helpless Bahraini orphans and widows. Since then almost 7,000 Bahraini families are granted monthly payments, annual school bags, and a number of university scholarships. Graduation ceremonies, various social and educational activities, and occasional contests are held each year by the organization for the benefit of orphans and widows sponsored by the organization. Iraq UNICEF maintains the same number at present. "While the number of state homes for orphans in the whole of Iraq was 25 in 1990 (serving 1,190 children); both the number of homes and the number of beneficiaries has declined. The quality of services has also declined." A 1999 study by UNICEF "recommended the rebuilding of national capacity for the rehabilitation of orphans." The new project "will benefit all the 1,190 children placed in orphanages." 
Palestinian Territory "In 1999, the number of children living in orphanages witnessed a considerable drop as compared to 1998. The number dropped from 1,980 to 1,714 orphans. This is due to the policy of child re-integration in their household adopted by the Ministry of Social Affairs." Former Soviet Union In the post-Soviet countries, orphanages are better known as "children's homes" (). After reaching school age, all children enroll at internats () (boarding schools). Russia In 2021 it was recorded that there were 406,138 orphans living in orphan homes and families in Russia. UNICEF estimates that 95% of these children are "social orphans", meaning that they have at least one living parent who has given them up to the state. In 2011 Russian authorities registered 88,522 children who became orphans that year (down from 114,715 in 2009). There are few webpages for Russian orphanages in English. "Of a total of more than 600,000 children classified as being 'without parental care' (most of them live with other relatives and fosters), as many as one-third reside in institutions." In 2011, there were 1344 institutions for orphans in Russia, including 1094 orphanages ("children's homes") and 207 special ("corrective") orphanages for children with serious health issues. Azerbaijan It is estimated that more than 10,000 children are living in 44 orphanages. In general, "many children are abandoned due to extreme poverty and harsh living conditions. Some may be raised by family members or neighbors but the majority live in crowded orphanages until the age of fifteen when they are sent into the community to make a living for themselves." Belarus Approximate total – 1,773 (1993 statistics for "all types of orphanages") Kyrgyzstan Partial information: 85 – Ivanovka Orphanage Tajikistan There are 4 orphanages in the major cities and 64 boarding schools in Tajikistan, where 8275 children are being educated. Those four orphanages raise 185 children up to 3 years old. In total there are 160 orphans. This small number is likely due to the popularity of adopting. Ukraine Before the Russian invasion of 2022, there were an estimated 100,000 orphans in Ukraine's state-run facilities. Of this number about 80 percent are described as "social orphans", because the parents are either financially destitute, abusive, or addicted to drugs or alcohol and thus are unable to raise them. Due to a lack of funding and overcrowding the conditions at these orphanages are often poor, especially for disabled children. Since 2012 the number of children adopted by foreigners has gradually been reducing. By 2016, the number of children adopted by foreigners had been reduced to around 200 from about 2,000 in 2012. A bit more than a thousand children were adopted by Ukrainians in 2016. During 2019 1,419 children were adopted. In 2020 2,047 children were adopted, in 1,890 cases the adoption was carried out by citizens of Ukraine. Other information: thousands – Zaporizhzhia Oblast. 150 – Kyiv State Baby Orphanage 30 – Beregena Orphanage 120 – Dom Invalid Orphanage Oceania Australia Orphanages in Australia mostly closed after World War II and up to the 1970s. Instead, children are mainly put in either Kinship, Residential or Foster care. Notable former orphanages include the Melbourne Orphanage and the St. John's Orphanage in Goulburn, New South Wales. Indonesia No verifiable information for the number of children actually in orphanages. The number of orphaned and abandoned children is approximately 500,000. 
Fiji Orphans, children (0–17 years) orphaned due to all causes, 2005, estimate 25,000 North America and Caribbean Haiti Haitians and expatriate childcare professionals are careful to make it clear that Haitian orphanages and children's homes are not orphanages in the North American sense, but instead shelters for vulnerable children, often housing children whose parent(s) are poor as well as those who are abandoned, neglected or abused by family guardians. Neither the number of children or the number of institutions is officially known, but Chambre de L'Enfance Necessiteusse Haitienne (CENH) indicated that it has received requests for assistance from nearly 200 orphanages from around the country for more than 200,000 children. Although not all are orphans, many are vulnerable or originate in vulnerable families that "hoped to increase their children's opportunities by sending them to orphanages. Catholic Relief Services provides assistance to 120 orphanages with 9,000 children in the Ouest, Sud, Sud-Est and Grand'Anse, but these include only orphanages that meet their criteria. They estimate receiving ten requests per week for assistance from additional orphanages and children's homes, but some of these are repeat requests." In 2007, UNICEF estimated there were 380,000 orphans in Haiti, which has a population of just over 9 million, according to the CIA World Factbook. However, since the January 2010 earthquake, the number of orphans has skyrocketed, and the living conditions for orphans have seriously deteriorated. Official numbers are hard to find due to the general state of chaos in the country. Jamaica A large amount of children on the island of Jamaica grow up without a parental relationship as a result of their parents' death. An example of places for these lone children to go to are SOS children's villages, The Maxfield Park Children's Home and the Missionaries of the Poor facilities. Mexico There are over 700 public and privates orphanages in Mexico which house over 30,000 children. In 2018 it was estimated that 400,000 children lacked parents. Of these 100,000 are thought to be homeless. Some notable orphanages include: Casa Hogar Jeruel Orphanage in Chihuahua City, Mexico Casa Hogar Alegría United States While the term "orphanage" is no longer typically used in the United States, nearly every US state continues to operate residential group homes for children in need of a safe place to live and in which to be supported in their educational and life-skills pursuits. Homes like the Milton Hershey School in Pennsylvania, Mooseheart in Illinois and the Crossnore School and Children's Home in North Carolina continue to provide care and support for children in need. While a place like the Milton Hershey School houses nearly 2,000 children, each child lives in a small group-home environment with "house parents" who often live many years in that home. Children who grow up in these residential homes have higher rates of high school and college graduation than those who spend equivalent numbers of years in the US Foster Care system, wherein only 44 to 66 percent of children graduate from high school. Some private orphanages still exist in the United States apart from governmental child protective services processes. Following World War II, most orphanages in the U.S. began closing or converting to boarding schools or different kinds of group homes. Also, the term "children's home" became more common for those still existing. Over the past few decades, orphanages in the U.S. 
have been replaced with smaller institutions that try to provide a group home or boarding school environment. Most children who would have been in orphanages are in these residential treatment centers (RTC), residential child care communities, or with foster families. Adopting from RTCs, group homes, or foster families does not require working with an adoption agency, and in many areas, fostering to adopt is highly encouraged. Central and South America Guatemala "...currently there are about 200,000 children in orphanages." Peru It is estimated that 550,000 children grow up without parents in Peru. Many of the children in orphanages are considered “social orphans”. Significant charities that help orphans Prior to the establishment of state care for orphans in First World countries, private charities existed to take care of destitute orphans, over time other charities have found other ways to care for children. The Orphaned Starfish Foundation is a non-profit organisation based in New York City that focuses on developing vocational schools for orphans, victims of abuse and at-risk youth. It runs fifty computer centers in twenty-five countries, serving over 10,000 children worldwide Lumos works to replace institutions with community-based services that provide children with access to health, education, and social care tailored to their individual needs. Hope and Homes for Children are working with governments to deinstitutionalize their child care systems. Stockwell Home and later Birchington, started by Charles H Spurgeon, is now Spurgeons after the last orphanage closed in 1979. Spurgeons Children's Charity provides support to vulnerable and disadvantaged children and families across England. SOS Children's Villages is the world's largest non-governmental, non-denominational child welfare organization that provides loving family homes for orphaned and abandoned children. Dr. Barnardo's Homes are now simply Barnardo's after closing their last orphanage in 1989. OAfrica, previously OrphanAid Africa, has been working in Ghana since 2002, to get children out of orphanages and into families, in partnership with the government and as the only private implementing partner of the National Plan of Action. Joint Council on International Children's Services is a nonprofit child advocacy organization based in Alexandria, Virginia. It is the largest association of international adoption agencies in America, and in addition to working in 51 countries, advocates for ethical practices in American adoption agencies See also Adoption Boys Town (organization) Child abandonment Child abuse Child and family services Child and youth care Community-based care Congregate Care Cottage Homes Deinstitutionalisation Family support Florida Sheriffs Youth Ranches Foster Care Foster Care in the United States Group home Hope and Homes for Children Janusz Korczak Kinship Care Orphan Train Residential Care Residential Child Care Communities Residential education Residential treatment center Settlement movement Teaching-family model The Steele home Orphanage Wraparound (childcare) Whole Child International References Works cited External links Keeping Children Out of Harmful Institutions: Why we should be investing in family-based care Child welfare Total institutions
Orphanage
Biology
8,751
1,410,576
https://en.wikipedia.org/wiki/Schwarzschild%20geodesics
In general relativity, Schwarzschild geodesics describe the motion of test particles in the gravitational field of a central fixed mass that is, motion in the Schwarzschild metric. Schwarzschild geodesics have been pivotal in the validation of Einstein's theory of general relativity. For example, they provide accurate predictions of the anomalous precession of the planets in the Solar System and of the deflection of light by gravity. Schwarzschild geodesics pertain only to the motion of particles of masses so small they contribute little to the gravitational field. However, they are highly accurate in many astrophysical scenarios provided that is many-fold smaller than the central mass , e.g., for planets orbiting their star. Schwarzschild geodesics are also a good approximation to the relative motion of two bodies of arbitrary mass, provided that the Schwarzschild mass is set equal to the sum of the two individual masses and . This is important in predicting the motion of binary stars in general relativity. Historical context The Schwarzschild metric is named in honour of its discoverer Karl Schwarzschild, who found the solution in 1915, only about a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution. In 1931, Yusuke Hagihara published a paper showing that the trajectory of a test particle in the Schwarzschild metric can be expressed in terms of elliptic functions. Samuil Kaplan in 1949 has shown that there is a minimum radius for the circular orbit to be stable in Schwarzschild metric. Schwarzschild metric An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of an uncharged, non-rotating, spherically symmetric body of mass . The Schwarzschild solution can be written as where , in the case of a test particle of small positive mass, is the proper time (time measured by a clock moving with the particle) in seconds, is the speed of light in meters per second, is, for , the time coordinate (time measured by a stationary clock at infinity) in seconds, is, for , the radial coordinate (circumference of a circle centered at the star divided by ) in meters, is the colatitude (angle from North) in radians, is the longitude in radians, and is the Schwarzschild radius of the massive body (in meters), which is related to its mass by where is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio goes to zero. In that limit, the metric returns to that defined by special relativity. In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius of the Earth is roughly 9 mm ( inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes. Orbits of test particles We may simplify the problem by using symmetry to eliminate one variable from consideration. 
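The orders of magnitude quoted above are easy to check numerically. The following short Python sketch is an added illustration, not part of the original article: it assumes the standard relation r_s = 2GM/c² together with approximate textbook masses and radii, and reproduces the quoted ratios for the Earth and the Sun.

```python
# Minimal sketch: Schwarzschild radius r_s = 2*G*M/c**2 and the ratio r_s/r
# at a body's surface.  Physical constants and body data are approximate
# textbook values assumed for this illustration.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s

bodies = {
    # name: (mass in kg, mean radius in m)
    "Earth": (5.972e24, 6.371e6),
    "Sun":   (1.989e30, 6.957e8),
}

for name, (M, R) in bodies.items():
    r_s = 2 * G * M / c**2
    print(f"{name}: r_s = {r_s:.4g} m, r_s/R at surface = {r_s / R:.3g}")

# Expected output, roughly: Earth r_s ~ 8.9 mm with r_s/R ~ 1.4e-9
# ("one part in a billion"); Sun r_s ~ 2.95 km with r_s/R ~ 4e-6.
```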
Since the Schwarzschild metric is symmetrical about , any geodesic that begins moving in that plane will remain in that plane indefinitely (the plane is totally geodesic). Therefore, we orient the coordinate system so that the orbit of the particle lies in that plane, and fix the coordinate to be so that the metric (of this plane) simplifies to Two constants of motion (values that do not change over proper time ) can be identified (cf. the derivation given below). One is the total energy : and the other is the specific angular momentum: where is the total angular momentum of the two bodies, and is the reduced mass. When , the reduced mass is approximately equal to . Sometimes it is assumed that . In the case of the planet Mercury this simplification introduces an error more than twice as large as the relativistic effect. When discussing geodesics, can be considered fictitious, and what matters are the constants and . In order to cover all possible geodesics, we need to consider cases in which is infinite (giving trajectories of photons) or imaginary (for tachyonic geodesics). For the photonic case, we also need to specify a number corresponding to the ratio of the two constants, namely , which may be zero or a non-zero real number. Substituting these constants into the definition of the Schwarzschild metric yields an equation of motion for the radius as a function of the proper time : The formal solution to this is Note that the square root will be imaginary for tachyonic geodesics. Using the relation higher up between and , we can also write Since asymptotically the integrand is inversely proportional to , this shows that in the frame of reference if approaches it does so exponentially without ever reaching it. However, as a function of , does reach . The above solutions are valid while the integrand is finite, but a total solution may involve two or an infinity of pieces, each described by the integral but with alternating signs for the square root. When and , we can solve for and explicitly: and for photonic geodesics () with zero angular momentum (Although the proper time is trivial in the photonic case, one can define an affine parameter , and then the solution to the geodesic equation is .) Another solvable case is that in which and and are constant. In the volume where this gives for the proper time This is close to solutions with small and positive. Outside of the solution is tachyonic and the "proper time" is space-like: This is close to other tachyonic solutions with small and negative. The constant tachyonic geodesic outside is not continued by a constant geodesic inside , but rather continues into a "parallel exterior region" (see Kruskal–Szekeres coordinates). Other tachyonic solutions can enter a black hole and re-exit into the parallel exterior region. The constant solution inside the event horizon () is continued by a constant solution in a white hole. When the angular momentum is not zero we can replace the dependence on proper time by a dependence on the angle using the definition of which yields the equation for the orbit where, for brevity, two length-scales, and , have been defined by Note that in the tachyonic case, will be imaginary and real or infinite. The same equation can also be derived using a Lagrangian approach or the Hamilton–Jacobi equation (see below). The solution of the orbit equation is This can be expressed in terms of the Weierstrass elliptic function . 
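As a small numerical illustration of the radial case mentioned above (zero angular momentum, a particle falling from rest at infinity), the radial equation reduces to dr/dτ = −c√(r_s/r), which integrates to a remaining proper time τ(r) = 2r^(3/2)/(3c√r_s) until r = 0. This sketch relies on that standard textbook form, assumed here because the explicit formulas are not reproduced in the text; the one-solar-mass example is illustrative only.

```python
import math

# Sketch: proper time left before reaching r = 0 for radial free fall from rest
# at infinity (E = mc^2, L = 0).  In that case dr/dtau = -c*sqrt(r_s/r), so
# tau(r) = 2 * r**1.5 / (3 * c * sqrt(r_s)).  Standard textbook result assumed
# here, not quoted from the article text.
c = 2.998e8                     # speed of light, m/s

def proper_time_to_center(r, r_s):
    """Remaining proper time (s) from radius r to r = 0 for radial infall."""
    return 2.0 * r**1.5 / (3.0 * c * math.sqrt(r_s))

r_s_one_solar_mass = 2953.0     # Schwarzschild radius of a one-solar-mass black hole, m
tau = proper_time_to_center(r_s_one_solar_mass, r_s_one_solar_mass)  # fall from the horizon
print(f"horizon to r = 0 (1 solar mass): {tau*1e6:.1f} microseconds")
# For r = r_s the expression reduces to 2*r_s/(3*c), about 6.6 microseconds here.
```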
Local and delayed velocities Unlike in classical mechanics, in Schwarzschild coordinates and are not the radial and transverse components of the local velocity (relative to a stationary observer), instead they give the components for the celerity which are related to by for the radial and for the transverse component of motion, with . The coordinate bookkeeper far away from the scene observes the shapiro-delayed velocity , which is given by the relation and . The time dilation factor between the bookkeeper and the moving test-particle can also be put into the form where the numerator is the gravitational, and the denominator is the kinematic component of the time dilation. For a particle falling in from infinity the left factor equals the right factor, since the in-falling velocity matches the escape velocity in this case. The two constants angular momentum and total energy of a test-particle with mass are in terms of and where and For massive testparticles is the Lorentz factor and is the proper time, while for massless particles like photons is set to and takes the role of an affine parameter. If the particle is massless is replaced with and with , where is the Planck constant and the locally observed frequency. Exact solution using elliptic functions The fundamental equation of the orbit is easier to solve if it is expressed in terms of the inverse radius The right-hand side of this equation is a cubic polynomial, which has three roots, denoted here as , , and The sum of the three roots equals the coefficient of the term A cubic polynomial with real coefficients can either have three real roots, or one real root and two complex conjugate roots. If all three roots are real numbers, the roots are labeled so that . If instead there is only one real root, then that is denoted as ; the complex conjugate roots are labeled and . Using Descartes' rule of signs, there can be at most one negative root; is negative if and only if . As discussed below, the roots are useful in determining the types of possible orbits. Given this labeling of the roots, the solution of the fundamental orbital equation is where represents the function (one of the Jacobi elliptic functions) and is a constant of integration reflecting the initial position. The elliptic modulus of this elliptic function is given by the formula Newtonian limit To recover the Newtonian solution for the planetary orbits, one takes the limit as the Schwarzschild radius goes to zero. In this case, the third root becomes roughly , and much larger than or . Therefore, the modulus tends to zero; in that limit, becomes the trigonometric sine function Consistent with Newton's solutions for planetary motions, this formula describes a focal conic of eccentricity If is a positive real number, then the orbit is an ellipse where and represent the distances of furthest and closest approach, respectively. If is zero or a negative real number, the orbit is a parabola or a hyperbola, respectively. In these latter two cases, represents the distance of closest approach; since the orbit goes to infinity (), there is no distance of furthest approach. Roots and overview of possible orbits A root represents a point of the orbit where the derivative vanishes, i.e., where . At such a turning point, reaches a maximum, a minimum, or an inflection point, depending on the value of the second derivative, which is given by the formula If all three roots are distinct real numbers, the second derivative is positive, negative, and positive at u1, u2, and u3, respectively. 
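Since the right-hand side of the orbit equation is a cubic in u = 1/r, its three roots and the elliptic modulus built from them can be found numerically. The sketch below is a hedged illustration: it assumes the cubic in the form r_s·u³ − u² + (r_s/a²)·u + (1/b² − 1/a²), consistent with the length scales a and b defined earlier, and the common convention k² = (u₂ − u₁)/(u₃ − u₁) for the modulus; the orbital numbers are illustrative, roughly Mercury-like values, not data taken from this article.

```python
import numpy as np

def orbit_cubic_roots(r_s, a, b):
    """Sorted roots u1 <= u2 <= u3 of the cubic right-hand side of
    (du/dphi)**2 = 1/b**2 - (1 - u*r_s) * (1/a**2 + u**2),   u = 1/r,
    i.e. of  r_s*u**3 - u**2 + (r_s/a**2)*u + (1/b**2 - 1/a**2).
    (This form is assumed from the definitions of a and b in the text.)"""
    coeffs = [r_s, -1.0, r_s / a**2, 1.0 / b**2 - 1.0 / a**2]
    return np.sort(np.roots(coeffs))

# Illustrative, roughly Mercury-like numbers (assumed, for demonstration only).
GM_sun = 1.327e20                 # m^3 s^-2
c      = 2.998e8                  # m/s
r_s    = 2.0 * GM_sun / c**2
A, e   = 5.79e10, 0.2056          # semi-major axis (m) and eccentricity
h      = np.sqrt(GM_sun * A * (1.0 - e**2))   # Newtonian specific angular momentum
E      = 1.0 - GM_sun / (2.0 * A * c**2)      # energy per unit rest energy, leading order
a, b   = h / c, (h / c) / E

u1, u2, u3 = orbit_cubic_roots(r_s, a, b)
print("apoapsis  ~ 1/u1 =", 1.0 / u1, "m")
print("periapsis ~ 1/u2 =", 1.0 / u2, "m")
print("elliptic modulus squared k^2 =", (u2 - u1) / (u3 - u1))
# For a bound orbit all three roots are real and positive: u oscillates between
# u1 and u2, while u3 ~ 1/r_s marks the relativistic inner turning point.
```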
It follows that a graph of u versus φ may either oscillate between u1 and u2, or it may move away from u3 towards infinity (which corresponds to r going to zero). If u1 is negative, only part of an "oscillation" will actually occur. This corresponds to the particle coming from infinity, getting near the central mass, and then moving away again toward infinity, like the hyperbolic trajectory in the classical solution. If the particle has just the right amount of energy for its angular momentum, u2 and u3 will merge. There are three solutions in this case. The orbit may spiral in to , approaching that radius as (asymptotically) a decreasing exponential in φ, , or . Or one can have a circular orbit at that radius. Or one can have an orbit that spirals down from that radius to the central point. The radius in question is called the inner radius and is between and 3 times rs. A circular orbit also results when is equal to , and this is called the outer radius. These different types of orbits are discussed below. If the particle comes at the central mass with sufficient energy and sufficiently low angular momentum then only will be real. This corresponds to the particle falling into a black hole. The orbit spirals in with a finite change in φ. Precession of orbits The function sn and its square sn2 have periods of 4K and 2K, respectively, where K is defined by the equation Therefore, the change in φ over one oscillation of (or, equivalently, one oscillation of ) equals In the classical limit, u3 approaches and is much larger than or . Hence, is approximately For the same reasons, the denominator of Δφ is approximately Since the modulus is close to zero, the period K can be expanded in powers of ; to lowest order, this expansion yields Substituting these approximations into the formula for Δφ yields a formula for angular advance per radial oscillation For an elliptical orbit, and represent the inverses of the longest and shortest distances, respectively. These can be expressed in terms of the ellipse's semi-major axis and its orbital eccentricity , giving Substituting the definition of gives the final equation Bending of light by gravity In the limit as the particle mass m goes to zero (or, equivalently if the light is heading directly toward the central mass, as the length-scale a goes to infinity), the equation for the orbit becomes Expanding in powers of , the leading order term in this formula gives the approximate angular deflection δφ for a massless particle coming in from infinity and going back out to infinity: Here, is the impact parameter, somewhat greater than the distance of closest approach, : Although this formula is approximate, it is accurate for most measurements of gravitational lensing, due to the smallness of the ratio . For light grazing the surface of the sun, the approximate angular deflection is roughly 1.75 arcseconds, roughly one millionth part of a circle. More generally, the geodesics of a photon with radial coordinate can be calculated as follows, by applying the equation The equation can be derived as which leads to This equation with second derivative can be numerically integrated as follows by a 4th order Runge-Kutta method, considering a step size and with: , , and . The value at the next step is and the value at the next step is The step can be chosen to be constant or adaptive, depending on the accuracy required on . 
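The numerical scheme sketched above can be made concrete. The following Python sketch, an added illustration rather than the article's own code, integrates the photon orbit equation d²u/dφ² = (3/2)·r_s·u² − u (the standard null-geodesic equation in terms of u = 1/r, assumed here since the explicit formulas are not reproduced in the text) with a fixed-step classical Runge–Kutta method, and compares the resulting bending angle for a ray grazing the Sun with the weak-field estimate 2·r_s/b ≈ 1.75 arcseconds.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.96e8       # m

r_s = 2 * G * M_sun / c**2          # Schwarzschild radius of the Sun, ~2.95 km

def rhs(y, r_s):
    """Right-hand side of d^2u/dphi^2 = (3/2)*r_s*u**2 - u, written as a
    first-order system y = (u, du/dphi)."""
    u, up = y
    return np.array([up, 1.5 * r_s * u**2 - u])

def deflection(b, r_s, h=1e-4):
    """Total bending angle for a photon with impact parameter b (assumed well
    outside the photon sphere), starting at u = 0 with du/dphi = 1/b and
    integrated with classical RK4 until u returns to zero."""
    y = np.array([0.0, 1.0 / b])
    phi = 0.0
    while True:
        k1 = rhs(y, r_s)
        k2 = rhs(y + 0.5 * h * k1, r_s)
        k3 = rhs(y + 0.5 * h * k2, r_s)
        k4 = rhs(y + h * k3, r_s)
        y_new = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        phi += h
        if y_new[0] < 0.0 and phi > np.pi / 2:      # photon has escaped back to u = 0
            frac = y[0] / (y[0] - y_new[0])         # linear interpolation to the crossing
            return (phi - h) + frac * h - np.pi
        y = y_new

delta = deflection(R_sun, r_s)
print(f"numerical deflection : {np.degrees(delta) * 3600:.2f} arcsec")
print(f"weak-field formula   : {np.degrees(2 * r_s / R_sun) * 3600:.2f} arcsec")  # 4GM/(c^2 b)
```

A fixed step of 10⁻⁴ rad is ample here because a ray grazing the Sun is firmly in the weak-field regime; the adaptive step mentioned in the text would matter only for rays passing much closer to r_s.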
Relation to Newtonian physics Effective radial potential energy The equation of motion for the particle derived above can be rewritten using the definition of the Schwarzschild radius rs as which is equivalent to a particle moving in a one-dimensional effective potential The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution, where A is the semi-major axis and e is the eccentricity. The third term is attractive and dominates at small r, giving a critical inner radius rinner at which a particle is drawn inexorably inwards to r = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the length-scale a defined above. Circular orbits and their stability The effective potential can be re-written in terms of the length-scale a. Circular orbits are possible when the effective force is zero, i.e., when the two attractive forces — Newtonian gravity (first term) and the attraction unique to general relativity (third term) — are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as rinner and router, which are obtained using the quadratic formula. The inner radius rinner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from rinner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem. When a is much greater than rs (the classical case), these formulae become approximately Substituting the definitions of a and rs into router yields the classical formula for a particle of mass m orbiting a body of mass M, where ωφ is the orbital angular speed of the particle. This formula is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force, where μ is the reduced mass. In our notation, the classical orbital angular speed equals At the other extreme, when a² approaches 3rs² from above, the two radii converge to a single value The quadratic solutions above ensure that router is always greater than 3rs, whereas rinner lies between 3rs/2 and 3rs. Circular orbits smaller than 3rs/2 are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at rinner = 3rs/2. The sphere of this radius is sometimes known as the photon sphere. Precession of elliptical orbits The orbital precession rate may be derived using this radial effective potential V. A small radial deviation from a circular orbit of radius router will oscillate stably with an angular frequency which equals Taking the square root of both sides and performing a Taylor series expansion yields Multiplying by the period T of one revolution gives the precession of the orbit per revolution where we have used ωφT = 2π and the definition of the length-scale a.
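Written out, the per-revolution precession obtained this way is δφ ≈ 6πGM/(c²A(1−e²)) = 3πrs/(A(1−e²)). The sketch below is not part of the article's derivation; it simply evaluates this expression with approximate textbook orbital elements for Mercury.

```python
import math

G, c = 6.674e-11, 2.998e8            # SI units
M_sun = 1.989e30                     # kg
A, e = 5.791e10, 0.2056              # Mercury: semi-major axis (m) and eccentricity
T_days = 87.969                      # Mercury's orbital period in days

rs = 2 * G * M_sun / c**2                          # Schwarzschild radius of the Sun, ~2.95 km
dphi = 3 * math.pi * rs / (A * (1 - e**2))         # precession per revolution, in radians

orbits_per_century = 100 * 365.25 / T_days
print(math.degrees(dphi) * 3600)                         # ~0.10 arcsec per revolution
print(math.degrees(dphi * orbits_per_century) * 3600)    # ~43 arcsec per century
```

Accumulated over the roughly 415 revolutions Mercury completes per century, this reproduces the anomalous perihelion advance of about 43 arcseconds per century historically used as a test of general relativity.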
Substituting the definition of the Schwarzschild radius rs gives This may be simplified using the elliptical orbit's semiaxis A and eccentricity e related by the formula to give the precession angle Mathematical derivations of the orbital equation Christoffel symbols The non-vanishing Christoffel symbols for the Schwarzschild-metric are: Geodesic equation According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is where Γ represents the Christoffel symbol and the variable parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor , or rather on how it changes with position. The variable is a constant multiple of the proper time for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable . Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed. Therefore, to solve for the motion of a particle, the most straightforward way is to solve the geodesic equation, an approach adopted by Einstein and others. The Schwarzschild metric may be written as where the two functions and its reciprocal are defined for brevity. From this metric, the Christoffel symbols may be calculated, and the results substituted into the geodesic equations It may be verified that is a valid solution by substitution into the first of these four equations. By symmetry, the orbit must be planar, and we are free to arrange the coordinate frame so that the equatorial plane is the plane of the orbit. This solution simplifies the second and fourth equations. To solve the second and third equations, it suffices to divide them by and , respectively. which yields two constants of motion. Lagrangian approach Because test particles follow geodesics in a fixed metric, the orbits of those particles may be determined using the calculus of variations, also called the Lagrangian approach. Geodesics in space-time are defined as curves for which small local variations in their coordinates (while holding their endpoints events fixed) make no significant change in their overall length s. This may be expressed mathematically using the calculus of variations where τ is the proper time, s = cτ is the arc-length in space-time and T is defined as in analogy with kinetic energy. If the derivative with respect to proper time is represented by a dot for brevity T may be written as Constant factors (such as c or the square root of two) don't affect the answer to the variational problem; therefore, taking the variation inside the integral yields Hamilton's principle The solution of the variational problem is given by Lagrange's equations When applied to t and φ, these equations reveal two constants of motion which may be expressed in terms of two constant length-scales, and As shown above, substitution of these equations into the definition of the Schwarzschild metric yields the equation for the orbit. 
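The statement that t and φ are cyclic coordinates, so that their conjugate momenta are conserved, can be checked symbolically. The following sketch is illustrative only: it assumes the equatorial-plane (θ = π/2) form of the quantity 2T built from the Schwarzschild metric, with dots denoting derivatives with respect to proper time; the overall signs and factors of c depend on the signature convention.

```python
import sympy as sp

c, rs = sp.symbols('c r_s', positive=True)
r, tdot, rdot, phidot = sp.symbols('r tdot rdot phidot')

w = 1 - rs / r
# 2T for motion in the equatorial plane (theta = pi/2); dotted symbols are d/dtau
T2 = c**2 * w * tdot**2 - rdot**2 / w - r**2 * phidot**2

# t and phi themselves do not appear in T2 (they are cyclic coordinates),
# so the conjugate momenta below are constants of motion along the geodesic.
p_t = sp.simplify(sp.diff(T2, tdot) / 2)        # c**2*(1 - r_s/r)*tdot
p_phi = sp.simplify(sp.diff(T2, phidot) / 2)    # -r**2*phidot
print(p_t, p_phi)
```

Up to constant factors, these two quantities correspond to the energy-like and angular-momentum-like constants (equivalently, the two constant length-scales) used throughout the article.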
Hamiltonian approach A Lagrangian solution can be recast into an equivalent Hamiltonian form. In this case, the Hamiltonian is given by Once again, the orbit may be restricted to by symmetry. Since and do not appear in the Hamiltonian, their conjugate momenta are constant; they may be expressed in terms of the speed of light and two constant length-scales and The derivatives with respect to proper time are given by Dividing the first equation by the second yields the orbital equation The radial momentum pr can be expressed in terms of r using the constancy of the Hamiltonian ; this yields the fundamental orbital equation Hamilton–Jacobi approach The orbital equation can be derived from the Hamilton–Jacobi equation. The advantage of this approach is that it equates the motion of the particle with the propagation of a wave, and leads neatly into the derivation of the deflection of light by gravity in general relativity, through Fermat's principle. The basic idea is that, due to gravitational slowing of time, parts of a wave-front closer to a gravitating mass move more slowly than those further away, thus bending the direction of the wave-front's propagation. Using general covariance, the Hamilton–Jacobi equation for a single particle of unit mass can be expressed in arbitrary coordinates as This is equivalent to the Hamiltonian formulation above, with the partial derivatives of the action taking the place of the generalized momenta. Using the Schwarzschild metric gμν, this equation becomes where we again orient the spherical coordinate system with the plane of the orbit. The time t and azimuthal angle φ are cyclic coordinates, so that the solution for Hamilton's principal function S can be written where and are the constant generalized momenta. The Hamilton–Jacobi equation gives an integral solution for the radial part Taking the derivative of Hamilton's principal function S with respect to the conserved momentum pφ yields which equals Taking an infinitesimal variation in φ and r yields the fundamental orbital equation where the conserved length-scales a and b are defined by the conserved momenta by the equations Hamilton's principle The action integral for a particle affected only by gravity is where is the proper time and is any smooth parameterization of the particle's world line. If one applies the calculus of variations to this, one again gets the equations for a geodesic. To simplify the calculations, one first takes the variation of the square of the integrand. For the metric and coordinates of this case and assuming that the particle is moving in the equatorial plane , that square is Taking variation of this gives Motion in longitude Vary with respect to longitude only to get Divide by to get the variation of the integrand itself Thus Integrating by parts gives The variation of the longitude is assumed to be zero at the end points, so the first term disappears. The integral can be made nonzero by a perverse choice of unless the other factor inside is zero everywhere. 
So the equation of motion is Motion in time Vary with respect to time only to get Divide by to get the variation of the integrand itself Thus Integrating by parts gives So the equation of motion is Conserved momenta Integrate these equations of motion to determine the constants of integration getting These two equations for the constants of motion (angular momentum) and (energy) can be combined to form one equation that is true even for photons and other massless particles for which the proper time along a geodesic is zero. Radial motion Substituting and into the metric equation (and using ) gives from which one can derive which is the equation of motion for . The dependence of on can be found by dividing this by to get which is true even for particles without mass. If length scales are defined by and then the dependence of on simplifies to See also Classical central-force problem Frame fields in general relativity Kepler problem Two-body problem in general relativity Notes References Bibliography Schwarzschild, K. (1916). Über das Gravitationsfeld eines Massenpunktes nach der Einstein'schen Theorie. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1, 189–196. scan of the original paper text of the original paper, in Wikisource translation by Antoci and Loinger a commentary on the paper, giving a simpler derivation Schwarzschild, K. (1916). Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1, 424-?. (See Gravitation (book).) External links Excerpt from Reflections on Relativity by Kevin Brown. Exact solutions in general relativity
Schwarzschild geodesics
Mathematics
5,356
14,372,575
https://en.wikipedia.org/wiki/Karahafu
The karahafu is a type of curved gable found in Japanese architecture. It is used on Japanese castles, Buddhist temples, and Shinto shrines. Roofing materials such as tile and bark may be used as coverings. The face beneath the gable may be flush with the wall below, or it may terminate on a lower roof. History Although kara (唐) can be translated as meaning "China" or "Tang", this type of roof with undulating bargeboards is an invention of Japanese carpenters in the late Heian period. It was named thus because the word kara could also mean "peculiar" or "elegant", and was often added to names of objects considered grand or intricate regardless of origin. The karahafu developed during the Heian period and is shown in picture scrolls decorating gates, corridors, and palanquins. The first known depiction of a karahafu appears on a miniature shrine in Shōryoin shrine at Hōryū-ji in Nara. The karahafu and its building style (karahafu-zukuri) became increasingly popular during the Kamakura and Muromachi periods, when Japan witnessed a new wave of influences from the Asian continent. During the Kamakura period, Zen Buddhism spread to Japan and the karahafu was employed in many Zen temples. Initially, the karahafu was used only in temples and aristocratic gateways, but starting from the beginning of the Azuchi–Momoyama period, it became an important architectural element in the construction of a daimyō's mansions and castles. The daimyō's gateway with a karahafu roof was reserved for the shōgun during his onari visits to the retainer, or for the reception of the emperor at shogunate establishments. A structure associated with these social connections naturally imparted special meaning. Gates with a karahafu roof, the karamon (mon meaning "gate"), became a means to proclaim the prestige of a building and functioned as a symbol of both religious and secular architecture. In the Tokugawa shogunate, the karamon gates were a powerful symbol of authority reflected in architecture. See also Japanese architecture Japanese castle List of roof shapes Notes References Coaldrake, William. (1996). Architecture and Authority in Japan. London/New York: Routledge. Sarvimaki, Marja. (2000). Structures, Symbols and Meanings: Chinese and Korean Influence on Japanese Architecture. Helsinki University of Technology, Department of Architecture. Sarvimaki, Marja. (2003). "Layouts and Layers: Spatial Arrangements in Japan and Korea". Sungkyun Journal of East Asian Studies, Volume 3, No. 2. Retrieved on May 30, 2009. Parent, Mary Neighbour. (2003). Japanese Architecture and Art Net Users System. Japanese architectural features Roofs
Karahafu
Technology,Engineering
581
40,143,912
https://en.wikipedia.org/wiki/Austrocortirubin
Austrocortirubin is an antibacterial metabolite found in the Dermocybe splendida mushroom. Notes Antimicrobials Natural phenols Tetrahydroxyanthraquinones Ethers 3-Hydroxypropenals within hydroxyquinones
Austrocortirubin
Chemistry,Biology
60
24,175,858
https://en.wikipedia.org/wiki/Solasodine
Solasodine is a poisonous alkaloid chemical compound that occurs in plants of the family Solanaceae such as potatoes and tomatoes. Solasonine and solamargine are glycoalkaloid derivatives of solasodine. Solasodine is teratogenic to hamster fetuses in a dose of 1200 to 1600 mg/kg. A 2013 literature survey found that various studies have indicated that solasodine may have diuretic, anticancer, antifungal, cardiotonic, antispermatogenetic, antiandrogenic, immunomodulatory, antipyretic and/or various other effects on central nervous system. Uses It is commercially used as a precursor for the production of complex steroidal compounds such as contraceptive pills, via a 16-DPA intermediate. See also Solanum mauritianum References Steroidal alkaloids Plant toxins Steroidal alkaloids found in Solanaceae Spiro compounds
Solasodine
Chemistry
208
22,066,537
https://en.wikipedia.org/wiki/Sperm%20guidance
Sperm guidance is the process by which sperm cells (spermatozoa) are directed to the oocyte (egg) for the aim of fertilization. In the case of marine invertebrates the guidance is done by chemotaxis. In the case of mammals, it appears to be done by chemotaxis, thermotaxis and rheotaxis. Background Since the discovery of sperm attraction to the female gametes in ferns over a century ago, sperm guidance in the form of sperm chemotaxis has been established in a large variety of species Although sperm chemotaxis is prevalent throughout the Metazoa kingdom, from marine species with external fertilization such as sea urchins and corals, to humans, much of the current information on sperm chemotaxis is derived from studies of marine invertebrates, primarily sea urchin and starfish. As a matter of fact, until not too long ago, the dogma was that, in mammals, guidance of spermatozoa to the oocyte was unnecessary. This was due to the common belief that, following ejaculation into the female genital tract, large numbers of spermatozoa 'race' towards the oocyte and compete to fertilize it. This belief was taken apart when it became clear that only few of the ejaculated spermatozoa — in humans, only ~1 of every million spermatozoa — succeed in entering the oviducts (fallopian tubes) and when more recent studies showed that mammalian spermatozoa employ at least three different mechanisms, each of which can potentially serve as a guidance mechanism: chemotaxis, thermotaxis and rheotaxis. Sperm guidance in non-mammalian species Sperm guidance in non-mammalian species is performed by chemotaxis. The oocyte secretes a chemoattractant, which, as it diffuses away, forms a concentration gradient: a high concentration close to the egg, and a gradually lower concentration as the distance from the oocyte increases. Spermatozoa can sense this chemoattractant and orient their swimming direction up the concentration gradient towards the oocyte. Sperm chemotaxis was demonstrated in a large number of non-mammalian species, from marine invertebrates to frogs. Chemoattractants The sperm chemoattractants in non-mammalian species vary to a large extent. Some examples are shown in Table 1. So far, most sperm chemoattractants that have been identified in non-mammalian species are peptides or low-molecular-weight proteins (1–20 kDa), which are heat stable and sensitive to proteases. Exceptions to this rule are the sperm chemoattractants of corals, ascidians, plants such as ferns, and algae (Table 1). Table 1. Some sperm chemoattractants in non-mammalian species* Taken from reference. Species specificity The variety of chemoattractants raises the question of species specificity with respect to the chemoattractant identity. There is no single rule for chemoattractant-related specificity. Thus, in some groups of marine invertebrates (e.g., hydromedusae and certain ophiuroids), the specificity is very high; in others (e.g., starfish), the specificity is at the family level and, within the family, there is no specificity. In mollusks, there appears to be no specificity at all. Likewise, in plants, a unique simple compound [e.g., fucoserratene — a linear, unsaturated alkene (1,3-trans 5-cis-octatriene)] might be a chemoattractant for various species. Behavioral mechanism Here, too, there is no single rule. In some species (for example, in hydroids like Campanularia or tunicate like Ciona), the swimming direction of the spermatozoa changes abruptly towards the chemoattractant source. 
In others (for example, in sea urchin, hydromedusa, fern, or fish such as Japanese bitterlings), the approach to the chemoattractant source is indirect and the movement is by repetitive loops of small radii. In some species (for example, herring or the ascidian Ciona) activation of motility precedes chemotaxis. In chemotaxis, cells may either sense a temporal gradient of the chemoattractant, comparing the occupancy of its receptors at different time points (as do bacteria), or they may detect a spatial gradient, comparing the occupancy of receptors at different locations along the cell (as do leukocytes). In the best-studied species, sea urchin, the spermatozoa sense a temporal gradient and respond to it with a transient increase in flagellar asymmetry. The outcome is a turn in the swimming path, followed by a period of straight swimming, leading to the observed epicycloid-like movements directed towards the chemoattractant source. Molecular mechanism The molecular mechanism of sperm chemotaxis is still not fully known. The current knowledge is mainly based on studies in the sea urchin Arbacia punctulata, where binding of the chemoattractant resact (Table 1) to its receptor, a guanylyl cyclase, activates cGMP synthesis (Figure 1). The resulting rise of cGMP possibly activates K+-selective ion channels. The consequential hyperpolarization activates hyperpolarization-activated and cyclic nucleotide-gated (HCN) channels. The depolarizing inward current through HCN channels possibly activates voltage-activated Ca2+ channels, resulting in elevation of intracellular Ca2+. This rise leads to flagellar asymmetry and, consequently, a turn of the sperm cell. Figure 1. A model of the signal-transduction pathway during sperm chemotaxis of the sea urchin Arbacia punctulata. Binding of a chemoattractant (ligand) to the receptor — a membrane-bound guanylyl cyclase (GC) — activates the synthesis of cGMP from GTP. Cyclic GMP possibly opens cyclic nucleotide-gated (CNG) K+-selective channels, thereby causing hyperpolarization of the membrane. The cGMP signal is terminated by the hydrolysis of cGMP through phosphodiesterase (PDE) activity and inactivation of GC. On hyperpolarization, hyperpolarization-activated and cyclic nucleotide-gated (HCN) channels allow the influx of Na+ that leads to depolarization and thereby causes a rapid Ca2+ entry through voltage-activated Ca2+ channels (Cav), Ca2+ ions interact by unknown mechanisms with the axoneme of the flagellum and cause an increase of the asymmetry of flagellar beat and eventually a turn or bend in the swimming trajectory. Ca2+ is removed from the flagellum by a Na+/Ca2+ exchange mechanism. [Taken from ref.] Sperm guidance in mammals Three different guidance mechanisms have been proposed to occur in the mammalian oviduct: thermotaxis, rheotaxis, and chemotaxis. Indeed, due to obvious restrictions, all these mechanisms were demonstrated in vitro only. However, the discoveries of proper stimuli in the female – an ovulation-dependent temperature gradient in the oviduct, post-coitus oviductal fluid flow in female mice, and sperm chemoattractants secreted from the oocyte and its surrounding cumulus cells, respectively – strongly suggest the mutual occurrence of these mechanisms in vivo. I. Chemotaxis Following the findings that human spermatozoa accumulate in follicular fluid and that there is a remarkable correlation between this in vitro accumulation and oocyte fertilization, chemotaxis was substantiated as the cause of this accumulation. 
Sperm chemotaxis was later also demonstrated in mice and rabbits. In addition, sperm accumulation in follicular fluid (but without substantiating that it truly reflects chemotaxis) was demonstrated in horses and pigs. A key feature of sperm chemotaxis in humans is that this process is restricted to capacitated cells — the only cells that possess the ability to penetrate the oocyte and fertilize it. This raised the possibility that, in mammals, chemotaxis is not solely a guidance mechanism but it is also a mechanism of sperm selection. Importantly, the fraction of capacitated (and, hence, chemotactically responsive) spermatozoa is low (~10% in humans), the life span of the capacitated/chemotactic state is short (1–4 hours in humans), a spermatozoon can be at this state only once in its lifetime, and sperm individuals become capacitated/chemotactic at different time points, resulting in continuous replacement of capacitated/chemotactic cells within the sperm population, i.e., prolonged availability of capacitated cells. These sperm features raised the possibility that prolonging the time period, during which capacitated spermatozoa can be found in the female genital tract, is a mechanism, evolved in humans, to compensate for the lack of coordination between insemination and ovulation. Chemotaxis is a short-range guidance mechanism. As such, it can guide spermatozoa for short distances only, estimated at the order of millimeters. Chemoattractants In humans, there are at least two different origins of sperm chemoattractants. One is the cumulus cells that surround the oocyte, and the other is the mature oocyte itself. The chemoattractant secreted from the cumulus cells is the steroid progesterone, shown to be effective at the picomolar range. The chemoattractant secreted from the oocyte is even more potent. It is a hydrophobic non-peptide molecule which, when secreted from the oocyte, is in complex with a carrier protein Additional compounds have been shown to act as chemoattractants for mammalian spermatozoa. They include the chemokine CCL20, atrial natriuretic peptide (ANP), specific odorants, natriuretic peptide type C (NPPC), and allurin, to mention a few. It is reasonable to assume that not all of them are physiologically relevant. Species specificity Species specificity was not detected in experiments that compared the chemotactic responsiveness of human and rabbit spermatozoa to follicular fluids or egg-conditioned media obtained from human, bovine, and rabbit. The subsequent findings that cumulus cells of both human and rabbit (and, probably, of other mammals as well) secrete the chemoattractant progesterone is sufficient to account for the lack of specificity in the chemotactic response of mammalian spermatozoa. Behavioral mechanism Mammalian spermatozoa, like sea-urchin spermatozoa, appear to sense the chemoattractant gradient temporally (comparing receptor occupancy over time) rather than spatially (comparing receptor occupancy over space). This is because the establishment of a temporal gradient in the absence of spatial gradient, achieved by mixing human spermatozoa with a chemoattractant or by photorelease of a chemoattractant from its caged compound, results in delayed transient changes in swimming behavior that involve increased frequency of turns and hyperactivation events. 
On the basis of these observations and the finding that the level of hyperactivation events is reduced when chemotactically responsive spermatozoa swim in a spatial chemoattractant gradient it was proposed that turns and hyperactivation events are suppressed when capacitated spermatozoa swim up a chemoattractant gradient, and vice versa when they swim down a gradient. In other words, human spermatozoa approach chemoattractants by modulating the frequency of turns and hyperactivation events, similarly to Escherichia coli bacteria. Molecular mechanism As in non-mammalian species, the end signal in chemotaxis for changing the direction of swimming is Ca2+. The discovery of progesterone as a chemoattractant led to the identification of its receptor on the sperm surface – CatSper, a Ca2+ channel present exclusively in the tail of mammalian spermatozoa. (Note, though, that progesterone only stimulates human CatSper but not mouse CatSper. Consistently, sperm chemotaxis to progesterone was not found in mice.) However, the molecular steps subsequent to CatSper activation by progesterone are obscure, though the involvement of trans-membrane adenylyl cyclase, cAMP and protein kinase A as well as soluble guanylyl cyclase, cGMP, inositol trisphosphate receptor and store-operated Ca2+ channel was proposed. II. Thermotaxis The realization that sperm chemotaxis can guide spermatozoa for short distances only, triggered a search for potential long-range guidance mechanisms. The findings that, at least in rabbits and pigs, a temperature difference exists within the oviduct, and that this temperature difference is established at ovulation in rabbits due to a temperature drop in the oviduct near the junction with the uterus, creating a temperature gradient between the sperm storage site and the fertilization site in the oviduct, led to a study of whether mammalian spermatozoa can respond to a temperature gradient by thermotaxis. Establishing sperm thermotaxis as an active process Mammalian sperm thermotaxis was, hitherto, demonstrated in three species: humans, rabbits, and mice. This was done by two methods. One involved a Zigmond chamber, modified to make the temperature in each well separately controllable and measurable. A linear temperature gradient was established between the wells and the swimming of spermatozoa in this gradient was analyzed. A small fraction of the spermatozoa (at the order of ~10%), shown to be the capacitated cells, biased their swimming direction according to the gradient, moving towards the warmer temperature. The other method involved two- or three-compartment separation tube placed within a thermoseparation device that maintains a linear temperature gradient. Sperm accumulation at the warmer end of the separation tube was much higher than the accumulation at the same temperature but in the absence of a temperature gradient. This gradient-dependent sperm accumulation was observed over a wide temperature range (29-41 °C). Since temperature affects almost every process, much attention has been devoted to the question of whether the measurements, mentioned just above, truly demonstrate thermotaxis or whether they reflect another temperature-dependent process. The most pronounced effect of temperature in liquid is convection, which raised the concern that the apparent thermotactic response could have been a reflection of a passive drift in the liquid current or a rheotactic response to the current (rather than to the temperature gradient per se). 
Another concern was that the temperature could have changed the local pH of the buffer solution in which the spermatozoa are suspended. This could generate a pH gradient along the temperature gradient, and the spermatozoa might have responded to the formed pH gradient by chemotaxis. However, careful experimental examinations of all these possibilities with proper controls demonstrated that the measured responses to temperature are true thermotactic responses and that they are not a reflection of any other temperature-sensitive process, including rheotaxis and chemotaxis. Behavioral mechanism of mammalian sperm thermotaxis The behavioral mechanism of sperm thermotaxis has been so far only investigated in human spermatozoa. Like the behavioral mechanisms of bacterial chemotaxis and human sperm chemotaxis, the behavioral mechanism of human sperm thermotaxis appears to be stochastic rather than deterministic. Capacitated human spermatozoa swim in rather straight lines interrupted by turns and brief episodes of hyperactivation. Each such episode results in swimming in a new direction. When the spermatozoa sense a decrease in temperature, the frequency of turns and hyperactivation events increases due to increased flagellar-wave amplitude that results in enhanced side-to-side head displacement. With time, this response undergoes partial adaptation. The opposite happens in response to an increase in temperature. This suggests that when capacitated spermatozoa swim up a temperature gradient, turns are repressed and the spermatozoa continue swimming in the gradient direction. When they happen to swim down the gradient, they turn again and again until their swimming direction is again up the gradient. Temperature sensing The response of spermatozoa to temporal temperature changes even when the temperature is kept constant spatially suggests that, as in the case of human sperm chemotaxis, sperm thermotaxis involves temporal gradient sensing. In other words, spermatozoa apparently compare the temperature (or a temperature-dependent function) between consecutive time points. This, however, does not exclude the occurrence of spatial temperature sensing in addition to temporal sensing. Human spermatozoa can respond thermotactically within a wide temperature range (at least 29–41 °C). Within this range they preferentially accumulate in warmer temperatures rather than at a single specific, preferred temperature. Amazingly, they can sense and thermotactically respond to temperature gradients as low as <0.014 °C/mm. This means that when human spermatozoa swim a distance that equals their body length (~46 μm) they respond to a temperature difference of <0.0006 °C! Molecular mechanism The molecular mechanism underlying thermotaxis, in general, and thermosensing with such extreme sensitivity, in particular, is obscure. It is known that, unlike other recognized thermosensors in mammals, the thermosensors for sperm thermotaxis do not seem to be temperature-sensitive ion channels. They are rather opsins, known to be G-protein-coupled receptors that act as photosensors in vision. The opsins are present in spermatozoa at specific sites, which depend on the species and the opsin type. They are involved in sperm thermotaxis via at least two signaling pathways: a phospholipase C signaling pathway and a cyclic-nucleotide pathway. 
The former was shown by pharmacological means in human spermatozoa to involve the enzyme phospholipase C, an inositol trisphosphate receptor located on internal calcium stores, the calcium channel TRPC3, and intracellular calcium. The cyclic-nucleotide pathway was, hitherto, shown to involve phosphodiesterase. Blocking both pathways fully inhibits sperm thermotaxis. III. Rheotaxis When human and mouse spermatozoa are exposed to a fluid flow, roughly one half of them (i.e., both capacitated and noncapacitated spermatozoa) reorient and swim against the current. The flow, which is prolactin-triggered oviductal fluid secretion, is generated in female mice within 4 h of sexual stimulation and coitus. Thus, rheotaxis orients spermatozoa towards the fertilization site. It was proposed that capacitated spermatozoa might detach from the oviductal surface faster than non-capacitated spermatozoa, enabling them to swim into the main current. To understand the mechanism of sperm turning in rheotaxis, quantitative analysis of human sperm flagellar behavior during rheotaxis turning was carried out. The results revealed, both at the single cell and population levels, that there is no significant difference in flagellar beating between rheotaxis turning spermatozoa and free-swimming spermatozoa. This finding taken together with the constant internal Ca2+ signal, measured during rheotaxis turning, demonstrated that, in contrast to the active process of chemotaxis and thermotaxis, human sperm rheotaxis is a passive process and no flow sensing is involved. All mechanisms combined Like in any other highly essential system in biology, mammalian sperm guidance is expected to involve redundancy. Indeed, at least three guidance mechanisms are likely to act in the female genital tract, two active mechanisms — chemotaxis and thermotaxis, and a passive mechanism — rheotaxis. When one of these mechanisms is not functional for any reason, guidance is not expected to be lost and the cells should still be able to navigate to the oocyte. This resembles guidance of migrating birds, where the birds' navigation is unaffected when one of the guidance mechanisms is not functional. It has been suggested that capacitated spermatozoa, released from the sperm storage site at the isthmus, may be first actively guided by thermotaxis from the cooler sperm storage site towards the warmer fertilization site (Figure 2). Two passive processes, rheotaxis and contractions of the oviduct may assist the spermatozoa to reach there. At this location, the spermatozoa may be chemotactically guided to the oocyte-cumulus complex by the gradient of progesterone, secreted from the cumulus cells. In addition, progesterone may inwardly guide spermatozoa, already present within the periphery of the cumulus oophorus. Spermatozoa that are already deep within the cumulus oophorus may sense the more potent chemoattractant that is secreted from the oocyte and chemotactically guide themselves to the oocyte according to the gradient of this chemoattractant. It should be borne in mind, however, that this is only a model. Figure 2. A simplified scheme describing the suggested sequence of active sperm guidance mechanisms in mammals. In addition, two passive processes, sperm rheotaxis and contractions of the oviduct, may assist sperm movement towards the fertilization site. A number of observations point to the possibility that chemotaxis and thermotaxis also occur at lower parts of the female genital tract. 
For example, small, gradual estrus cycle-correlated temperature increase was measured in cows from the vagina towards the uterine horns, and a gradient of natriuretic peptide precursor A, shown to be a chemoattractant for mouse spermatozoa, was found, in decreasing concentration order, in the ampulla, isthmus, and uterotubal junction. The physiological functions, if any, of these chemical and temperature gradients are yet to be resolved. Potential clinical applications Sperm guidance by either chemotaxis or thermotaxis can potentially be used to obtain sperm populations that are enriched with capacitated spermatozoa for in vitro fertilization procedures. Indeed, sperm populations selected by thermotaxis were recently shown to have much higher DNA integrity and lower chromatin compaction than unselected spermatozoa and, in mice, to give rise to more and better embryos through intracytoplasmic sperm injection (ICSI), doubling the number of successful pregnancies. Chemotaxis and thermotaxis can also be exploited possibly as a diagnostic tool to assess sperm quality. In addition, these processes can potentially be used, in the long run, as a means of contraception by interfering with the normal process of fertilization. References Semen Cell biology
Sperm guidance
Biology
5,060
77,732
https://en.wikipedia.org/wiki/Telamon
In Greek mythology, Telamon (; Ancient Greek: Τελαμών, Telamōn means "broad strap") was the son of King Aeacus of Aegina, and Endeïs, a mountain nymph. The elder brother of Peleus, Telamon sailed alongside Jason as one of his Argonauts, and was present at the hunt for the Calydonian Boar. In the Iliad, he was the father of Greek heroes Ajax the Great and Teucer by different mothers. Some accounts mention a third son of his, Trambelus. He and Peleus were also close friends of Heracles, assisting him on his expeditions against the Amazons and his assault on Troy (see below). In an earlier account recorded by Pherecydes of Athens, Telamon and Peleus were not brothers, but friends. According to this account, Telamon was the son of Actaeus and Glauce, with the latter being the daughter of Cychreus, king of Salamis; and Telamon married Periboea (Eriboea), daughter of King Alcathous of Megara. Mythology After killing their half-brother, Phocus, Telamon and Peleus fled Aegina and made their way to the island of Salamis, where King Cychreus welcomed Telamon and befriended him. Telamon married Cychreus' daughter Periboea, who gave birth to Ajax; sometime later, Cychreus gave Telamon his kingdom. In other versions of the myth Cychreus' daughter is named Glauce, and Periboea is Telamon's second wife, and the daughter of Alcathous. Trojan War Telamon also features in both versions of Heracles' sacking of Troy, which was ruled by King Laomedon (or Tros in the alternate versions). Before the Trojan War, Poseidon sent a sea monster to attack Troy. Tros version In the King Tros version, Heracles (along with Telamon and Oicles) agreed to kill the monster if Tros would give him the horses he received from Zeus as compensation for Zeus' kidnapping Tros' son, Ganymede. Tros agreed; Heracles succeeded and Telamon married Hesione, Tros' daughter, by whom he sired Teucer. Laomedon version In the King Laomedon version, Laomedon planned on sacrificing his daughter Hesione to Poseidon in the hope of appeasing him. Heracles rescued her at the last minute and killed both the monster and Laomedon and Laomedon's sons, except for Ganymede, who was on Mount Olympus, and Podarces, who saved his own life by giving Heracles a golden veil Hesione had made. Telamon took Hesione as a war prize and married her, and she gave birth by him to Teucer. When Ajax later committed suicide at Troy, Telamon banished Teucer from Salamis for failing to bring his brother home. Bibliotheca version In Apollodorus' Library, Telamon was almost killed during the siege of Troy. Telamon was the first one to break through the Trojan wall, which enraged Hercules as he was coveting that glory for himself. Hercules was about to cut him down with his sword when Telamon began to quickly assemble an altar out of nearby stones in honor of Hercules. Hercules was so pleased, after the sack of Troy he gave Telamon Hesione as a wife. Hesione requested that she be able to bring her brother Podarces with her. Hercules would not allow it unless Hesione bought Podarces as a slave. Hesione paid for her brother with a veil. Podarces' name was then changed to Priam – which, according to Greek author Apollodorus, was derived from the Greek phrase "to buy". In architecture In architecture, telamons are colossal male figures used as columns. These are also called atlas, atlantes, or atlantids; they are the male versions of caryatids. The Telamon The "Telamon" (also "Song of Telamon", "Telamon Song", "Telamon-song") is an ancient Greek song (fl. 
5th century BC) only found referred to by name in some ancient Greek plays and later scholia or commentaries. It is usually thought to be a warlike song about Telamon's son Ajax, though some other commentaries thought it to be a mournful song about Telamon himself. It began with: "Son of Telamon, warlike Ajax! They say you are the bravest of the Greeks who came to Troy, next to Achilles." References Sources External links Argonauts Kings in Greek mythology Characters in the Argonautica Mythological Aeginetans Mythological Salaminians Salaminian mythology Columns and entablature
Telamon
Technology
1,063
62,859,875
https://en.wikipedia.org/wiki/Living%20building%20material
A living building material (LBM) is a material used in construction or industrial design that behaves in a way resembling a living organism. Examples include: self-mending biocement, self-replicating concrete replacement, and mycelium-based composites for construction and packaging. Artistic projects include building components and household items. History The development of living building materials began with research of methods for mineralizing concrete, that were inspired by coral mineralization. The use of microbiologically induced calcite precipitation (MICP) in concrete was pioneered by Adolphe et al. in 1990, as a method of applying a protective coating to building façades. In 2007, "Greensulate", a mycelium-based building insulation material was introduced by Ecovative Design, a spin off of research conducted at the Rensselaer Polytechnic Institute. Mycelium composites were later developed for packaging, sound absorption, and structural building materials such as bricks. In the United Kingdom, the Materials for Life (M4L) project was founded at Cardiff University in 2013 to "create a built environment and infrastructure which is a sustainable and resilient system comprising materials and structures that continually monitor, regulate, adapt and repair themselves without the need for external intervention." M4L led to the UK's first self-healing concrete trials. In 2017 the project expanded into a consortium led by the universities of Cardiff, Cambridge, Bath and Bradford, changing its name to Resilient Materials 4 Life (RM4L) and receiving funding from the Engineering and Physical Sciences Research Council. This consortium focuses on four aspects of material engineering: self-healing of cracks at multiple scales; self-healing of time-dependent and cycling loading damage; self-diagnosis and healing of chemical damage; and self-diagnosis and immunization against physical damage. In 2016 the United States Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched the Engineered Living Materials (ELM) program. The goal of this program is to "develop design tools and methods that enable the engineering of structural features into cellular systems that function as living materials, thereby opening up a new design space for building technology... [and] to validate these new methods through the production of living materials that can reproduce, self-organize, and self-heal." In 2017 the ELM program contracted Ecovative Design to produce "a living hybrid composite building material... [to] genetically re-program that living material with responsive functionality [such as] wound repair... [and to] rapidly reuse and redeploy [the] material into new shapes, forms, and applications." In 2020 a research group at the University of Colorado, funded by an ELM grant, published a paper after successfully creating exponentially regenerating concrete. Self-replicating concrete Self-replicating concrete is produced using a mixture of sand and hydrogel, which are used as a growth medium for synechococcus bacteria to grow on. Synthesis and fabrication The sand-hydrogel mixture from which self-replicating concrete is made has a lower pH, lower ionic strength, and lower curing temperatures than a typical concrete mix, allowing it to serve as a growth medium for the bacteria. As the bacteria reproduce they spread through the medium, and biomineralize it with calcium carbonate, which is the main contributor to the overall strength and durability of the material. 
After mineralization the sand-hydrogel compound is strong enough to be used in construction, as concrete or mortar. The bacteria in self-replicating concrete react to humidity changes: they are most active - and reproduce the fastest - in an environment with 100% humidity, though a drop to 50% does not have a large impact on the cellular activity. Lower humidity does result in a stronger material than high humidity. As the bacteria reproduce, their biomineralization activity increases; this allows production capacity to scale exponentially. Properties The structural properties of this material are similar to those of Portland cement-based mortars: it has an elastic modulus of 293.9 MPa and a tensile strength of 3.6 MPa (the minimum required value for Portland-cement based concrete is approximately 3.5 MPa); however, it has a fracture energy of 170 N, which is much less than most standard concrete formulations, which can reach up to several kN. Uses Self-replicating concrete can be used in a variety of applications and environments, but the effect of humidity on the properties of the end material (see above) means that the application of the material must be tailored to its environment. In humid environments the material can be used to fill cracks in roads, walls and sidewalks, seeping into cavities and growing into a solid mass as it sets; while in drier environments it can be used structurally, due to its increased strength in low-humidity environments. Unlike traditional concrete, the production of which releases massive amounts of carbon dioxide to the atmosphere, the bacteria used in self-replicating concrete absorb carbon dioxide, resulting in a lower carbon footprint. This self-replicating concrete is not meant to replace standard concrete, but to create a new class of materials, with a mixture of strength, ecological benefits, and biological functionality. Calcium carbonate biocement Biocement is a sand aggregate material produced through the process of microbiologically induced calcite precipitation (MICP). It is an environmentally friendly material which can be produced using a variety of stocks, from agricultural waste to mine tailings. Synthesis and fabrication Microscopic organisms are the key component in the formation of bioconcrete, as they provide the nucleation site for CaCO3 to precipitate on the surface. Microorganisms such as Sporosarcina pasteurii are useful in this process, as they create highly alkaline environments where dissolved inorganic carbon (DIC) is present at high amounts. These factors are essential for microbiologically induced calcite precipitation (MICP), which is the main mechanism by which bioconcrete is formed. Other organisms that can be used to induce this process include photosynthesizing microorganisms such as microalgae and cyanobacteria, and sulphate-reducing bacteria (SRB) such as Desulfovibrio desulfuricans. Calcium carbonate nucleation depends on four major factors: calcium concentration, DIC concentration, pH, and the availability of nucleation sites. As long as calcium ion concentrations are high enough, microorganisms can create such an environment through processes such as ureolysis. Advancements in optimizing methods to use microorganisms to facilitate carbonate precipitation are rapidly developing. Properties Biocement is able to "self-heal" due to bacteria, calcium lactate, nitrogen, and phosphorus components that are mixed into the material. These components have the ability to remain active in biocement for up to 200 years.
Biocement, like any other concrete, can crack due to external forces and stresses. Unlike in normal concrete, however, the microorganisms in biocement can germinate when introduced to water. Rain, to which biocement is routinely exposed, can supply this water. Once introduced to water, the bacteria activate and feed on the calcium lactate that was part of the mixture. This feeding process also consumes oxygen, which converts the originally water-soluble calcium lactate into insoluble limestone. This limestone then solidifies on the surface it is lying on, which in this case is the cracked area, thereby sealing the crack. Oxygen is one of the main elements that cause corrosion in materials such as metals. When biocement is used in steel-reinforced concrete structures, the microorganisms consume the oxygen, thereby increasing corrosion resistance. This property also allows for water resistance, as water actually induces healing, reducing overall corrosion. Water-resistant concrete aggregates are used to prevent corrosion, and these can also be recycled. These can be formed by different methods, such as crushing or grinding the biocement. The permeability of biocement is also higher than that of normal cement, due to its higher porosity. Higher porosity can lead to larger crack propagation when exposed to strong enough forces. Biocement is now composed of roughly 20% self-healing agent, which decreases its mechanical strength. The mechanical strength of bioconcrete is about 25% lower than that of normal concrete, so its compressive strength is lower as well. Organisms such as Pseudomonas aeruginosa are effective in creating biocement, but they are unsafe for humans to be around and so must be avoided. Uses Biocement is currently used in applications such as sidewalks and pavements in buildings. There are also proposals for biological building construction. Biocement is still not in widespread use because there is currently no feasible method of mass-producing it at such a scale. Much more definitive testing also needs to be done before biocement can be used confidently in large-scale applications where mechanical strength cannot be compromised. Biocement also costs about twice as much as normal concrete. Smaller-scale uses, however, include spray bars, hoses, drop lines, and bee nesting. Biocement is still in its developmental stages, but its potential for future uses is promising. Mycelium composites Mycelium composites are materials that are based on mycelium – the mass of branching, thread-like hyphae produced by fungi. There are several ways to synthesize and fabricate mycelium composites, leading to different properties and use cases for the finished product. Mycelium composites are economical and sustainable. Synthesis and fabrication Mycelium-based composites are usually synthesised by using different kinds of fungi, especially mushrooms. A fungal culture is introduced to different types of organic matter to form a composite. The selection of fungal species is important for creating a product with specific properties. Some of the fungal species that are used to make composites are G. lucidum, Ganoderma sp., P. ostreatus, Pleurotus sp., T. versicolor, Trametes sp., etc. A dense network is formed as the fungal mycelium degrades and colonises the organic substrate. Plant waste is a common organic substrate that is used in mycelium-based composites.
Fungal mycelium is incubated with a plant waste product to produce sustainable alternatives mostly for petroleum-based materials. The mycelium and organic substrate need time to incubate properly and this time is crucial as it is the period that these particles interact together and bind to form a dense network and hence form a composite. During this incubation period, mycelium uses essential nutrients such as carbon, minerals, and water from the waste plant product. Some of the organic substrate components include cotton, wheat grains, rice husks, sorghum fibres, agricultural waste, sawdust, bread particles, banana peel, coffee residue, etc.  The composites are synthesised and fabricated using different techniques such as adding carbohydrates, altering fermentation conditions, using different fabrication technology, altering post-processing stages, and modifying genetics or biochemicals to form products with certain properties. Fabrication of most of the mycelium composites are by using plastic molds, so the mycelium can be grown directly into the desired shape.  Other fabrication methods include laminate skin mold, vacuum skin mold, glass mold, plywood mold, wooden mold, petri dish mold, tile mold, etc. During fabrication process, it is essential to have a sterilised environment, a controlled environment condition of light, temperature (25-35 °C) and humidity around 60-65% for the best results. One way to synthesise a mycelium based composite is by mixing different composition ratios of fibers, water and mycelium together and putting in a PVC molds in layers while compressing each layer and letting it incubate for couple of days. Mycelium based composites can be processed in foam, laminate and mycelium sheet by using processing techniques such as later cutting, cold and heat compression, etc. Mycelium composites tend to absorb water when they are newly fabricated, therefore this property can be changed by drying the product. Properties One of the advantages about using mycelium based composites is that properties can be altered depending on fabrication process and the use of different fungus. Properties depend on type of fungus used and where they are grown. Additionally, fungi has an ability to degrade the cellulose component of the plant to make composites in a preferable manner. Some important mechanical properties such as compressive strength, morphology, tensile strength, hydrophobicity, and flexural strength can be modified as well for different use of the composite. To increase the tensile strength, the composite can go through heat pressing. The properties of a mycelium composite are affected by its substrate; for example, a mycelium composite made out of 75 wt% rice hulls has a density of 193 kg/m3, while 75 wt% wheat grains has 359 kg/m3. Another method to increase the density of the composite would be by deleting a hydrophobin gene. These composites also have the ability of self-fusion which increases their strength. Mycelium based composites are usually compact, porous, lightweight and a good insulator. The main property of these composites is that they are entirely natural, therefore sustainable. Another advantage of mycelium based composites is that this substance acts as an insulator, is fireproof, nontoxic, water-resistant, rapidly growing, and able to bond with neighboring mycelium products. Mycelium-based foams (MBFs) and sandwich components are two common types of composite. 
MBFs are the most efficient type because of their low density property, high quality, and sustainability. The density of MBFs can be decreased by using substrates that are smaller than 2 mm in diameter. These composites have higher thermal conductivity as well. Uses One of the most common use of mycelium based composites is for the alternatives for petroleum and polystyrene based materials. These synthetic foams are usually used for sustainable design and architecture products. The use of mycelium based composites are based on their properties. There are several bio-sustainable companies Further applications Beyond the use of living building materials, the application of microbially induced calcium carbonate precipitation (MICP) has the possibility of helping remove pollutants from wastewater, soil, and the air. Currently, heavy metals and radionuclei provide a challenge to remove from water sources and soil. Radionuclei in ground water do not respond to traditional methods of pumping and treating the water, and for heavy metals contaminating soil, the methods of removal include phytoremediation and chemical leaching do work; however, these treatments are expensive, lack longevity in effectiveness, and can destroy the productivity of the soil for future uses. By using ureolytic bacteria that is capable of CaCO3 precipitation, the pollutants can move into the calc-be structure, thereby removing them from the soil or water. This works through substitution of calcium ions for pollutants that then form solid particles and can be removed. It's reported that 95% of these solid particles can be removed by using ureolytic bacteria. However, when calcium scaling in pipelines occurs, MICP cannot be used as it is calcium-based. Instead of calcium, it is possible to add a low concentration of urea to remove up to 90% of the calcium ions. Another further application involves a self-constructed foundation that forms in response to pressure through the use of engineering bacteria. The engineered bacteria could be used to detect increased pressure in soil, and then cement the soil particles in place, effectively solidifying the soil. Within soil, pore pressure consists of two factors: the amount of applied stress, and how quickly water in the soil is able to drain. Through analyzing the biological behavior of the bacteria in response to a load and the mechanical behavior of the soil, a computational model can be created. With this model, certain genes within the bacteria can be identified and modified to respond a certain way to a certain pressure. However, the bacteria analyzed in this study was grown in a highly controlled lab, so real soil environments may not be as ideal. This is a limitation of the model and study it originated from, but it still remains a possible application of living building materials. References Construction Building materials
Living building material
Physics,Engineering
3,424
639
https://en.wikipedia.org/wiki/Alkane
In organic chemistry, an alkane, or paraffin (a historical trivial name that also has other meanings), is an acyclic saturated hydrocarbon. In other words, an alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon–carbon bonds are single. Alkanes have the general chemical formula CnH2n+2. The alkanes range in complexity from the simplest case of methane (CH4), where n = 1 (sometimes called the parent molecule), to arbitrarily large and complex molecules, like pentacontane (C50H102) or 6-ethyl-2-methyl-5-(1-methylethyl)octane, an isomer of tetradecane (C14H30). The International Union of Pure and Applied Chemistry (IUPAC) defines alkanes as "acyclic branched or unbranched hydrocarbons having the general formula CnH2n+2, and therefore consisting entirely of hydrogen atoms and saturated carbon atoms". However, some sources use the term to denote any saturated hydrocarbon, including those that are either monocyclic (i.e. the cycloalkanes) or polycyclic, despite them having a distinct general formula (e.g. cycloalkanes are CnH2n). In an alkane, each carbon atom is sp3-hybridized with 4 sigma bonds (either C–C or C–H), and each hydrogen atom is joined to one of the carbon atoms (in a C–H bond). The longest series of linked carbon atoms in a molecule is known as its carbon skeleton or carbon backbone. The number of carbon atoms may be considered as the size of the alkane. One group of the higher alkanes are waxes, solids at standard ambient temperature and pressure (SATP), for which the number of carbon atoms in the carbon backbone is greater than about 17. With their repeated –CH2– units, the alkanes constitute a homologous series of organic compounds in which the members differ in molecular mass by multiples of 14.03 u (the total mass of each such methylene-bridge unit, which comprises a single carbon atom of mass 12.01 u and two hydrogen atoms of mass ~1.01 u each). Methane is produced by methanogenic bacteria, and some long-chain alkanes function as pheromones in certain animal species or as protective waxes in plants and fungi. Nevertheless, most alkanes do not have much biological activity. They can be viewed as molecular trees upon which can be hung the more active/reactive functional groups of biological molecules. The alkanes have two main commercial sources: petroleum (crude oil) and natural gas. An alkyl group is an alkane-based molecular fragment that bears one open valence for bonding. Alkyl groups are generally abbreviated with the symbol for any organyl group, R, although Alk is sometimes used to specifically symbolize an alkyl group (as opposed to an alkenyl group or aryl group). Structure and classification Ordinarily the C–C single bond distance is 1.54 × 10−10 m. Saturated hydrocarbons can be linear, branched, or cyclic. The third group is sometimes called cycloalkanes. Very complicated structures are possible by combining linear, branched, and cyclic alkanes. Isomerism Alkanes with more than three carbon atoms can be arranged in various ways, forming structural isomers. The simplest isomer of an alkane is the one in which the carbon atoms are arranged in a single chain with no branches. This isomer is sometimes called the n-isomer (n for "normal", although it is not necessarily the most common). However, the chain of carbon atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of carbon atoms.
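The general formula and the 14.03 u step between successive members, described above, can be illustrated with a short sketch (a minimal Python example added for illustration, not part of the original article; the function names are invented and the atomic masses are the approximate values quoted above):

```python
# Illustrative sketch, not from the article: molecular formula CnH2n+2 and an
# approximate molecular mass for an acyclic alkane with n carbon atoms, using the
# atomic masses quoted above (C ~ 12.01 u, H ~ 1.01 u).

def alkane_formula(n: int) -> str:
    """Return the molecular formula CnH2n+2 for n >= 1."""
    return f"C{n}H{2 * n + 2}"

def alkane_mass(n: int) -> float:
    """Approximate molecular mass in u; successive members differ by ~14.03 u."""
    return 12.01 * n + 1.01 * (2 * n + 2)

if __name__ == "__main__":
    for n in (1, 2, 8, 50):  # methane, ethane, octane, pentacontane
        print(alkane_formula(n), round(alkane_mass(n), 2))
    # one CH2 unit separates successive members of the homologous series
    print(round(alkane_mass(9) - alkane_mass(8), 2))  # 14.03
```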
For example, for acyclic alkanes: C1: methane only C2: ethane only C3: propane only C4: 2 isomers: butane and isobutane C5: 3 isomers: pentane, isopentane, and neopentane C6: 5 isomers: hexane, 2-methylpentane, 3-methylpentane, 2,2-dimethylbutane, and 2,3-dimethylbutane C7: 9 isomers: heptane, 2-methylhexane, 3-methylhexane, 2,2-dimethylpentane, 2,3-dimethylpentane, 2,4-dimethylpentane, 3,3-dimethylpentane, 3-ethylpentane, 2,2,3-trimethylbutane C8: 18 isomers: octane, 2-methylheptane, 3-methylheptane, 4-methylheptane, 2,2-dimethylhexane, 2,3-dimethylhexane, 2,4-dimethylhexane, 2,5-dimethylhexane, 3,3-dimethylhexane, 3,4-dimethylhexane, 3-ethylhexane, 2,2,3-trimethylpentane, 2,2,4-trimethylpentane, 2,3,3-trimethylpentane, 2,3,4-trimethylpentane, 3-ethyl-2-methylpentane, 3-ethyl-3-methylpentane, 2,2,3,3-tetramethylbutane C9: 35 isomers C10: 75 isomers C12: 355 isomers C32: 27,711,253,769 isomers C60: 22,158,734,535,770,411,074,184 isomers, many of which are not stable Branched alkanes can be chiral. For example, 3-methylhexane and its higher homologues are chiral due to their stereogenic center at carbon atom number 3. The above list only includes differences of connectivity, not stereochemistry. In addition to the alkane isomers, the chain of carbon atoms may form one or more rings. Such compounds are called cycloalkanes, and are also excluded from the above list because changing the number of rings changes the molecular formula. For example, cyclobutane and methylcyclopropane are isomers of each other (C4H8), but are not isomers of butane (C4H10). Branched alkanes are more thermodynamically stable than their linear (or less branched) isomers. For example, the highly branched 2,2,3,3-tetramethylbutane is about 1.9 kcal/mol more stable than its linear isomer, n-octane. Nomenclature The IUPAC nomenclature (systematic way of naming compounds) for alkanes is based on identifying hydrocarbon chains. Unbranched, saturated hydrocarbon chains are named systematically with a Greek numerical prefix denoting the number of carbons and the suffix "-ane". In 1866, August Wilhelm von Hofmann suggested systematizing nomenclature by using the whole sequence of vowels a, e, i, o and u to create suffixes -ane, -ene, -ine (or -yne), -one, -une, for the hydrocarbons CnH2n+2, CnH2n, CnH2n−2, CnH2n−4, CnH2n−6. In modern nomenclature, the first three specifically name hydrocarbons with single, double and triple bonds; while "-one" now represents a ketone. Linear alkanes Straight-chain alkanes are sometimes indicated by the prefix "n-" or "n-"(for "normal") where a non-linear isomer exists. Although this is not strictly necessary and is not part of the IUPAC naming system, the usage is still common in cases where one wishes to emphasize or distinguish between the straight-chain and branched-chain isomers, e.g., "n-butane" rather than simply "butane" to differentiate it from isobutane. Alternative names for this group used in the petroleum industry are linear paraffins or n-paraffins. The first eight members of the series (in terms of number of carbon atoms) are named as follows: methane CH4 – one carbon and 4 hydrogen ethane C2H6 – two carbon and 6 hydrogen propane C3H8 – three carbon and 8 hydrogen butane C4H10 – four carbon and 10 hydrogen pentane C5H12 – five carbon and 12 hydrogen hexane C6H14 – six carbon and 14 hydrogen heptane C7H16 – seven carbons and 16 hydrogen octane C8H18 – eight carbons and 18 hydrogen The first four names were derived from methanol, ether, propionic acid and butyric acid. 
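The mapping just listed, from carbon count to the name and formula of the unbranched alkane, can be sketched as a small lookup (illustrative Python only, written for this summary; the table is limited to the first eight members named above, and the helper name is invented):

```python
# Illustrative lookup only; it covers the eight unbranched alkanes named above.
# Larger members follow the numerical-prefix rule described in the text.

STRAIGHT_CHAIN_NAMES = {
    1: "methane", 2: "ethane", 3: "propane", 4: "butane",
    5: "pentane", 6: "hexane", 7: "heptane", 8: "octane",
}

def describe(n: int) -> str:
    """Return 'name: formula' for the unbranched alkane with n carbon atoms."""
    name = STRAIGHT_CHAIN_NAMES.get(n, "(name not tabulated here)")
    return f"{name}: C{n}H{2 * n + 2}"

if __name__ == "__main__":
    print(describe(4))  # butane: C4H10
    print(describe(8))  # octane: C8H18
```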
Alkanes with five or more carbon atoms are named by adding the suffix -ane to the appropriate numerical multiplier prefix with elision of any terminal vowel (-a or -o) from the basic numerical term. Hence, pentane, C5H12; hexane, C6H14; heptane, C7H16; octane, C8H18; etc. The numeral prefix is generally Greek; however, alkanes with a carbon atom count ending in nine, for example nonane, use the Latin prefix non-. Branched alkanes Simple branched alkanes often have a common name using a prefix to distinguish them from linear alkanes, for example n-pentane, isopentane, and neopentane. IUPAC naming conventions can be used to produce a systematic name. The key steps in the naming of more complicated branched alkanes are as follows: Identify the longest continuous chain of carbon atoms Name this longest root chain using standard naming rules Name each side chain by changing the suffix of the name of the alkane from "-ane" to "-yl" Number the longest continuous chain in order to give the lowest possible numbers for the side-chains Number and name the side chains before the name of the root chain If there are multiple side chains of the same type, use prefixes such as "di-" and "tri-" to indicate it as such, and number each one. Add side chain names in alphabetical (disregarding "di-" etc. prefixes) order in front of the name of the root chain Saturated cyclic hydrocarbons Though technically distinct from the alkanes, this class of hydrocarbons is referred to by some as the "cyclic alkanes." As their description implies, they contain one or more rings. Simple cycloalkanes have a prefix "cyclo-" to distinguish them from alkanes. Cycloalkanes are named as per their acyclic counterparts with respect to the number of carbon atoms in their backbones, e.g., cyclopentane (C5H10) is a cycloalkane with 5 carbon atoms just like pentane (C5H12), but they are joined up in a five-membered ring. In a similar manner, propane and cyclopropane, butane and cyclobutane, etc. Substituted cycloalkanes are named similarly to substituted alkanes – the cycloalkane ring is stated, and the substituents are according to their position on the ring, with the numbering decided by the Cahn–Ingold–Prelog priority rules. Trivial/common names The trivial (non-systematic) name for alkanes is 'paraffins'. Together, alkanes are known as the 'paraffin series'. Trivial names for compounds are usually historical artifacts. They were coined before the development of systematic names, and have been retained due to familiar usage in industry. Cycloalkanes are also called naphthenes. Branched-chain alkanes are called isoparaffins. "Paraffin" is a general term and often does not distinguish between pure compounds and mixtures of isomers, i.e., compounds of the same chemical formula, e.g., pentane and isopentane. In IUPAC The following trivial names are retained in the IUPAC system: isobutane for 2-methylpropane isopentane for 2-methylbutane neopentane for 2,2-dimethylpropane. Non-IUPAC Some non-IUPAC trivial names are occasionally used: cetane, for hexadecane cerane, for hexacosane Physical properties All alkanes are colorless. Alkanes with the lowest molecular weights are gases, those of intermediate molecular weight are liquids, and the heaviest are waxy solids. Table of alkanes Boiling point Alkanes experience intermolecular van der Waals forces. The cumulative effects of these intermolecular forces give rise to greater boiling points of alkanes. 
Two factors influence the strength of the van der Waals forces: the number of electrons surrounding the molecule, which increases with the alkane's molecular weight, and the surface area of the molecule. Under standard conditions, from CH4 to C4H10 alkanes are gaseous; from C5H12 to C17H36 they are liquids; and after C18H38 they are solids. As the boiling point of alkanes is primarily determined by weight, it should not be a surprise that the boiling point has an almost linear relationship with the size (molecular weight) of the molecule. As a rule of thumb, the boiling point rises 20–30 °C for each carbon added to the chain; this rule applies to other homologous series. A straight-chain alkane will have a boiling point higher than a branched-chain alkane due to the greater surface area in contact, and thus greater van der Waals forces, between adjacent molecules. For example, compare isobutane (2-methylpropane) and n-butane (butane), which boil at −12 and 0 °C, and 2,2-dimethylbutane and 2,3-dimethylbutane which boil at 50 and 58 °C, respectively. On the other hand, cycloalkanes tend to have higher boiling points than their linear counterparts due to the locked conformations of the molecules, which give a plane of intermolecular contact. Melting points The melting points of the alkanes follow a similar trend to boiling points for the same reason as outlined above. That is, (all other things being equal) the larger the molecule the higher the melting point. However, alkanes' melting points follow a more complex pattern, due to variations in the properties of their solid crystals. One difference in crystal structure is that even-numbered alkanes (from hexane onwards) tend to form denser-packed crystals compared to their odd-numbered neighbors. This causes them to have a greater enthalpy of fusion (amount of energy required to melt them), raising their melting point. A second difference in crystal structure is that even-numbered alkanes (from octane onwards) tend to form more rotationally-ordered crystals compared to their odd-numbered neighbors. This causes them to have a greater entropy of fusion (increase in disorder from the solid to the liquid state), lowering their melting point. While these effects operate in opposing directions, the first effect tends to be slightly stronger, leading even-numbered alkanes to have slightly higher melting points than the average of their odd-numbered neighbors. This trend does not apply to methane, which has an unusually high melting point, higher than both ethane and propane. This is because it has a very low entropy of fusion, attributable to its high molecular symmetry and the rotational disorder in solid methane near its melting point (Methane I). The melting points of branched-chain alkanes can be either higher or lower than those of the corresponding straight-chain alkanes, again depending on these two factors. More symmetric alkanes tend towards higher melting points, due to enthalpic effects when they form ordered crystals, and entropic effects when they form disordered crystals (e.g. neopentane). Conductivity and solubility Alkanes do not conduct electricity in any way, nor are they substantially polarized by an electric field. For this reason, they do not form hydrogen bonds and are insoluble in polar solvents such as water. Since the hydrogen bonds between individual water molecules are aligned away from an alkane molecule, the coexistence of an alkane and water leads to an increase in molecular order (a reduction in entropy).
As there is no significant bonding between water molecules and alkane molecules, the second law of thermodynamics suggests that this reduction in entropy should be minimized by minimizing the contact between alkane and water: Alkanes are said to be hydrophobic as they are insoluble in water. Their solubility in nonpolar solvents is relatively high, a property that is called lipophilicity. Alkanes are, for example, miscible in all proportions among themselves. The density of the alkanes usually increases with the number of carbon atoms but remains less than that of water. Hence, alkanes form the upper layer in an alkane–water mixture. Molecular geometry The molecular structure of the alkanes directly affects their physical and chemical characteristics. It is derived from the electron configuration of carbon, which has four valence electrons. The carbon atoms in alkanes are described as sp3 hybrids; that is to say that, to a good approximation, the valence electrons are in orbitals directed towards the corners of a tetrahedron which are derived from the combination of the 2s orbital and the three 2p orbitals. Geometrically, the angle between the bonds is cos−1(−1/3) ≈ 109.47°. This is exact for the case of methane, while larger alkanes containing a combination of C–H and C–C bonds generally have bonds that are within several degrees of this idealized value. Bond lengths and bond angles An alkane has only C–H and C–C single bonds. The former result from the overlap of an sp3 orbital of carbon with the 1s orbital of a hydrogen; the latter by the overlap of two sp3 orbitals on adjacent carbon atoms. The bond lengths amount to 1.09 × 10−10 m for a C–H bond and 1.54 × 10−10 m for a C–C bond. The spatial arrangement of the bonds is similar to that of the four sp3 orbitals: they are tetrahedrally arranged, with an angle of 109.47° between them. Structural formulae that represent the bonds as being at right angles to one another, while both common and useful, do not accurately depict the geometry. Conformation The spatial arrangement of the C–C and C–H bonds, described by the torsion angles of the molecule, is known as its conformation. In ethane, the simplest case for studying the conformation of alkanes, there is nearly free rotation about a carbon–carbon single bond. Two limiting conformations are important: eclipsed conformation and staggered conformation. The staggered conformation is 12.6 kJ/mol (3.0 kcal/mol) lower in energy (more stable) than the eclipsed conformation (the least stable). In highly branched alkanes, the bond angle may differ from the optimal value (109.5°) to accommodate bulky groups. Such distortions introduce a tension in the molecule, known as steric hindrance or strain. Strain substantially increases reactivity. Spectroscopic properties Spectroscopic signatures for alkanes are obtainable by the major characterization techniques. Infrared spectroscopy The C–H stretching mode gives strong absorptions between 2850 and 2960 cm−1, while the C–C stretching mode gives weaker bands between 800 and 1300 cm−1. The carbon–hydrogen bending modes depend on the nature of the group: methyl groups show bands at 1450 cm−1 and 1375 cm−1, while methylene groups show bands at 1465 cm−1 and 1450 cm−1. Carbon chains with more than four carbon atoms show a weak absorption at around 725 cm−1. NMR spectroscopy The proton resonances of alkanes are usually found at δH = 0.5–1.5.
The carbon-13 resonances depend on the number of hydrogen atoms attached to the carbon: δC = 8–30 (primary, methyl, –CH3), 15–55 (secondary, methylene, –CH2–), 20–60 (tertiary, methine, C–H) and quaternary. The carbon-13 resonance of quaternary carbon atoms is characteristically weak, due to the lack of nuclear Overhauser effect and the long relaxation time, and can be missed in weak samples, or samples that have not been run for a sufficiently long time. Mass spectrometry Since alkanes have high ionization energies, their electron impact mass spectra show weak currents for their molecular ions. The fragmentation pattern can be difficult to interpret, but in the case of branched chain alkanes, the carbon chain is preferentially cleaved at tertiary or quaternary carbons due to the relative stability of the resulting free radicals. The mass spectrum of a straight-chain alkane is illustrated by that of dodecane: the fragment resulting from the loss of a single methyl group (M − 15) is absent, fragments are more intense than the molecular ion and are spaced by intervals of 14 mass units, corresponding to loss of CH2 groups. Chemical properties Alkanes are only weakly reactive with most chemical compounds. They react only with the strongest of electrophilic reagents, by virtue of their strong C–H bonds (~100 kcal/mol) and C–C bonds (~90 kcal/mol). They are also relatively unreactive toward free radicals. This inertness is the source of the term paraffins (with the meaning here of "lacking affinity"). In crude oil the alkane molecules have remained chemically unchanged for millions of years. Acid-base behavior The acid dissociation constant (pKa) values of all alkanes are estimated to range from 50 to 70, depending on the extrapolation method, hence they are extremely weak acids that are practically inert to bases (see: carbon acids). They are also extremely weak bases, undergoing no observable protonation in pure sulfuric acid (H0 ~ −12), although superacids that are at least millions of times stronger have been known to protonate them to give hypercoordinate alkanium ions (see: methanium ion). Thus, a mixture of antimony pentafluoride (SbF5) and fluorosulfonic acid (HSO3F), called magic acid, can protonate alkanes. Reactions with oxygen (combustion reaction) All alkanes react with oxygen in a combustion reaction, although they become increasingly difficult to ignite as the number of carbon atoms increases. The general equation for complete combustion is: CnH2n+2 + (3n + 1)/2 O2 → (n + 1) H2O + n CO2, or, written without the fractional coefficient, 2 CnH2n+2 + (3n + 1) O2 → 2(n + 1) H2O + 2n CO2. In the absence of sufficient oxygen, carbon monoxide or even soot can be formed, as shown below: CnH2n+2 + (n + 1/2) O2 → (n + 1) H2O + n CO CnH2n+2 + (n + 1)/2 O2 → (n + 1) H2O + n C For example, methane: 2 CH4 + 3 O2 → 4 H2O + 2 CO CH4 + O2 → 2 H2O + C See the alkane heat of formation table for detailed data. The standard enthalpy change of combustion, ΔcH⊖, for alkanes increases by about 650 kJ/mol per CH2 group. Branched-chain alkanes have lower values of ΔcH⊖ than straight-chain alkanes of the same number of carbon atoms, and so can be seen to be somewhat more stable. Biodegradation Some organisms are capable of metabolizing alkanes. The methane monooxygenases convert methane to methanol. For higher alkanes, cytochrome P450 enzymes convert alkanes to alcohols, which are then susceptible to degradation. Free radical reactions Free radicals, molecules with unpaired electrons, play a large role in most reactions of alkanes.
Free radical halogenation reactions occur with halogens, leading to the production of haloalkanes. The hydrogen atoms of the alkane are progressively replaced by halogen atoms. The reaction of alkanes and fluorine is highly exothermic and can lead to an explosion. These reactions are an important industrial route to halogenated hydrocarbons. There are three steps: Initiation: the halogen radicals form by homolysis; usually, energy in the form of heat or light is required. Chain reaction or propagation: the halogen radical abstracts a hydrogen from the alkane to give an alkyl radical, which reacts further. Chain termination: the radicals recombine. Experiments have shown that all halogenation produces a mixture of all possible isomers, indicating that all hydrogen atoms are susceptible to reaction. The mixture produced, however, is not statistical: secondary and tertiary hydrogen atoms are preferentially replaced due to the greater stability of secondary and tertiary free radicals. An example can be seen in the monobromination of propane. In the Reed reaction, sulfur dioxide and chlorine convert hydrocarbons to sulfonyl chlorides under the influence of light. Under some conditions, alkanes will undergo nitration. C-H activation Certain transition metal complexes promote non-radical reactions with alkanes, resulting in so-called C–H bond activation reactions. Cracking Cracking breaks larger molecules into smaller ones. This reaction requires heat or catalysts. The thermal cracking process follows a homolytic mechanism with formation of free radicals. The catalytic cracking process involves the presence of acid catalysts (usually solid acids such as silica-alumina and zeolites), which promote a heterolytic (asymmetric) breakage of bonds yielding pairs of ions of opposite charges, usually a carbocation. Carbon-localized free radicals and cations are both highly unstable and undergo processes of chain rearrangement, C–C scission in position beta (i.e., cracking) and intra- and intermolecular hydrogen transfer or hydride transfer. In both types of processes, the corresponding reactive intermediates (radicals, ions) are permanently regenerated, and thus they proceed by a self-propagating chain mechanism. The chain of reactions is eventually terminated by radical or ion recombination. Isomerization and reformation Dragan and his colleague were the first to report on isomerization in alkanes. Isomerization and reformation are processes in which straight-chain alkanes are heated in the presence of a platinum catalyst. In isomerization, the alkanes become branched-chain isomers. In other words, the alkane does not lose any carbons or hydrogens, keeping the same molecular weight. In reformation, the alkanes become cycloalkanes or aromatic hydrocarbons, giving off hydrogen as a by-product. Both of these processes raise the octane number of the substance. Butane is the most common alkane that is put under the process of isomerization, as it makes many branched alkanes with high octane numbers. Other reactions In steam reforming, alkanes react with steam in the presence of a nickel catalyst to give hydrogen and carbon monoxide. Occurrence Occurrence of alkanes in the Universe Alkanes form a small portion of the atmospheres of the outer gas planets such as Jupiter (0.1% methane, 2 ppm ethane), Saturn (0.2% methane, 5 ppm ethane), Uranus (1.99% methane, 2.5 ppm ethane) and Neptune (1.5% methane, 1.5 ppm ethane).
Titan (1.6% methane), a satellite of Saturn, was examined by the Huygens probe, which indicated that Titan's atmosphere periodically rains liquid methane onto the moon's surface. Also on Titan, the Cassini mission has imaged seasonal methane/ethane lakes near its polar regions. Methane and ethane have also been detected in the tail of the comet Hyakutake. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which is thought to imply that its ices formed in interstellar space, away from the Sun, which would have evaporated these volatile molecules. Alkanes have also been detected in meteorites such as carbonaceous chondrites. Occurrence of alkanes on Earth Traces of methane gas (about 0.0002% or 1745 ppb) occur in the Earth's atmosphere, produced primarily by methanogenic microorganisms, such as Archaea in the gut of ruminants. The most important commercial sources for alkanes are natural gas and oil. Natural gas contains primarily methane and ethane, with some propane and butane; oil is a mixture of liquid alkanes and other hydrocarbons. These hydrocarbons were formed when marine animals and plants (zooplankton and phytoplankton) died and sank to the bottom of ancient seas and were covered with sediments in an anoxic environment and converted over many millions of years at high temperatures and high pressure to their current form. Natural gas resulted, for example, from the following reaction: C6H12O6 → 3 CH4 + 3 CO2. These hydrocarbon deposits, collected in porous rocks trapped beneath impermeable cap rocks, comprise commercial oil fields. They have formed over millions of years and once exhausted cannot be readily replaced. The depletion of these hydrocarbon reserves is the basis for what is known as the energy crisis. Alkanes have a low solubility in water, so the content in the oceans is negligible; however, at high pressures and low temperatures (such as at the bottom of the oceans), methane can co-crystallize with water to form a solid methane clathrate (methane hydrate). Although this cannot be commercially exploited at the present time, the amount of combustible energy of the known methane clathrate fields exceeds the energy content of all the natural gas and oil deposits put together. Methane extracted from methane clathrate is, therefore, a candidate for future fuels. Biological occurrence Aside from petroleum and natural gas, alkanes occur significantly in nature only as methane, which is produced by some archaea by the process of methanogenesis. These organisms are found in the gut of termites and cows. The methane is produced from carbon dioxide or other organic compounds. Energy is released by the oxidation of hydrogen: CO2 + 4 H2 → CH4 + 2 H2O. It is probable that our current deposits of natural gas were formed in a similar way. Certain types of bacteria can metabolize alkanes: they prefer even-numbered carbon chains as they are easier to degrade than odd-numbered chains. Alkanes play a negligible role in higher organisms, with rare exception. Some yeasts, e.g., Candida tropicalis, Pichia sp., Rhodotorula sp., can use alkanes as a source of carbon or energy. The fungus Amorphotheca resinae prefers the longer-chain alkanes in aviation fuel, and can cause serious problems for aircraft in tropical regions. In plants, the solid long-chain alkanes are found in the plant cuticle and epicuticular wax of many species, but are only rarely major constituents.
They protect the plant against water loss, prevent the leaching of important minerals by the rain, and protect against bacteria, fungi, and harmful insects. The carbon chains in plant alkanes are usually odd-numbered, between 27 and 33 carbon atoms in length, and are made by the plants by decarboxylation of even-numbered fatty acids. The exact composition of the layer of wax is not only species-dependent but also changes with the season and such environmental factors as lighting conditions, temperature or humidity. The Jeffrey pine is noted for producing exceptionally high levels of n-heptane in its resin, for which reason its distillate was designated as the zero point for one octane rating. Floral scents have also long been known to contain volatile alkane components, and n-nonane is a significant component in the scent of some roses. Emission of gaseous and volatile alkanes such as ethane, pentane, and hexane by plants has also been documented at low levels, though they are not generally considered to be a major component of biogenic air pollution. Edible vegetable oils also typically contain small fractions of biogenic alkanes with a wide spectrum of carbon numbers, mainly 8 to 35, usually peaking in the low to upper 20s, with concentrations up to dozens of milligrams per kilogram (parts per million by weight) and sometimes over a hundred for the total alkane fraction. Alkanes are found in animal products, although they are less important than unsaturated hydrocarbons. One example is the shark liver oil, which is approximately 14% pristane (2,6,10,14-tetramethylpentadecane, C19H40). They are important as pheromones, chemical messenger materials, on which insects depend for communication. In some species, e.g. the support beetle Xylotrechus colonus, pentacosane (C25H52), 3-methylpentaicosane (C26H54) and 9-methylpentaicosane (C26H54) are transferred by body contact. With others like the tsetse fly Glossina morsitans morsitans, the pheromone contains the four alkanes 2-methylheptadecane (C18H38), 17,21-dimethylheptatriacontane (C39H80), 15,19-dimethylheptatriacontane (C39H80) and 15,19,23-trimethylheptatriacontane (C40H82), and acts by smell over longer distances. Waggle-dancing honey bees produce and release two alkanes, tricosane and pentacosane. Ecological relations One example, in which both plant and animal alkanes play a role, is the ecological relationship between the sand bee (Andrena nigroaenea) and the early spider orchid (Ophrys sphegodes); the latter is dependent for pollination on the former. Sand bees use pheromones in order to identify a mate; in the case of A. nigroaenea, the females emit a mixture of tricosane (C23H48), pentacosane (C25H52) and heptacosane (C27H56) in the ratio 3:3:1, and males are attracted by specifically this odor. The orchid takes advantage of this mating arrangement to get the male bee to collect and disseminate its pollen; parts of its flower not only resemble the appearance of sand bees but also produce large quantities of the three alkanes in the same ratio as female sand bees. As a result, numerous males are lured to the blooms and attempt to copulate with their imaginary partner: although this endeavor is not crowned with success for the bee, it allows the orchid to transfer its pollen, which will be dispersed after the departure of the frustrated male to other blooms. Production Petroleum refining The most important source of alkanes is natural gas and crude oil. Alkanes are separated in an oil refinery by fractional distillation. 
Unsaturated hydrocarbons are converted to alkanes by hydrogenation, for example RCH=CH2 + H2 → RCH2CH3 (R = alkyl). Another route to alkanes is hydrogenolysis, which entails cleavage of C-heteroatom bonds using hydrogen. In industry, the main substrates are organonitrogen and organosulfur impurities, i.e. the heteroatoms are N and S. The specific processes are called hydrodenitrification and hydrodesulfurization. Hydrogenolysis can be applied to the conversion of virtually any functional group into hydrocarbons. Substrates include haloalkanes, alcohols, aldehydes, ketones, carboxylic acids, etc. Both hydrogenolysis and hydrogenation are practiced in refineries. These conversions can be effected by using lithium aluminium hydride, Clemmensen reduction and other specialized routes. Coal Coal is a more traditional precursor to alkanes. A wide range of technologies have been intensively practiced for centuries. Simply heating coal gives alkanes, leaving behind coke. Relevant technologies include the Bergius process and coal liquefaction. Partial combustion of coal and related solid organic compounds generates carbon monoxide, which can be hydrogenated using the Fischer–Tropsch process. This technology allows the synthesis of liquid hydrocarbons, including alkanes. This method is used to produce substitutes for petroleum distillates. Laboratory preparation Rarely is there any interest in the synthesis of alkanes, since they are usually commercially available and less valued than virtually any precursor. The best-known method is hydrogenation of alkenes. Many C-X bonds can be converted to C-H bonds using lithium aluminium hydride, Clemmensen reduction, and other specialized routes. Hydrolysis of alkyl Grignard reagents and alkyllithium compounds gives alkanes. Applications Fuels The dominant use of alkanes is as fuels. Propane and butane, easily liquefied gases, are commonly known as liquefied petroleum gas (LPG). From pentane to octane the alkanes are highly volatile liquids. They are used as fuels in internal combustion engines, as they vaporize easily on entry into the combustion chamber without forming droplets, which would impair the uniformity of the combustion. Branched-chain alkanes are preferred as they are much less prone to premature ignition, which causes knocking, than their straight-chain homologues. This propensity to premature ignition is measured by the octane rating of the fuel, where 2,2,4-trimethylpentane (isooctane) has an arbitrary value of 100, and heptane has a value of zero. Apart from their use as fuels, the middle alkanes are also good solvents for nonpolar substances. Alkanes from nonane to, for instance, hexadecane (an alkane with sixteen carbon atoms) are liquids of higher viscosity, less and less suitable for use in gasoline. They form instead the major part of diesel and aviation fuel. Diesel fuels are characterized by their cetane number, cetane being an old name for hexadecane. However, the higher melting points of these alkanes can cause problems at low temperatures and in polar regions, where the fuel becomes too thick to flow correctly. Precursors to chemicals By the process of cracking, alkanes can be converted to alkenes. Simple alkenes are precursors to polymers, such as polyethylene and polypropylene. When the cracking is taken to extremes, alkanes can be converted to carbon black, which is a significant tire component. Chlorination of methane gives chloromethanes, which are used as solvents and building blocks for complex compounds. Similarly, treatment of methane with sulfur gives carbon disulfide.
Still other chemicals are prepared by reaction with sulfur trioxide and nitric oxide. Other Some light hydrocarbons are used as aerosol sprays. Alkanes from hexadecane upwards form the most important components of fuel oil and lubricating oil. In the latter function, they work at the same time as anti-corrosive agents, as their hydrophobic nature means that water cannot reach the metal surface. Many solid alkanes find use as paraffin wax, for example, in candles. This should not be confused, however, with true wax, which consists primarily of esters. Alkanes with a chain length of approximately 35 or more carbon atoms are found in bitumen, used, for example, in road surfacing. However, the higher alkanes have little value and are usually split into lower alkanes by cracking. Hazards Alkanes are highly flammable, but they have low toxicities. Methane "is toxicologically virtually inert." Alkanes can be asphyxiants and narcotics. See also Alkene Alkyne Cycloalkane Higher alkanes Aliphatic compound Notes References Further reading Virtual Textbook of Organic Chemistry Visualizations of the low-temperature crystal structures of alkanes (methane to nonane) Hydrocarbons
Alkane
Chemistry
8,738
1,839,180
https://en.wikipedia.org/wiki/Fort%20Douaumont
Fort Douaumont was the largest and highest fort on the ring of 19 large defensive works which had protected the city of Verdun, France, since the 1890s. By 1915, the French General Staff had concluded that even the best-protected forts of Verdun could not withstand bombardments from the German 420 mm (16.5 in) Gamma guns. These new super-heavy howitzers had easily taken several large Belgian forts out of action in August 1914. Fort Douaumont and other Verdun forts were judged ineffective and had been partly disarmed and left virtually undefended since 1915. On 25 February 1916, Fort Douaumont was entered and occupied without a fight by a small German raiding party comprising only 19 officers and 79 men, entering via an open window by the moat. The easy fall of Fort Douaumont, only three days after the beginning of the Battle of Verdun, shocked the French Army. It set the stage for the rest of a battle which lasted nine months, at enormous human cost. Douaumont was finally recaptured by three infantry divisions of the Second Army, during the First Offensive Battle of Verdun on 24 October 1916. This event brought closure to the battle in 1916. History Construction work started in 1885 near the village of Douaumont, on some of the highest ground in the area, and the fort was continually reinforced until 1913. It has a total surface area of and is approximately long, with two subterranean levels protected by a steel reinforced concrete roof thick resting on a sand cushion. These improvements had been completed by 1903. The entrance to the fort was at the rear. Two main tunnels ran east–west, one above the other, with barrack rooms and corridors to outlying parts of the fort branching off the main tunnels. The fort was equipped with numerous armed posts, a 155 mm rotating/retractable gun turret, a 75 mm rotating/retractable gun turret, four other 75 mm guns in flanking "Bourges Casemates" that swept the intervals, and several machine-gun turrets. Entry into the moat around the fort was interdicted by Hotchkiss anti-personnel revolving cannons located in wall casemates or "Coffres" present at each corner. With hindsight, Douaumont was much better prepared to withstand the heaviest bombardments than the Belgian forts that had been crushed by German Gamma howitzers in 1914. The German invasion of Belgium in 1914 had forced military planners to radically rethink the utility of fortification in war. The Belgian forts had been quickly destroyed by German artillery and easily overrun. However, the Belgian forts were built with unreinforced concrete, with many layers that easily broke apart. In August 1915, General Joseph Joffre approved the reduction of the garrison at Douaumont and at other Verdun forts. Douaumont was stripped of all its weaponry except for the two turreted guns that were too difficult to remove: a 155 mm and a 75 mm gun. The two "Casemates de Bourges" bunkers, one on each side of the fort, were disarmed of their four 75s. The garrison was mostly middle-aged reservists, under the command of the city's military governor and not the field army. Capture On 21 February 1916, the German 5th Army began an offensive which started the Battle of Verdun. Douaumont was the largest and highest fort on the two concentric rings of forts protecting the city and thus the keystone to the city defenses. The German offensive was already four days old and progressing rapidly from the north when, on 24 February, it came within reach of Fort Douaumont.
Fort Douaumont was still manned by a maintenance crew of only 56 troops and a few gunners. The highest-ranked soldier in the fort was an NCO named Chenot. On 25 February, elements of the German 24th Brandenburg Regiment (6 Infanterie-Division, III Armeekorps) approached Fort Douaumont from the north, as a reconnaissance or raiding party. Most of the French garrison had already gone to the lower levels of the fort to escape the incessant German shelling with large-calibre guns. A battery of super-heavy 420 mm M-Gerät howitzers was intermittently pounding the fort, damaging the 75 mm gun turret. The occupants had been without communication with the outside world for some time. The observation cupolas were unoccupied. Only a small gunnery team was manning the 155 mm gun turret, which was firing at distant targets. The dry moats which could have been swept by French machine-gun fire from the wall "casemates" or "coffres" had been left undefended. About 10 combat engineers from the Brandenburg regiment, led by Pioneer-Sergeant Kunze, managed to approach the fort unopposed. Visibility was poor due to bad weather, and French machine gunners in the village of Douaumont thought the Germans were French colonial troops returning from a patrol. Kunze and his men reached the moat and found that the wall casemates (coffres) defending the moat were unoccupied. Kunze managed to climb inside one of them to open a door. Kunze's men refused to go inside the fortification, fearing an ambush. Armed only with a rifle, the Pioneer-Sergeant entered alone. He wandered around the empty tunnels until he found the artillery team, captured them and locked them up. By now, another group from the Brandenburg regiment, led by reserve-officer Lieutenant Radtke, was also entering the fort through its unoccupied defences. Radtke then made contact with Kunze's troops and organised them before they spread out, capturing a few more French defenders and securing the fort. Later, more columns of German troops under Hauptmann Haupt and Oberleutnant von Brandis arrived. No shots were ever fired in the capture of Fort Douaumont. The only casualty was one of Kunze's men, who scraped a knee. Despite being the last officer to enter the fort, von Brandis was the one who dispatched the report on the capture of Douaumont to the German High Command. A few days later, the Prussian officer told Crown Prince Wilhelm about its heroic seizure. No mention was made of the efforts of Lieutenant Radtke or Sergeant Kunze. Instead, von Brandis became the "Hero of Douaumont" and was awarded the Pour le Mérite (Haupt received it later, too). Kunze, who broke in and locked up the garrison, and Radtke, who took command during the fort's capture, received no award. It was not until the 1930s, after historians from the German Great War committee had time to review the capture of Fort Douaumont, that credit was belatedly given. Kunze, now a member of the Ordnungspolizei, received a promotion and the order of Pour le Mérite, while Lieutenant Radtke got an autographed portrait of former Crown Prince Wilhelm. Douaumont, the keystone of the system of forts that was to protect Verdun against a German invasion, had been given up without a fight. In the words of one French divisional commander, its loss would cost the French army 100,000 lives. Douaumont's easy fall was a disaster for the French and a glaring example of the lack of judgment prevailing in the General Staff at the time, under General Joffre.
The French General Staff had decided in August 1915 to partially disarm all the Verdun forts, acting under the erroneous assumption that the forts could not resist the effects of modern heavy artillery. After its capture, Douaumont became an invulnerable shelter and operational base for German forces just behind their front line. The German soldiers at Verdun came to refer to the place as "Old Uncle Douaumont". Recapture The French Second Army made a first attempt to recapture the fort in late May 1916. They occupied the western end of the fort for 36 hours but were dislodged after suffering heavy losses, mostly from German artillery and trench mortars nearby. The Germans stubbornly held onto the fort, as it provided shelter for troops and served as a first aid station and supply dump. French artillery continued to shell the fort, turning the area into a pockmarked moonscape, traces of which are still visible. On 8 May 1916, an unattended cooking fire set off grenades and flamethrower fuel, which in turn detonated an ammunition cache. Apparently some of the soldiers had tried to heat coffee with flamethrower fuel; the fuel proved too flammable, and the fire spread to shells that had been carelessly stored close by. A firestorm ripped through the fort, killing hundreds of soldiers instantly, including the 12th Grenadiers regimental staff. Some of the 1,800 wounded and soot-blackened survivors attempting to escape from the inferno were mistaken for French colonial infantry and were fired upon by their comrades; 679 German soldiers perished in this fire. Their remains were gathered inside the fort at the time and placed into a casemate which was walled off. The site is underground, inside the fort, and has long been an official German war grave. A commemorative plaque in German and a cross stand at the foot of the grave's sealing wall. The memorial is open to visitors. A French offensive involving three infantry divisions began on 24 October 1916 to recapture the fort. The fort was retaken the same day by the elite Régiment d'infanterie-chars de marine (at that time designated the Régiment d'infanterie coloniale du Maroc, R.I.C.M., the Regiment of Colonial Infantry of Morocco). Douaumont had been pounded for days by two super-heavy long-range French railway guns named "Alsace" and "Lorraine", emplaced at Baleycourt, 8.1 miles (13 km) south-west of Verdun. Douaumont had become untenable under their fire and was in the process of being evacuated when it was recaptured. Millions of smaller shells had been fired at the fort since its capture by the Germans, to little avail, and tens of thousands of men had died in attempts to recapture it. Gallery See also Fort Vaux Zone rouge (First World War) French villages destroyed in the First World War Douaumont Douaumont Ossuary Battle of Verdun Notes References Denizot, Alain, Douaumont: Vérité et Légende, Librairie Académique Perrin, 1998 (in French). Holstein, Christina, Fort Douaumont (Revised Edition), Pen and Sword Military, 2010 (in English). External links Les forts Séré de Rivières le fort de Douaumont Douaumont ossuary GPS-Teamprojekt Verdun – Somme – 1916 Battle of Verdun Buildings and structures in Meuse (department) Séré de Rivières system World War I museums in France Friendly fire incidents of World War I
Fort Douaumont
Engineering
2,302
4,556,075
https://en.wikipedia.org/wiki/NeuroNames
NeuroNames is an integrated nomenclature for structures in the brain and spinal cord of the four species most studied by neuroscientists: human, macaque, rat and mouse. It offers a standard, controlled vocabulary of common names for structures, which is suitable for unambiguous neuroanatomical indexing of information in digital databases. Terms in the standard vocabulary have been selected for ease of pronunciation, mnemonic value, and frequency of use in recent neuroscientific publications. Structures and their relations to each other are defined in terms of the standard vocabulary. Currently NeuroNames contains standard names, synonyms and definitions of some 2,500 neuroanatomical entities. The nomenclature is maintained by the University of Washington and is the core component of a tool called "BrainInfo". BrainInfo helps one identify structures in the brain. One can either search by a structure name or locate the structure in a brain atlas and get information such as its location in the classical brain hierarchy, images of the structure, what cells it has, its connections and genes expressed there. Information can be accessed by any of some 16,000 synonyms in eight languages. NeuroNames is a source vocabulary of the Metathesaurus of the Unified Medical Language System. Further reading See also NeuroLex Neuroscience Information Framework Talairach coordinates External links Overview of NeuroNames BrainInfo NeuroNames Direct Link University of Washington Neuroanatomy Anatomical terminology Anatomy websites Biological databases
NeuroNames
Biology
311
35,814,269
https://en.wikipedia.org/wiki/Feral%20information%20systems
A feral information system is an information system, or part of one, developed by individuals and groups to help with day-to-day activities but not condoned by management. It is called feral because it circumvents existing information technology systems or works around key system architecture. Overview A feral information system can be written for a variety of reasons. The general reason given is that such systems are ways of working around existing management information systems in order to support day-to-day work. Feral information systems are sometimes referred to as shadow systems. Reasons for feral information systems Reasons for feral information systems include: poor training practices in IT firms, inadequate systems, complex political relationships and a host of related issues. Research has linked feral information systems to poor operational planning. References Further reading Information systems
Feral information systems
Technology
156