Dataset columns (name / type / observed range):
  id            int64    39 to 79M
  url           string   length 31 to 227
  text          string   length 6 to 334k
  source        string   length 1 to 150
  categories    list     length 1 to 6
  token_count   int64    3 to 71.8k
  subcategories list     length 0 to 30
317,218
https://en.wikipedia.org/wiki/Starve-fed
In emulsion polymerization, starve-fed refers to a method of monomer addition in which the monomer is introduced gradually into the reaction vessel, at a rate that allows the majority of the monomer to be consumed by the reaction before more is added. The purpose of this method is generally to control the distribution of different monomers within a copolymer. Many monomers have different reaction rates, so if all of the monomers are added to the system at the same time, they tend to react in blocks. This blockiness gives the final polymer significantly different properties from a polymer with a more statistically random distribution of monomers. The method is used in synthesizing core-shell latex particles by emulsion polymerization, in order to carefully control the final particle structure. References Chemical processes
Starve-fed
[ "Chemistry" ]
165
[ "Chemical process engineering", "Chemical process stubs", "Chemical processes", "nan" ]
317,227
https://en.wikipedia.org/wiki/Micelle
A micelle () or micella () ( or micellae, respectively) is an aggregate (or supramolecular assembly) of surfactant amphipathic lipid molecules dispersed in a liquid, forming a colloidal suspension (also known as associated colloidal system). A typical micelle in water forms an aggregate with the hydrophilic "head" regions in contact with surrounding solvent, sequestering the hydrophobic single-tail regions in the micelle centre. This phase is caused by the packing behavior of single-tail lipids in a bilayer. The difficulty in filling the volume of the interior of a bilayer, while accommodating the area per head group forced on the molecule by the hydration of the lipid head group, leads to the formation of the micelle. This type of micelle is known as a normal-phase micelle (or oil-in-water micelle). Inverse micelles have the head groups at the centre with the tails extending out (or water-in-oil micelle). Micelles are approximately spherical in shape. Other shapes, such as ellipsoids, cylinders, and bilayers, are also possible. The shape and size of a micelle are a function of the molecular geometry of its surfactant molecules and solution conditions such as surfactant concentration, temperature, pH, and ionic strength. The process of forming micelles is known as micellisation and forms part of the phase behaviour of many lipids according to their polymorphism. History The ability of a soapy solution to act as a detergent has been recognized for centuries. However, it was only at the beginning of the twentieth century that the constitution of such solutions was scientifically studied. Pioneering work in this area was carried out by James William McBain at the University of Bristol. As early as 1913, he postulated the existence of "colloidal ions" to explain the good electrolytic conductivity of sodium palmitate solutions. These highly mobile, spontaneously formed clusters came to be called micelles, a term borrowed from biology and popularized by G.S. Hartley in his classic book Paraffin Chain Salts: A Study in Micelle Formation. The term micelle was coined in nineteenth century scientific literature as the elle diminutive of the Latin word (particle), conveying a new word for "tiny particle". Solvation Individual surfactant molecules that are in the system but are not part of a micelle are called "monomers". Micelles represent a molecular assembly, in which the individual components are thermodynamically in equilibrium with monomers of the same species in the surrounding medium. In water, the hydrophilic "heads" of surfactant molecules are always in contact with the solvent, regardless of whether the surfactants exist as monomers or as part of a micelle. However, the lipophilic "tails" of surfactant molecules have less contact with water when they are part of a micelle—this being the basis for the energetic drive for micelle formation. In a micelle, the hydrophobic tails of several surfactant molecules assemble into an oil-like core, the most stable form of which having no contact with water. By contrast, surfactant monomers are surrounded by water molecules that create a "cage" or solvation shell connected by hydrogen bonds. This water cage is similar to a clathrate and has an ice-like crystal structure and can be characterized according to the hydrophobic effect. The extent of lipid solubility is determined by the unfavorable entropy contribution due to the ordering of the water structure according to the hydrophobic effect. 
Micelles composed of ionic surfactants have an electrostatic attraction to the ions that surround them in solution, the latter known as counterions. Although the closest counterions partially mask a charged micelle (by up to 92%), the effects of micelle charge extend into the structure of the surrounding solvent at appreciable distances from the micelle. Ionic micelles influence many properties of the mixture, including its electrical conductivity. Adding salts to a colloid containing micelles can decrease the strength of electrostatic interactions and lead to the formation of larger ionic micelles. This effect is more accurately described in terms of an effective charge in the hydration of the system.

Energy of formation
Micelles form only when the concentration of surfactant is greater than the critical micelle concentration (CMC), and the temperature of the system is greater than the critical micelle temperature, or Krafft temperature. The formation of micelles can be understood using thermodynamics: micelles can form spontaneously because of a balance between entropy and enthalpy. In water, the hydrophobic effect is the driving force for micelle formation, despite the fact that assembling surfactant molecules is unfavorable in terms of both the enthalpy and the entropy of the system. At very low concentrations of the surfactant, only monomers are present in solution. As the concentration of the surfactant is increased, a point is reached at which the unfavorable entropy contribution, from clustering the hydrophobic tails of the molecules, is overcome by a gain in entropy due to the release of the solvation shells around the surfactant tails. At this point, the lipid tails of a part of the surfactants must be segregated from the water, and they start to form micelles. In broad terms, above the CMC, the loss of entropy due to assembly of the surfactant molecules is less than the gain in entropy from setting free the water molecules that were "trapped" in the solvation shells of the surfactant monomers. Also important are enthalpic considerations, such as the electrostatic interactions that occur between the charged parts of surfactants.

Micelle packing parameter
The micelle packing parameter equation is utilized to help "predict molecular self-assembly in surfactant solutions":

  p = V_0 / (a_e * l_0)

where V_0 is the surfactant tail volume, l_0 is the tail length, and a_e is the equilibrium area per molecule at the aggregate surface.

Block copolymer micelles
The concept of micelles was introduced to describe the core-corona aggregates of small surfactant molecules; however, it has also been extended to describe aggregates of amphiphilic block copolymers in selective solvents. It is important to know the difference between these two systems. The major difference between these two types of aggregates is in the size of their building blocks. Surfactant molecules have a molecular weight that is generally a few hundred grams per mole, while block copolymers are generally one or two orders of magnitude larger. Moreover, thanks to the larger hydrophilic and hydrophobic parts, block copolymers can have a much more pronounced amphiphilic nature when compared to surfactant molecules. Because of these differences in the building blocks, some block copolymer micelles behave like surfactant ones, while others do not. It is therefore necessary to make a distinction between the two situations: the former are referred to as dynamic micelles, while the latter are called kinetically frozen micelles.
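As a rough illustration of how the packing parameter introduced above is used in practice, the short Python sketch below estimates p for a single-tailed surfactant and maps it onto the usual geometric regimes (sphere, cylinder, bilayer, inverse micelle). The Tanford-style estimates for tail volume and length and the example head-group area are assumptions chosen purely for illustration, not values taken from this article.

# Minimal sketch: estimate the micelle packing parameter p = V_0 / (a_e * l_0)
# for a single-tail surfactant, using Tanford-style approximations for the tail
# volume and length (an assumption, not from the article). Values are illustrative.

def tail_volume(n_carbons: int) -> float:
    """Approximate hydrocarbon tail volume in cubic angstroms (Tanford-style estimate)."""
    return 27.4 + 26.9 * n_carbons

def tail_length(n_carbons: int) -> float:
    """Approximate fully extended tail length in angstroms (Tanford-style estimate)."""
    return 1.5 + 1.265 * n_carbons

def packing_parameter(n_carbons: int, head_area: float) -> float:
    """p = V_0 / (a_e * l_0); head_area (a_e) in square angstroms."""
    return tail_volume(n_carbons) / (head_area * tail_length(n_carbons))

def predicted_aggregate(p: float) -> str:
    """Map the packing parameter onto the commonly quoted geometric regimes."""
    if p < 1 / 3:
        return "spherical micelle"
    if p < 1 / 2:
        return "cylindrical micelle"
    if p <= 1:
        return "bilayer / vesicle"
    return "inverse micelle"

if __name__ == "__main__":
    p = packing_parameter(n_carbons=12, head_area=62.0)  # illustrative values only
    print(f"p = {p:.2f} -> {predicted_aggregate(p)}")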
Dynamic micelles Certain amphiphilic block copolymer micelles display a similar behavior as surfactant micelles. These are generally called dynamic micelles and are characterized by the same relaxation processes assigned to surfactant exchange and micelle scission/recombination. Although the relaxation processes are the same between the two types of micelles, the kinetics of unimer exchange are very different. While in surfactant systems the unimers leave and join the micelles through a diffusion-controlled process, for copolymers the entry rate constant is slower than a diffusion controlled process. The rate of this process was found to be a decreasing power-law of the degree of polymerization of the hydrophobic block to the power 2/3. This difference is due to the coiling of the hydrophobic block of a copolymer exiting the core of a micelle. Block copolymers which form dynamic micelles are some of the tri-block poloxamers under the right conditions. Kinetically frozen micelles When block copolymer micelles do not display the characteristic relaxation processes of surfactant micelles, these are called kinetically frozen micelles. These can be achieved in two ways: when the unimers forming the micelles are not soluble in the solvent of the micelle solution, or if the core forming blocks are glassy at the temperature in which the micelles are found. Kinetically frozen micelles are formed when either of these conditions is met. A special example in which both of these conditions are valid is that of polystyrene-b-poly(ethylene oxide). This block copolymer is characterized by the high hydrophobicity of the core forming block, PS, which causes the unimers to be insoluble in water. Moreover, PS has a high glass transition temperature which is, depending on the molecular weight, higher than room temperature. Thanks to these two characteristics, a water solution of PS-PEO micelles of sufficiently high molecular weight can be considered kinetically frozen. This means that none of the relaxation processes, which would drive the micelle solution towards thermodynamic equilibrium, are possible. Pioneering work on these micelles was done by Adi Eisenberg. It was also shown how the lack of relaxation processes allowed great freedom in the possible morphologies formed. Moreover, the stability against dilution and vast range of morphologies of kinetically frozen micelles make them particularly interesting, for example, for the development of long circulating drug delivery nanoparticles. Inverse/reverse micelles In a non-polar solvent, it is the exposure of the hydrophilic head groups to the surrounding solvent that is energetically unfavourable, giving rise to a water-in-oil system. In this case, the hydrophilic groups are sequestered in the micelle core and the hydrophobic groups extend away from the center. These inverse micelles are proportionally less likely to form on increasing headgroup charge, since hydrophilic sequestration would create highly unfavorable electrostatic interactions. It is well established that for many surfactant/solvent systems a small fraction of the inverse micelles spontaneously acquire a net charge of +qe or -qe. This charging takes place through a disproportionation/comproportionation mechanism rather than a dissociation/association mechanism and the equilibrium constant for this reaction is on the order of 10−4 to 10−11, which means about every 1 in 100 to 1 in 100 000 micelles will be charged. 
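The quoted charged-micelle fractions can be related to the equilibrium constant with a small back-of-the-envelope calculation. The sketch below is an assumption-laden illustration, not from the article: it treats the disproportionation 2M <-> M+ + M- with equal fractions x of positively and negatively charged micelles, so K ~ x^2 and x ~ sqrt(K), and uses two K values from within the quoted range.

from math import sqrt

# Rough illustration: for the disproportionation equilibrium 2M <-> M+ + M-,
# with K = [M+][M-]/[M]^2 and equal charged fractions x of each sign, K ~ x^2.
for K in (1e-4, 1e-10):  # sample values from within the quoted range of K
    x = sqrt(K)
    print(f"K = {K:.0e}  ->  roughly 1 charged micelle in {round(1 / x):,}")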
Supermicelles Supermicelle is a hierarchical micelle structure (supramolecular assembly) where individual components are also micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion-like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds; within the supermicelle structure they are loosely held together by hydrogen bonds, electrostatic or solvophobic interactions. Uses When surfactants are present above the critical micelle concentration (CMC), they can act as emulsifiers that will allow a compound that is normally insoluble (in the solvent being used) to dissolve. This occurs because the insoluble species can be incorporated into the micelle core, which is itself solubilized in the bulk solvent by virtue of the head groups' favorable interactions with solvent species. The most common example of this phenomenon is detergents, which clean poorly soluble lipophilic material (such as oils and waxes) that cannot be removed by water alone. Detergents clean also by lowering the surface tension of water, making it easier to remove material from a surface. The emulsifying property of surfactants is also the basis for emulsion polymerization. Micelles may also have important roles in chemical reactions. Micellar chemistry uses the interior of micelles to harbor chemical reactions, which in some cases can make multi-step chemical synthesis more feasible. Doing so can increase reaction yield, create conditions more favorable to specific reaction products (e.g. hydrophobic molecules), and reduce required solvents, side products, and required conditions (e.g. extreme pH). Because of these benefits, Micellular chemistry is thus considered a form of green chemistry. However, micelle formation may also inhibit chemical reactions, such as when reacting molecules form micelles that shield a molecular component vulnerable to oxidation. The use of cationic micelles of cetrimonium chloride, benzethonium chloride, and cetylpyridinium chloride can accelerate chemical reactions between negatively charged compounds (such as DNA or Coenzyme A) in an aqueous environment up to 5 million times. Unlike conventional micellar catalysis, the reactions occur solely on the charged micelles' surface. Micelle formation is essential for the absorption of fat-soluble vitamins and complicated lipids within the human body. Bile salts formed in the liver and secreted by the gall bladder allow micelles of fatty acids to form. This allows the absorption of complicated lipids (e.g., lecithin) and lipid-soluble vitamins (A, D, E, and K) within the micelle by the small intestine. During the process of milk-clotting, proteases act on the soluble portion of caseins, κ-casein, thus originating an unstable micellar state that results in clot formation. Micelles can also be used for targeted drug delivery as gold nanoparticles. See also Critical micelle concentration Micellar liquid chromatography Micellar solutions Micellar solubilization Lipid bilayer Liposome Vesicle (biology) References Supramolecular chemistry Colloidal chemistry Membrane biology
Micelle
[ "Chemistry", "Materials_science" ]
2,943
[ "Colloidal chemistry", "Membrane biology", "Surface science", "Colloids", "nan", "Molecular biology", "Nanotechnology", "Supramolecular chemistry" ]
317,263
https://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey%20algorithm
The Quine–McCluskey algorithm (QMC), also known as the method of prime implicants, is a method used for minimization of Boolean functions that was developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956. As a general principle this approach had already been demonstrated by the logician Hugh McColl in 1878, was proved by Archie Blake in 1937, and was rediscovered by Edward W. Samson and Burton E. Mills in 1954 and by Raymond J. Nelson in 1955. Also in 1955, Paul W. Abrahams and John G. Nordahl as well as Albert A. Mullin and Wayne G. Kellner proposed a decimal variant of the method.

The Quine–McCluskey algorithm is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is sometimes referred to as the tabulation method. The Quine–McCluskey algorithm works as follows:

1. Finding all prime implicants of the function.
2. Using those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as other prime implicants that are necessary to cover the function.

Complexity
Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine–McCluskey algorithm also has a limited range of use since the problem it solves is NP-complete. The running time of the Quine–McCluskey algorithm grows exponentially with the number of variables. For a function of n variables the number of prime implicants can be as large as 3^n / ln(n); e.g. for 32 variables there may be over 534 × 10^12 prime implicants. Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of which the Espresso heuristic logic minimizer was the de facto standard in 1995. For one natural class of functions, the precise complexity of finding all prime implicants is better understood: Milan Mossé, Harry Sha, and Li-Yang Tan discovered a near-optimal algorithm for finding all prime implicants of a formula in conjunctive normal form. Step two of the algorithm amounts to solving the set cover problem; NP-hard instances of this problem may occur in this algorithm step.

Example
Input
In this example, the input is a Boolean function in four variables, f(A,B,C,D), which evaluates to 1 on the values 4, 8, 10, 11, 12 and 15, evaluates to an unknown value on 9 and 14, and to 0 everywhere else (where these integers are interpreted in their binary form for input to f for succinctness of notation). The inputs that evaluate to 1 are called 'minterms'. We encode all of this information by writing

  f(A,B,C,D) = Σ m(4, 8, 10, 11, 12, 15) + d(9, 14)

This expression says that the output function f will be 1 for the minterms 4, 8, 10, 11, 12 and 15 (denoted by the 'm' term) and that we don't care about the output for the 9 and 14 combinations (denoted by the 'd' term). The summation symbol Σ denotes the logical sum (logical OR, or disjunction) of all the terms being summed over.

Step 1: finding prime implicants
First, we write the function as a table (where 'x' stands for don't care):
       | A | B | C | D | f
  m0   | 0 | 0 | 0 | 0 | 0
  m1   | 0 | 0 | 0 | 1 | 0
  m2   | 0 | 0 | 1 | 0 | 0
  m3   | 0 | 0 | 1 | 1 | 0
  m4   | 0 | 1 | 0 | 0 | 1
  m5   | 0 | 1 | 0 | 1 | 0
  m6   | 0 | 1 | 1 | 0 | 0
  m7   | 0 | 1 | 1 | 1 | 0
  m8   | 1 | 0 | 0 | 0 | 1
  m9   | 1 | 0 | 0 | 1 | x
  m10  | 1 | 0 | 1 | 0 | 1
  m11  | 1 | 0 | 1 | 1 | 1
  m12  | 1 | 1 | 0 | 0 | 1
  m13  | 1 | 1 | 0 | 1 | 0
  m14  | 1 | 1 | 1 | 0 | x
  m15  | 1 | 1 | 1 | 1 | 1

One can easily form the canonical sum of products expression from this table, simply by summing the minterms (leaving out don't-care terms) where the function evaluates to one:

  f = A'BC'D' + AB'C'D' + AB'CD' + AB'CD + ABC'D' + ABCD

which is not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm table. Don't-care terms are also added into this table (names in parentheses), so they can be combined with minterms:

  Number of 1s | Minterm | Binary representation
  1            | m4      | 0100
               | m8      | 1000
  2            | (m9)    | 1001
               | m10     | 1010
               | m12     | 1100
  3            | m11     | 1011
               | (m14)   | 1110
  4            | m15     | 1111

At this point, one can start combining minterms with other minterms in adjacent groups. If two terms differ by only a single digit, that digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any more are marked with an asterisk (*). For instance 1000 and 1001 can be combined to give 100-, indicating that both minterms imply the first digit is 1 and the next two are 0.

  Number of 1s | Minterm | 0-cube | Size 2 implicants
  1            | m4      | 0100   | m(4,12) = -100
               | m8      | 1000   | m(8,9) = 100-
               |         |        | m(8,10) = 10-0
               |         |        | m(8,12) = 1-00
  2            | m9      | 1001   | m(9,11) = 10-1
               | m10     | 1010   | m(10,11) = 101-
               |         |        | m(10,14) = 1-10
               | m12     | 1100   | m(12,14) = 11-0
  3            | m11     | 1011   | m(11,15) = 1-11
               | m14     | 1110   | m(14,15) = 111-
  4            | m15     | 1111   |

When going from Size 2 to Size 4, treat - as a third bit value. Match up the -'s first. The terms represent products, and to combine two product terms they must have the same variables: one of the variables should be complemented in one term and uncomplemented in the other, and the remaining variables present should agree. So to match two terms, the -'s must align and all but one of the other digits must be the same. For instance, -110 and -100 can be combined to give -1-0, as can -110 and -010 to give --10, but -110 and 011- cannot, since the -'s do not align. -110 corresponds to BCD' while 011- corresponds to A'BC, and BCD' + A'BC is not equivalent to a product term.
  Number of 1s | Minterm | 0-cube | Size 2 implicants | Size 4 implicants
  1            | m4      | 0100   | m(4,12) = -100 *  |
               | m8      | 1000   | m(8,9) = 100-     | m(8,9,10,11) = 10-- *
               |         |        | m(8,10) = 10-0    | m(8,10,12,14) = 1--0 *
               |         |        | m(8,12) = 1-00    |
  2            | m9      | 1001   | m(9,11) = 10-1    |
               | m10     | 1010   | m(10,11) = 101-   | m(10,11,14,15) = 1-1- *
               |         |        | m(10,14) = 1-10   |
               | m12     | 1100   | m(12,14) = 11-0   |
  3            | m11     | 1011   | m(11,15) = 1-11   |
               | m14     | 1110   | m(14,15) = 111-   |
  4            | m15     | 1111   |                   |

Note: In this example, none of the terms in the size 4 implicants table can be combined any further. In general this process should be continued (sizes 8, 16, etc.) until no more terms can be combined.

Step 2: prime implicant chart
None of the terms can be combined any further than this, so at this point we construct an essential prime implicant table. Along the side go the prime implicants that have just been generated (these are the ones that have been marked with a "*" in the previous step), and along the top go the minterms specified earlier. The don't-care terms are not placed on top; they are omitted from this section because they are not necessary inputs.

  Prime implicant    | 4 | 8 | 10 | 11 | 12 | 15 | ⇒ | A | B | C | D
  m(4,12) *          | ✓ |   |    |    | ✓  |    | ⇒ |   | 1 | 0 | 0
  m(8,9,10,11)       |   | ✓ | ✓  | ✓  |    |    | ⇒ | 1 | 0 |   |
  m(8,10,12,14)      |   | ✓ | ✓  |    | ✓  |    | ⇒ | 1 |   |   | 0
  m(10,11,14,15) *   |   |   | ✓  | ✓  |    | ✓  | ⇒ | 1 |   | 1 |

To find the essential prime implicants, we look for columns with only one "✓". If a column has only one "✓", this means that the minterm can only be covered by one prime implicant. This prime implicant is essential. For example: in the first column, with minterm 4, there is only one "✓". This means that m(4,12) is essential (hence marked by an asterisk). Minterm 15 also has only one "✓", so m(10,11,14,15) is also essential. Now all columns with one "✓" are covered. The rows with prime implicants m(4,12) and m(10,11,14,15) can now be removed, together with all the columns they cover. The second prime implicant can be 'covered' by the third and fourth, and the third prime implicant can be 'covered' by the second and first, and neither is thus essential. If a prime implicant is essential then, as would be expected, it is necessary to include it in the minimized Boolean equation. In some cases, the essential prime implicants do not cover all minterms, in which case additional procedures for chart reduction can be employed. The simplest "additional procedure" is trial and error, but a more systematic way is Petrick's method. In the current example, the essential prime implicants do not handle all of the minterms, so, in this case, the essential implicants can be combined with one of the two non-essential ones to yield one equation:

  f = BC'D' + AB' + AC     or     f = BC'D' + AD' + AC

Both of those final equations are functionally equivalent to the original, verbose equation:

  f = A'BC'D' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABCD' + ABCD
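As a quick sanity check of the worked example, the short Python sketch below brute-forces all sixteen input combinations and confirms that the first minimized form, f = BC'D' + AB' + AC, agrees with the original specification on every specified input (the two don't-care inputs are skipped). This check is purely illustrative and is not part of the algorithm itself.

# Brute-force check that the minimized form agrees with the specification
# f(A,B,C,D) = Sum m(4,8,10,11,12,15) + d(9,14) on all non-don't-care inputs.

MINTERMS = {4, 8, 10, 11, 12, 15}
DONT_CARES = {9, 14}

def f_min(a: int, b: int, c: int, d: int) -> int:
    """Minimized form: f = BC'D' + AB' + AC."""
    return int((b and not c and not d) or (a and not b) or (a and c))

for value in range(16):
    if value in DONT_CARES:
        continue  # output is unconstrained here
    a, b, c, d = (value >> 3) & 1, (value >> 2) & 1, (value >> 1) & 1, value & 1
    expected = 1 if value in MINTERMS else 0
    assert f_min(a, b, c, d) == expected, f"mismatch at input {value}"

print("minimized form matches the specification on all specified inputs")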
Algorithm
Step 1: Finding the prime implicants
The pseudocode below recursively computes the prime implicants given the list of minterms of a Boolean function. It does this by trying to merge all possible minterms and filtering out minterms that have been merged, until no more merges of the minterms can be performed and hence the prime implicants of the function have been found.

// Computes the prime implicants from a list of minterms.
// Each minterm is of the form "1001", "1010", etc. and can be represented with a string.
function getPrimeImplicants(list minterms) is
    primeImplicants ← empty list
    merges ← new boolean array of length equal to the number of minterms, each set to false
    numberOfMerges ← 0
    mergedMinterm, minterm1, minterm2 ← empty strings

    for i = 0 to length(minterms) do
        for c = i + 1 to length(minterms) do
            minterm1 ← minterms[i]
            minterm2 ← minterms[c]
            // Checking that two minterms can be merged
            if CheckDashesAlign(minterm1, minterm2) && CheckMintermDifference(minterm1, minterm2) then
                mergedMinterm ← MergeMinterms(minterm1, minterm2)
                if primeImplicants Does Not Contain mergedMinterm then
                    primeImplicants.Add(mergedMinterm)
                numberOfMerges ← numberOfMerges + 1
                merges[i] ← true
                merges[c] ← true

    // Filtering all minterms that have not been merged, as they are prime implicants. Also removing duplicates.
    for j = 0 to length(minterms) do
        if merges[j] == false && primeImplicants Does Not Contain minterms[j] then
            primeImplicants.Add(minterms[j])

    // If no merges have taken place then all of the prime implicants have been found, so return;
    // otherwise keep merging the minterms.
    if numberOfMerges == 0 then
        return primeImplicants
    else
        return getPrimeImplicants(primeImplicants)

In this example the CheckDashesAlign and CheckMintermDifference functions perform the necessary checks for determining whether two minterms can be merged. The function MergeMinterms merges the minterms and adds the dashes where necessary. The utility functions below assume that each minterm is represented using a string.

function MergeMinterms(minterm1, minterm2) is
    mergedMinterm ← empty string
    for i = 0 to length(minterm1) do
        // If the bits differ then replace the bit with a dash; otherwise the bit remains in the merged minterm.
        if minterm1[i] != minterm2[i] then
            mergedMinterm ← mergedMinterm + '-'
        else
            mergedMinterm ← mergedMinterm + minterm1[i]
    return mergedMinterm

function CheckDashesAlign(minterm1, minterm2) is
    for i = 0 to length(minterm1) do
        // If one minterm has a dash in a position where the other does not, the minterms cannot be merged.
        if (minterm1[i] == '-') != (minterm2[i] == '-') then
            return false
    return true

function CheckMintermDifference(minterm1, minterm2) is
    // minterm1 and minterm2 are strings representing minterms or already-merged terms,
    // for example '01--' and '10-0'.
    m1, m2 ← integer representations of minterm1 and minterm2 with the dashes replaced by 0
    // ^ here is a bitwise XOR
    res ← m1 ^ m2
    // The terms can be merged only if they differ in exactly one bit position.
    return res != 0 && (res & (res - 1)) == 0
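A minimal, runnable Python sketch of the same step 1 procedure is given below, assuming (as in the pseudocode) that terms are strings such as "1000" or "10-0". The function and variable names are illustrative choices, not taken from any particular library, and the example call reuses the minterms and don't-cares of the worked example above.

# Minimal sketch of step 1: repeatedly merge terms that have dashes in matching
# positions and differ in exactly one bit, until no merges are possible.
# Terms are strings over {'0', '1', '-'}, e.g. "1000" or "10-0".

def dashes_align(a: str, b: str) -> bool:
    return all((x == '-') == (y == '-') for x, y in zip(a, b))

def differ_in_one_bit(a: str, b: str) -> bool:
    return sum(x != y for x, y in zip(a, b)) == 1

def merge(a: str, b: str) -> str:
    return ''.join(x if x == y else '-' for x, y in zip(a, b))

def prime_implicants(terms: list[str]) -> list[str]:
    terms = list(dict.fromkeys(terms))          # drop duplicates, keep order
    merged_away = [False] * len(terms)
    next_terms: list[str] = []
    for i in range(len(terms)):
        for j in range(i + 1, len(terms)):
            a, b = terms[i], terms[j]
            if dashes_align(a, b) and differ_in_one_bit(a, b):
                m = merge(a, b)
                if m not in next_terms:
                    next_terms.append(m)
                merged_away[i] = merged_away[j] = True
    # Terms that never merged at this level are prime implicants.
    primes = [t for t, gone in zip(terms, merged_away) if not gone]
    if not next_terms:                          # nothing merged: done
        return primes
    return primes + [p for p in prime_implicants(next_terms) if p not in primes]

if __name__ == "__main__":
    # Minterms and don't-cares of the worked example: m(4,8,10,11,12,15) + d(9,14).
    start = [format(v, '04b') for v in (4, 8, 9, 10, 11, 12, 14, 15)]
    print(prime_implicants(start))              # expect -100, 10--, 1--0, 1-1-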
Step 2: Prime implicant chart
The pseudocode below can be split into two sections:
1. Creating the prime implicant chart using the prime implicants.
2. Reading the prime implicant chart to find the essential prime implicants.

Creating the prime implicant chart
The prime implicant chart can be represented by a dictionary where each key is a prime implicant and the corresponding value is an empty string that will store a binary string once this step is complete. Each bit in the binary string is used to represent the ticks within the prime implicant chart. The prime implicant chart can be created using the following steps:
1. Iterate through each key (prime implicant) of the dictionary.
2. Replace each dash in the prime implicant with the \d character code. This creates a regular expression that can be checked against each of the minterms, looking for matches.
3. Iterate through each minterm, comparing the regular expression with the binary representation of the minterm; if there is a match, append a "1" to the corresponding string in the dictionary, otherwise append a "0".
4. Repeat for all prime implicants to create the completed prime implicant chart.

When written in pseudocode, the algorithm described above is:

function CreatePrimeImplicantChart(list primeImplicants, list minterms)
    primeImplicantChart ← new dictionary with key of type string and value of type string
    // Creating the empty chart with the prime implicants as the keys and empty strings as the values.
    for i = 0 to length(primeImplicants) do
        // Adding a new prime implicant to the chart.
        primeImplicantChart.Add(primeImplicants[i], "")
    for i = 0 to length(primeImplicantChart.Keys) do
        primeImplicant ← primeImplicantChart.Keys[i]
        // Convert the "-" to "\d", which can be used to find the row of ticks described above.
        regularExpression ← ConvertToRegularExpression(primeImplicant)
        for j = 0 to length(minterms) do
            // If there is a match between the regular expression and the minterm then append a 1, otherwise a 0.
            if regularExpression.matches(minterms[j]) then
                primeImplicantChart[primeImplicant] += "1"
            else
                primeImplicantChart[primeImplicant] += "0"
    // The prime implicant chart is complete, so return the completed chart.
    return primeImplicantChart

The utility function ConvertToRegularExpression is used to convert the prime implicant into a regular expression, to check for matches between the implicant and the minterms.

function ConvertToRegularExpression(string primeImplicant)
    regularExpression ← new string
    for i = 0 to length(primeImplicant) do
        if primeImplicant[i] == "-" then
            // Add the character class "\d".
            regularExpression += "\d"
        else
            regularExpression += primeImplicant[i]
    return regularExpression

Finding the essential prime implicants
Using the function CreatePrimeImplicantChart defined above, we can find the essential prime implicants by simply iterating column by column over the values in the dictionary; wherever a column contains a single "1", an essential prime implicant has been found.

See also
Blake canonical form
Buchberger's algorithm – analogous algorithm for algebraic geometry
Petrick's method
Qualitative comparative analysis (QCA)

References

External links
Tutorial on Quine-McCluskey and Petrick's method. For a fully worked out example visit: http://www.cs.ualberta.ca/~amaral/courses/329/webslides/Topic5-QuineMcCluskey/sld024.htm

Boolean algebra Willard Van Orman Quine
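To complement the step 2 pseudocode above, here is a small runnable Python sketch that builds the chart with regular expressions as described and then scans the columns for single ticks. The helper names and the reuse of the worked example's prime implicants are illustrative assumptions, not part of the original article.

import re

# Minimal sketch of step 2: build the prime implicant chart and pick out
# the essential prime implicants (columns covered by exactly one implicant).

def chart_row(implicant: str, minterms: list[str]) -> str:
    # "-" matches either bit value, mirroring the pseudocode's "\d" substitution.
    pattern = re.compile(implicant.replace('-', r'\d') + '$')
    return ''.join('1' if pattern.match(m) else '0' for m in minterms)

def essential_prime_implicants(implicants: list[str], minterms: list[str]) -> list[str]:
    chart = {imp: chart_row(imp, minterms) for imp in implicants}
    essentials = []
    for col in range(len(minterms)):
        covering = [imp for imp, row in chart.items() if row[col] == '1']
        if len(covering) == 1 and covering[0] not in essentials:
            essentials.append(covering[0])
    return essentials

if __name__ == "__main__":
    # Prime implicants and (non-don't-care) minterms from the worked example.
    implicants = ['-100', '10--', '1--0', '1-1-']
    minterms = [format(v, '04b') for v in (4, 8, 10, 11, 12, 15)]
    print(essential_prime_implicants(implicants, minterms))  # expect ['-100', '1-1-']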
Quine–McCluskey algorithm
[ "Mathematics" ]
4,909
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
317,304
https://en.wikipedia.org/wiki/National%20Space%20Society
The National Space Society (NSS) is an American international nonprofit 501(c)(3) educational and scientific organization specializing in space advocacy. It is a member of the Independent Charities of America and an annual participant in the Combined Federal Campaign. The society's vision is: "People living and working in thriving communities beyond the Earth, and the use of the vast resources of space for the dramatic betterment of humanity." The society supports human spaceflight and robotic spaceflight, by both public (e.g., NASA, Russian Federal Space Agency and Japan Aerospace Exploration Agency) and private sector (e.g., SpaceX, Blue Origin, Virgin Galactic, etc.) organizations. The major goals of the National Space Society are: Defending Earth: Protecting humanity from dangerous space objects (asteroid impact avoidance). Clean Energy from Space: Enabling everyone to benefit from space solar power. Developing Space: Making the vast resources of space available to all. Space Settlement: Moving civilization into space and making us an interplanetary species. History The society was established in the United States on March 28, 1987, by the merger of the National Space Institute, founded in 1974 by Wernher von Braun, and the L5 Society, founded in 1975 based on the concepts of Gerard K. O'Neill. The society has an elected volunteer Board of Directors and a Board of Governors. The Board of Directors provides day-to-day operational oversight for the organization, and the Board of Governors provide strategic oversight and advisory to the Directors in the form of recommendations and guidance with respect to the broad strategies, overall policies, objectives, and goals of the Society. The Chairman of the Board of Governors is Karlton Johnson, USAF-Retired. In this capacity, he provides overall senior executive leadership to enhance the effectiveness and performance of the Board of Governors in support of the Society's goals, imparts advice and guidance to the Board of Directors to enhance its conduct of business operations, and serves as the primary spokesperson for the Board of Governors. The Chairman of the Board of Directors is Kirby Ikin. Karlton Johnson is currently the organization's Chief Executive Officer. Serving the space community for nearly 50 years in its various forms, the National Space Society has remained a conduit for education, substantive dialogue, and impact player in the commercial and private space sector communities. The organization garnered the "Five-Star Best in America" award by the Independent Charities of America organization in 2005. In 2014, the National Space Society launched the Enterprise In Space program in order to ignite interest in space and science, technology, engineering, art and math (STEAM) education. In 2023, the National Space Society elected Isaac Arthur as President for a two-year term. Ad Astra The Society publishes a magazine , which appears quarterly in print and electronic form. International Space Development Conference The society hosts an annual International Space Development Conference (ISDC) held in major cities throughout the United States, often during or close to the Memorial Day weekend. NSS Chapters network As listed in each quarterly issue of , a large number of NSS chapters exist around the world. The chapters may serve a local area such as a school, city or town, or have a topical or special interest focus, such as a rocketry or astronomy club, or educational/community outreach program. 
Chapters are the peripheral organs of the society by organizing events, communicating with the public on the merits and benefits of space exploration, and working to educate political leaders. National Space Society of Australia A strong contingent of chapters is located in Australia. Prior to the NSI-L5 merger, the L5 Society had been developing chapters around the world, and in Australia, three chapters had been established. The 'Southern Cross L5 Society' was formed in 1979, with groups in Sydney, Adelaide (in 1984) and Brisbane (in 1986). It was decided in late 1989 to create the National Space Society of Australia (NSSA) which could act as an umbrella organization Similar efforts have taken hold in Brazil, Canada, and Mexico, as well as European countries that have a strong aerospace presence. These include France, Germany, and the Netherlands. Awards The society administers a number of awards. These are typically presented during the annual International Space Development Conference that NSS hosts. These awards are in recognition of individual volunteer effort, awards for NSS chapter work, the "Space Pioneer" award, and two significant awards which are presented in alternate years. Robert A. Heinlein Memorial Award The Robert A. Heinlein Memorial Award is given in even-numbered years (2004, 2006, etc.) to "honor those individuals who have made significant, lifetime contributions to the creation of a free spacefaring civilization." Heinlein Award Winners: 2024 – William Shatner 2022 – Lori Garver 2018 – Freeman Dyson 2016 – Jerry Pournelle 2014 – Elon Musk 2012 – Stephen Hawking 2010 – Peter Diamandis 2008 – Burt Rutan 2006 – Brigadier General Charles E. "Chuck" Yeager 2004 – James Lovell 2002 – Robert Zubrin 2000 – Neil Armstrong 1998 – Carl Sagan 1996 – Buzz Aldrin 1994 – Robert H. Goddard 1992 – Gene Roddenberry 1990 – Wernher von Braun 1988 – Arthur C. Clarke 1986 – Gerard K. O'Neill NSS Von Braun Award The NSS Von Braun Award is given in odd-numbered years (1993, 1995, etc.) "to recognize excellence in management of and leadership for a space-related project where the project is significant and successful and the manager has the loyalty of a strong team that he or she has created." Awardees include: Von Braun Award Winners: 2023 – James Webb Space Telescope Team 2021 – Gwynn Shotwell 2019 – Tory Bruno 2017 – Prof. Johann-Dietrich Wörner 2015 – Mars Curiosity Rover project Team 2013 – Dr. A.P.J. Abdul Kalam 2011 – JAXA Hayabusa Team 2009 – Elon Musk 2007 – Steven W. Squyres 2005 – Burt Rutan 2001 – Donna Shirley 1999 – Robert C. Seamans, Jr. 1997 – George Mueller 1995 – Max Hunter 1993 – Dr. Ernst Stuhlinger Space Pioneer Awards Space Pioneer Awards or NSS Space Pioneer Awards are the annual awards given by National Space Society, an independent non-profit educational membership organisation, to individuals and teams who have opened the space frontier. Space Pioneer Award Winners: 2024 — Rod Pyle, Brian McManus, José Hernández 2023 — Jared Isaacman, Dr. Pascal Lee, Dr. David Livingston 2022 — Peter Beck, Rocket Lab, Kathryn Lueders, Ingenuity Mars Helocopter Team 2021 — Dr. Robert D. Braun, Robert Manning 2020 - Isaac Arthur, Steve Jurvetson, Dr. Peggy Whitson, Mission Juno Team, Dr. Scott J. Bolton, Dr. Phil Plait Other scholarships and award activities Other scholarships and award activities NSS provides or assists with include the following awards: The NSS-ISU scholarship, worth $10,000, to the International Space University was offered from 2005–2008. 
The 2005 recipient was Robert Guinness of St. Louis. The Gerard K. O'Neill Space Settlement Contest, an annual competition for students in grades 6 to 12 to design and present a permanent space settlement in the form of a research paper, essay, or artwork; Affiliations The National Space Society is a founding executive member of the Alliance for Space Development. See also L5 Society Space advocacy Space colonization Space exploration Vision for Space Exploration Space Kingdom of Asgardia References "National Space Society 'blitzes' Congress on NASA budget" Space.com – Mar. 5, 2007 "National Space Society to Host 26th Annual Conference in Dallas, Convening Pioneers from Government and Private Space Programs" SpaceRef.com – Feb. 21, 2007 GuideStar – National Space Society Information on NSS listed in GuideStar, a national database of nonprofit organizations External links Ad Astra ("To the Stars') The magazine of the National Space Society Island One Society, ISDC conferences archive Space organizations Space advocacy organizations International scientific organizations Non-profit organizations based in Washington, D.C. Scientific societies based in the United States 501(c)(3) organizations 1987 in science Organizations established in 1987 1987 establishments in Washington, D.C.
National Space Society
[ "Astronomy" ]
1,706
[ "Space advocacy organizations", "Astronomy organizations", "Space organizations" ]
317,311
https://en.wikipedia.org/wiki/El%20Ni%C3%B1o%E2%80%93Southern%20Oscillation
El Niño–Southern Oscillation (ENSO) is a global climate phenomenon that emerges from variations in winds and sea surface temperatures over the tropical Pacific Ocean. Those variations have an irregular pattern but do have some semblance of cycles. The occurrence of ENSO is not predictable. It affects the climate of much of the tropics and subtropics, and has links (teleconnections) to higher-latitude regions of the world. The warming phase of the sea surface temperature is known as "El Niño" and the cooling phase as "La Niña". The Southern Oscillation is the accompanying atmospheric oscillation, which is coupled with the sea temperature change. El Niño is associated with higher than normal air sea level pressure over Indonesia, Australia and across the Indian Ocean to the Atlantic. La Niña has roughly the reverse pattern: high pressure over the central and eastern Pacific and lower pressure through much of the rest of the tropics and subtropics. The two phenomena last a year or so each and typically occur every two to seven years with varying intensity, with neutral periods of lower intensity interspersed. El Niño events can be more intense but La Niña events may repeat and last longer. A key mechanism of ENSO is the Bjerknes feedback (named after Jacob Bjerknes in 1969) in which the atmospheric changes alter the sea temperatures that in turn alter the atmospheric winds in a positive feedback. Weaker easterly trade winds result in a surge of warm surface waters to the east and reduced ocean upwelling on the equator. In turn, this leads to warmer sea surface temperatures (called El Niño), a weaker Walker circulation (an east-west overturning circulation in the atmosphere) and even weaker trade winds. Ultimately the warm waters in the western tropical Pacific are depleted enough so that conditions return to normal. The exact mechanisms that cause the oscillation are unclear and are being studied. Each country that monitors the ENSO has a different threshold for what constitutes an El Niño or La Niña event, which is tailored to their specific interests. El Niño and La Niña affect the global climate and disrupt normal weather patterns, which as a result can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately 1 year in length) spikes in global average surface temperature while La Niña events cause short term surface cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on timescales of around ten years. The countries most affected by ENSO are developing countries that are bordering the Pacific Ocean and are dependent on agriculture and fishing. In climate change science, ENSO is known as one of the internal climate variability phenomena. Future trends in ENSO due to climate change are uncertain, although climate change exacerbates the effects of droughts and floods. The IPCC Sixth Assessment Report summarized the scientific knowledge in 2021 for the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase". The scientific consensus is also that "it is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale". Definition and terminology The El Niño–Southern Oscillation is a single climate phenomenon that periodically fluctuates between three phases: Neutral, La Niña or El Niño. 
La Niña and El Niño are opposite phases in the oscillation which are deemed to occur when specific ocean and atmospheric conditions are reached or exceeded. An early recorded mention of the term "El Niño" ("The Boy" in Spanish) to refer to climate occurred in 1892, when Captain Camilo Carrillo told the geographical society congress in Lima that Peruvian sailors named the warm south-flowing current "El Niño" because it was most noticeable around Christmas. Although pre-Columbian societies were certainly aware of the phenomenon, the indigenous names for it have been lost to history. The capitalized term El Niño refers to the Christ Child, Jesus, because periodic warming in the Pacific near South America is usually noticed around Christmas. Originally, the term El Niño applied to an annual weak warm ocean current that ran southwards along the coast of Peru and Ecuador at about Christmas time. However, over time the term has evolved and now refers to the warm and negative phase of the El Niño–Southern Oscillation (ENSO). The original phrase, El Niño de Navidad, arose centuries ago, when Peruvian fishermen named the weather phenomenon after the newborn Christ. La Niña ("The Girl" in Spanish) is the colder counterpart of El Niño, as part of the broader ENSO climate pattern. In the past, it was also called an anti-El Niño and El Viejo, meaning "the old man." A negative phase exists when atmospheric pressure over Indonesia and the west Pacific is abnormally high and pressure over the east Pacific is abnormally low, during El Niño episodes, and a positive phase is when the opposite occurs during La Niña episodes, and pressure over Indonesia is low and over the west Pacific is high. Fundamentals On average, the temperature of the ocean surface in the tropical East Pacific is roughly cooler than in the tropical West Pacific. The sea surface temperature (SST) of the West Pacific northeast of Australia averages around . SSTs in the East Pacific off the western coast of South America are closer to . Strong trade winds near the equator push water away from the East Pacific and towards the West Pacific. This water is slowly warmed by the Sun as it moves west along the equator. The ocean surface near Indonesia is typically around higher than near Peru because of the buildup of water in the West Pacific. The thermocline, or the transitional zone between the warmer waters near the ocean surface and the cooler waters of the deep ocean, is pushed downwards in the West Pacific due to this water accumulation. The total weight of a column of ocean water is almost the same in the western and east Pacific. Because the warmer waters of the upper ocean are slightly less dense than the cooler deep ocean, the thicker layer of warmer water in the western Pacific means the thermocline there must be deeper. The difference in weight must be enough to drive any deep water return flow. Consequently, the thermocline is tilted across the tropical Pacific, rising from an average depth of about in the West Pacific to a depth of about in the East Pacific. Cooler deep ocean water takes the place of the outgoing surface waters in the East Pacific, rising to the ocean surface in a process called upwelling. Along the western coast of South America, water near the ocean surface is pushed westward due to the combination of the trade winds and the Coriolis effect. This process is known as Ekman transport. Colder water from deeper in the ocean rises along the continental margin to replace the near-surface water. 
This process cools the East Pacific because the thermocline is closer to the ocean surface, leaving relatively little separation between the deeper cold water and the ocean surface. Additionally, the northward-flowing Humboldt Current carries colder water from the Southern Ocean to the tropics in the East Pacific. The combination of the Humboldt Current and upwelling maintains an area of cooler ocean waters off the coast of Peru. The West Pacific lacks a cold ocean current and has less upwelling as the trade winds are usually weaker than in the East Pacific, allowing the West Pacific to reach warmer temperatures. These warmer waters provide energy for the upward movement of air. As a result, the warm West Pacific has on average more cloudiness and rainfall than the cool East Pacific. ENSO describes a quasi-periodic change of both oceanic and atmospheric conditions over the tropical Pacific Ocean. These changes affect weather patterns across much of the Earth. The tropical Pacific is said to be in one of three states of ENSO (also called "phases") depending on the atmospheric and oceanic conditions. When the tropical Pacific roughly reflects the average conditions, the state of ENSO is said to be in the neutral phase. However, the tropical Pacific experiences occasional shifts away from these average conditions. If trade winds are weaker than average, the effect of upwelling in the East Pacific and the flow of warmer ocean surface waters towards the West Pacific lessen. This results in a cooler West Pacific and a warmer East Pacific, leading to a shift of cloudiness and rainfall towards the East Pacific. This situation is called El Niño. The opposite occurs if trade winds are stronger than average, leading to a warmer West Pacific and a cooler East Pacific. This situation is called La Niña and is associated with increased cloudiness and rainfall over the West Pacific. Bjerknes feedback The close relationship between ocean temperatures and the strength of the trade winds was first identified by Jacob Bjerknes in 1969. Bjerknes also hypothesized that ENSO was a positive feedback system where the associated changes in one component of the climate system (the ocean or atmosphere) tend to reinforce changes in the other. For example, during El Niño, the reduced contrast in ocean temperatures across the Pacific results in weaker trade winds, further reinforcing the El Niño state. This process is known as Bjerknes feedback. Although these associated changes in the ocean and atmosphere often occur together, the state of the atmosphere may resemble a different ENSO phase than the state of the ocean or vice versa. Because their states are closely linked, the variations of ENSO may arise from changes in both the ocean and atmosphere and not necessarily from an initial change of exclusively one or the other. Conceptual models explaining how ENSO operates generally accept the Bjerknes feedback hypothesis. However, ENSO would perpetually remain in one phase if Bjerknes feedback were the only process occurring. Several theories have been proposed to explain how ENSO can change from one state to the next, despite the positive feedback. These explanations broadly fall under two categories. In one view, the Bjerknes feedback naturally triggers negative feedbacks that end and reverse the abnormal state of the tropical Pacific. This perspective implies that the processes that lead to El Niño and La Niña also eventually bring about their end, making ENSO a self-sustaining process. 
Other theories view the state of ENSO as being changed by irregular and external phenomena such as the Madden–Julian oscillation, tropical instability waves, and westerly wind bursts. Walker circulation The three phases of ENSO relate to the Walker circulation, which was named after Gilbert Walker who discovered the Southern Oscillation during the early twentieth century. The Walker circulation is an east-west overturning circulation in the vicinity of the equator in the Pacific. Upward air is associated with high sea temperatures, convection and rainfall, while the downward branch occurs over cooler sea surface temperatures in the east. During El Niño, as the sea surface temperatures change so does the Walker Circulation. Warming in the eastern tropical Pacific weakens or reverses the downward branch, while cooler conditions in the west lead to less rain and downward air, so the Walker Circulation first weakens and may reverse. Southern Oscillation The Southern Oscillation is the atmospheric component of ENSO. This component is an oscillation in surface air pressure between the tropical eastern and the western Pacific Ocean waters. The strength of the Southern Oscillation is measured by the Southern Oscillation Index (SOI). The SOI is computed from fluctuations in the surface air pressure difference between Tahiti (in the Pacific) and Darwin, Australia (on the Indian Ocean). El Niño episodes have negative SOI, meaning there is lower pressure over Tahiti and higher pressure in Darwin. La Niña episodes on the other hand have positive SOI, meaning there is higher pressure in Tahiti and lower in Darwin. Low atmospheric pressure tends to occur over warm water and high pressure occurs over cold water, in part because of deep convection over the warm water. El Niño episodes are defined as sustained warming of the central and eastern tropical Pacific Ocean, thus resulting in a decrease in the strength of the Pacific trade winds, and a reduction in rainfall over eastern and northern Australia. La Niña episodes are defined as sustained cooling of the central and eastern tropical Pacific Ocean, thus resulting in an increase in the strength of the Pacific trade winds, and the opposite effects in Australia when compared to El Niño. Although the Southern Oscillation Index has a long station record going back to the 1800s, its reliability is limited due to the latitudes of both Darwin and Tahiti being well south of the Equator, so that the surface air pressure at both locations is less directly related to ENSO. To overcome this effect, a new index was created, named the Equatorial Southern Oscillation Index (EQSOI). To generate this index, two new regions, centered on the Equator, were defined. The western region is located over Indonesia and the eastern one over the equatorial Pacific, close to the South American coast. However, data on EQSOI goes back only to 1949. Sea surface height (SSH) changes up or down by several centimeters in Pacific equatorial region with the ESNO: El Niño causes a positive SSH anomaly (raised sea level) because of thermal expansion while La Niña causes a negative SSH anomaly (lowered sea level) via contraction. Three phases of sea surface temperature The El Niño–Southern Oscillation is a single climate phenomenon that quasi-periodically fluctuates between three phases: Neutral, La Niña or El Niño. La Niña and El Niño are opposite phases which require certain changes to take place in both the ocean and the atmosphere before an event is declared. 
The cool phase of ENSO is La Niña, with SST in the eastern Pacific below average, and air pressure high in the eastern Pacific and low in the western Pacific. The ENSO cycle, including both El Niño and La Niña, causes global changes in temperature and rainfall. Neutral phase If the temperature variation from climatology is within 0.5 °C (0.9 °F), ENSO conditions are described as neutral. Neutral conditions are the transition between warm and cold phases of ENSO. Sea surface temperatures (by definition), tropical precipitation, and wind patterns are near average conditions during this phase. Close to half of all years are within neutral periods. During the neutral ENSO phase, other climate anomalies/patterns such as the sign of the North Atlantic Oscillation or the Pacific–North American teleconnection pattern exert more influence. El Niño phase El Niño conditions are established when the Walker circulation weakens or reverses and the Hadley circulation strengthens, leading to the development of a band of warm ocean water in the central and east-central equatorial Pacific (approximately between the International Date Line and 120°W), including the area off the west coast of South America, as upwelling of cold water occurs less or not at all offshore. This warming causes a shift in the atmospheric circulation, leading to higher air pressure in the western Pacific and lower in the eastern Pacific, with rainfall reducing over Indonesia, India and northern Australia, while rainfall and tropical cyclone formation increases over the tropical Pacific Ocean. The low-level surface trade winds, which normally blow from east to west along the equator, either weaken or start blowing from the other direction. El Niño phases are known to happen at irregular intervals of two to seven years, and lasts nine months to two years. The average period length is five years. When this warming occurs for seven to nine months, it is classified as El Niño "conditions"; when its duration is longer, it is classified as an El Niño "episode". It is thought that there have been at least 30 El Niño events between 1900 and 2024, with the 1982–83, 1997–98 and 2014–16 events among the strongest on record. Since 2000, El Niño events have been observed in 2002–03, 2004–05, 2006–07, 2009–10, 2014–16, 2018–19, and 2023–24. Major ENSO events were recorded in the years 1790–93, 1828, 1876–78, 1891, 1925–26, 1972–73, 1982–83, 1997–98, 2014–16, and 2023–24. During strong El Niño episodes, a secondary peak in sea surface temperature across the far eastern equatorial Pacific Ocean sometimes follows the initial peak. La Niña phase An especially strong Walker circulation causes La Niña, which is considered to be the cold oceanic and positive atmospheric phase of the broader El Niño–Southern Oscillation (ENSO) weather phenomenon, as well as the opposite of weather pattern, where sea surface temperature across the eastern equatorial part of the central Pacific Ocean will be lower than normal by 3–5 °C (5.4–9 °F). The phenomenon occurs as strong winds blow warm water at the ocean's surface away from South America, across the Pacific Ocean towards Indonesia. As this warm water moves west, cold water from the deep sea rises to the surface near South America. The movement of so much heat across a quarter of the planet, and particularly in the form of temperature at the ocean surface, can have a significant effect on weather across the entire planet. 
Tropical instability waves visible on sea surface temperature maps, showing a tongue of colder water, are often present during neutral or La Niña conditions. La Niña is a complex weather pattern that occurs every few years, often persisting for longer than five months. El Niño and La Niña can be indicators of weather changes across the globe. Atlantic and Pacific hurricanes can have different characteristics due to lower or higher wind shear and cooler or warmer sea surface temperatures. Timelines of La Niña episodes between 1900 and 2023 have been compiled; note that each forecast agency has different criteria for what constitutes a La Niña event, tailored to its specific interests. La Niña events have been observed for hundreds of years, and occurred on a regular basis during the early parts of both the 17th and 19th centuries. La Niña events have also occurred repeatedly since the start of the 20th century. Transitional phases Transitional phases at the onset or departure of El Niño or La Niña can also be important influences on global weather by affecting teleconnections. Significant episodes, known as Trans-Niño, are measured by the Trans-Niño index (TNI). Examples of affected short-term climate in North America include precipitation in the Northwest US and intense tornado activity in the contiguous US. Variations ENSO Modoki The first ENSO pattern to be recognised, called Eastern Pacific (EP) ENSO to distinguish it from others, involves temperature anomalies in the eastern Pacific. However, in the 1990s and 2000s, variations of ENSO conditions were observed, in which the usual place of the temperature anomaly (Niño 1 and 2) is not affected, but an anomaly also arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) ENSO, "dateline" ENSO (because the anomaly arises near the dateline), or ENSO "Modoki" (Modoki is Japanese for "similar, but different"). There are variations of ENSO additional to the EP and CP types, and some scientists argue that ENSO exists as a continuum, often with hybrid types. The effects of the CP ENSO are different from those of the EP ENSO. El Niño Modoki is associated with Atlantic hurricanes making landfall more frequently. La Niña Modoki leads to a rainfall increase over northwestern Australia and the northern Murray–Darling basin, rather than over the eastern portion of the country as in a conventional EP La Niña. Also, La Niña Modoki increases the frequency of cyclonic storms over the Bay of Bengal, but decreases the occurrence of severe storms in the Indian Ocean overall. The first recorded El Niño that originated in the central Pacific and moved toward the east was in 1986. Recent Central Pacific El Niños happened in 1986–87, 1991–92, 1994–95, 2002–03, 2004–05 and 2009–10. Furthermore, there were "Modoki" events in 1957–59, 1963–64, 1965–66, 1968–70, 1977–78 and 1979–80. Some sources say that the El Niños of 2006–07 and 2014–16 were also Central Pacific El Niños. Recent years when La Niña Modoki events occurred include 1973–1974, 1975–1976, 1983–1984, 1988–1989, 1998–1999, 2000–2001, 2008–2009, 2010–2011, and 2016–2017. The recent discovery of ENSO Modoki has some scientists believing it to be linked to global warming. However, comprehensive satellite data go back only to 1979. More research must be done to find the correlation and study past El Niño episodes. More generally, there is no scientific consensus on how/if climate change might affect ENSO.
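A common way to quantify the Central Pacific signal described above is the El Niño Modoki Index (EMI), usually attributed to Ashok and colleagues, which weights a central Pacific anomaly box against eastern and western boxes. The sketch below assumes area-averaged SST anomalies for those boxes have already been computed; the box boundaries in the comments are approximate, and the function name is ours.

```python
def el_nino_modoki_index(ssta_central, ssta_east, ssta_west):
    """El Niño Modoki Index in its commonly cited form:
        EMI = [SSTA]_central - 0.5*[SSTA]_east - 0.5*[SSTA]_west
    Inputs are area-averaged SST anomalies (deg C); approximate boxes:
        central ~ 165E-140W, 10S-10N
        east    ~ 110W-70W,  15S-5N
        west    ~ 125E-145E, 10S-20N
    (box limits quoted from memory; check against the original definition).
    """
    return ssta_central - 0.5 * ssta_east - 0.5 * ssta_west

# A warm central Pacific flanked by near-normal east and west gives a positive EMI,
# the "similar, but different" pattern described above.
print(el_nino_modoki_index(1.0, 0.1, 0.0))   # 0.95
```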
There is also a scientific debate on the very existence of this "new" ENSO. A number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, arguing either that the reliable record is too short to detect such a distinction, that no distinction or trend is found using other statistical approaches, or that other types should be distinguished, such as standard and extreme ENSO. Likewise, reflecting the asymmetric nature of the warm and cold phases of ENSO, some studies could not identify similar variations for La Niña, either in observations or in climate models, but other sources have identified La Niña variations with cooler waters in the central Pacific and average or warmer water temperatures in both the eastern and western Pacific, with eastern Pacific Ocean currents flowing in the opposite direction compared to those in traditional La Niñas. ENSO Costero Coined by Peru's ENFEN committee, ENSO Costero, or ENSO Oriental, is the name given to the phenomenon in which the sea-surface temperature anomalies are mostly focused on the South American coastline, especially off Peru and Ecuador. Studies point to many factors that can lead to its occurrence, sometimes accompanying, or being accompanied by, a larger EP ENSO occurrence, or even displaying conditions opposite to those observed in the other Niño regions when accompanied by Modoki variations. ENSO Costero events usually present more localized effects, with warm phases leading to increased rainfall over the coast of Ecuador, northern Peru and the Amazon rainforest, and increased temperatures over the northern Chilean coast, and cold phases leading to droughts on the Peruvian coast, and increased rainfall and decreased temperatures in its mountainous and jungle regions. Because they don't influence the global climate as much as the other types, these events present fewer and weaker correlations to other significant ENSO features, neither always being triggered by Kelvin waves, nor always being accompanied by proportional Southern Oscillation responses. According to the Coastal Niño Index (ICEN), strong El Niño Costero events include 1957, 1982–83, 1997–98 and 2015–16, and La Niña Costera ones include 1950, 1954–56, 1962, 1964, 1966, 1967–68, 1970–71, 1975–76 and 2013. Monitoring and declaration of conditions Currently, each country has a different threshold for what constitutes an El Niño event, which is tailored to its specific interests, for example: In the United States, an El Niño is declared when the Climate Prediction Center, which monitors the sea surface temperatures in the Niño 3.4 region and the tropical Pacific, forecasts that the sea surface temperature will be 0.5 °C (0.9 °F) or more above average for the next several seasons. The Niño 3.4 region stretches from the 120th to the 170th meridian west, astride the equator, five degrees of latitude on either side. It lies to the southeast of Hawaii. The most recent three-month average for the area is computed, and if the region is more than 0.5 °C (0.9 °F) above (or below) normal for that period, then an El Niño (or La Niña) is considered in progress. The Australian Bureau of Meteorology looks at the trade winds, Southern Oscillation Index, weather models and sea surface temperatures in the Niño 3 and 3.4 regions, before declaring an ENSO event.
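The United States criterion just described, a three-month average of Niño 3.4 anomalies compared against a ±0.5 °C threshold, can be sketched as follows; the function names and the example anomaly values are illustrative rather than any agency's actual code.

```python
def three_month_mean(monthly_anomalies):
    """Mean of the most recent three monthly Niño 3.4 anomalies (deg C)."""
    return sum(monthly_anomalies[-3:]) / 3.0

def enso_state(monthly_anomalies, threshold=0.5):
    """Label the state using the 0.5 deg C figure quoted in the text."""
    avg = three_month_mean(monthly_anomalies)
    if avg > threshold:
        return "El Nino conditions"
    if avg < -threshold:
        return "La Nina conditions"
    return "ENSO-neutral"

# Illustrative monthly anomaly series (deg C), most recent value last:
print(enso_state([0.2, 0.4, 0.7, 0.9]))   # -> "El Nino conditions"
```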
The Japan Meteorological Agency declares that an ENSO event has started when the average five-month sea surface temperature deviation for the Niño 3 region exceeds the threshold for six consecutive months or longer. The Peruvian government declares that an ENSO Costero is under way if the sea surface temperature deviation in the Niño 1+2 regions equals or exceeds the threshold for at least three months. The United Kingdom's Met Office also uses a several-month period to determine ENSO state. When this warming or cooling occurs for only seven to nine months, it is classified as El Niño/La Niña "conditions"; when it occurs for more than that period, it is classified as El Niño/La Niña "episodes". Effects of ENSO on global climate In climate change science, ENSO is known as one of the internal climate variability phenomena. The other two main ones are the Pacific decadal oscillation and the Atlantic multidecadal oscillation. La Niña impacts the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. El Niño events cause short-term (approximately 1 year in length) spikes in global average surface temperature while La Niña events cause short-term cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on decadal timescales. Climate change There is no sign that there are actual changes in the ENSO physical phenomenon due to climate change. Climate models do not simulate ENSO well enough to make reliable predictions. Future trends in ENSO are uncertain as different models make different predictions. It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of the global warming, and then (e.g., after the lower layers of the ocean get warmer, as well), El Niño will become weaker. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. The consequences of ENSO in terms of the temperature anomalies and precipitation and weather extremes around the world are clearly increasing and associated with climate change. For example, recent scholarship (since about 2019) has found that climate change is increasing the frequency of extreme El Niño events. Previously there was no consensus on whether climate change will have any influence on the strength or duration of El Niño events, as research alternately supported El Niño events becoming stronger and weaker, longer and shorter. Over the last several decades, the number of El Niño events increased, and the number of La Niña events decreased, although observation of ENSO for much longer is needed to detect robust changes. Studies of historical data show the recent El Niño variation is most likely linked to global warming. For example, in some results, even after subtracting the positive influence of decadal variation that may be present in the ENSO trend, the amplitude of ENSO variability in the observed data still increases by as much as 60% over the last 50 years. A study published in 2023 by CSIRO researchers found that climate change may have doubled the likelihood of strong El Niño events and increased ninefold the likelihood of strong La Niña events. The study reported a consensus between different models and experiments.
The IPCC Sixth Assessment Report summarized the state of research in 2021 into the future of ENSO as follows: "In the long term, it is very likely that the precipitation variance related to El Niño–Southern Oscillation will increase", "It is very likely that rainfall variability related to changes in the strength and spatial extent of ENSO teleconnections will lead to significant changes at regional scale", and "There is medium confidence that both ENSO amplitude and the frequency of high-magnitude events since 1950 are higher than over the period from 1850 and possibly as far back as 1400". Investigations regarding tipping points ENSO is considered to be a potential tipping element in Earth's climate. Global warming can strengthen the ENSO teleconnection and resulting extreme weather events. For example, an increase in the frequency and magnitude of El Niño events has triggered warmer than usual temperatures over the Indian Ocean, by modulating the Walker circulation. This has resulted in a rapid warming of the Indian Ocean, and consequently a weakening of the Asian Monsoon. Effects of ENSO on weather patterns El Niño affects the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. Tropical cyclones Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. Areas west of Japan and Korea tend to experience many fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which would favor the Japanese archipelago. Based on modeled and observed accumulated cyclone energy (ACE), El Niño years usually result in less active hurricane seasons in the Atlantic Ocean, but instead favor a shift of tropical cyclone activity to the Pacific Ocean, whereas La Niña years favor above-average hurricane development in the Atlantic and less activity in the Pacific basin. Over the Atlantic Ocean, vertical wind shear is increased because the westerly winds become stronger, which inhibits tropical cyclone genesis and intensification. The atmosphere over the Atlantic Ocean can also be drier and more stable during El Niño events, which can inhibit tropical cyclone genesis and intensification. Within the Eastern Pacific basin, El Niño events contribute to decreased easterly vertical wind shear and favor above-normal hurricane activity. However, the impacts of the ENSO state in this region can vary and are strongly influenced by background climate patterns. The Western Pacific basin experiences a change in the location where tropical cyclones form during El Niño events, with tropical cyclone formation shifting eastward, without a major change in how many develop each year. As a result of this change, Micronesia is more likely, and China less likely, to be affected by tropical cyclones. A change in the location where tropical cyclones form also occurs within the Southern Pacific Ocean between 135°E and 120°W, with tropical cyclones more likely to occur within the Southern Pacific basin than the Australian region. As a result of this change, tropical cyclones are 50% less likely to make landfall on Queensland, while the risk of a tropical cyclone is elevated for island nations like Niue, French Polynesia, Tonga, Tuvalu, and the Cook Islands.
Remote influence on tropical Atlantic Ocean A study of climate records has shown that El Niño events in the equatorial Pacific are generally associated with a warm tropical North Atlantic in the following spring and summer. About half of El Niño events persist sufficiently into the spring months for the Western Hemisphere Warm Pool to become unusually large in summer. Occasionally, El Niño's effect on the Atlantic Walker circulation over South America strengthens the easterly trade winds in the western equatorial Atlantic region. As a result, an unusual cooling may occur in the eastern equatorial Atlantic in spring and summer following El Niño peaks in winter. Cases of El Niño-type events in both oceans simultaneously have been linked to severe famines related to the extended failure of monsoon rains. Impacts on humans and ecosystems Economic impacts When El Niño conditions last for many months, extensive ocean warming and the reduction in easterly trade winds limits upwelling of cold nutrient-rich deep water, and its economic effect on local fishing for an international market can be serious. Developing countries that depend on their own agriculture and fishing, particularly those bordering the Pacific Ocean, are usually most affected by El Niño conditions. In this phase of the Oscillation, the pool of warm water in the Pacific near South America is often at its warmest in late December. More generally, El Niño can affect commodity prices and the macroeconomy of different countries. It can constrain the supply of rain-driven agricultural commodities; reduce agricultural output, construction, and services activities; increase food prices; and may trigger social unrest in commodity-dependent poor countries that primarily rely on imported food. A University of Cambridge Working Paper shows that while Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa face a short-lived fall in economic activity in response to an El Niño shock, other countries may actually benefit from an El Niño weather shock (either directly or indirectly through positive spillovers from major trading partners), for instance, Argentina, Canada, Mexico and the United States. Furthermore, most countries experience short-run inflationary pressures following an El Niño shock, while global energy and non-fuel commodity prices increase. The IMF estimates a significant El Niño can boost the GDP of the United States by about 0.5% (due largely to lower heating bills) and reduce the GDP of Indonesia by about 1.0%. Health and social impacts Extreme weather conditions related to the El Niño cycle correlate with changes in the incidence of epidemic diseases. For example, the El Niño cycle is associated with increased risks of some of the diseases transmitted by mosquitoes, such as malaria, dengue fever, and Rift Valley fever. Cycles of malaria in India, Venezuela, Brazil, and Colombia have now been linked to El Niño. Outbreaks of another mosquito-transmitted disease, Australian encephalitis (Murray Valley encephalitis—MVE), occur in temperate south-east Australia after heavy rainfall and flooding, which are associated with La Niña events. A severe outbreak of Rift Valley fever occurred after extreme rainfall in north-eastern Kenya and southern Somalia during the 1997–98 El Niño. ENSO conditions have also been related to Kawasaki disease incidence in Japan and the west coast of the United States, via the linkage to tropospheric winds across the north Pacific Ocean. ENSO may be linked to civil conflicts. 
Scientists at The Earth Institute of Columbia University, having analyzed data from 1950 to 2004, suggest ENSO may have had a role in 21% of all civil conflicts since 1950, with the risk of annual civil conflict doubling from 3% to 6% in countries affected by ENSO during El Niño years relative to La Niña years. Ecological consequences During the 1982–83, 1997–98 and 2015–16 ENSO events, large expanses of tropical forest experienced a prolonged dry period that resulted in widespread fires, and drastic changes in forest structure and tree species composition in Amazonian and Bornean forests. These impacts are not restricted to vegetation: declines in insect populations were observed after the extreme drought and severe fires during the 2015–16 El Niño. Declines in habitat-specialist and disturbance-sensitive bird species and in large frugivorous mammals were also observed in Amazonian burned forests, while temporary extirpation of more than 100 lowland butterfly species occurred at a burned forest site in Borneo. In seasonally dry tropical forests, which are more drought tolerant, researchers found that El Niño-induced drought increased seedling mortality. In a study published in October 2022, researchers who had monitored seasonally dry tropical forests in a national park in Chiang Mai, Thailand for seven years observed that El Niño increased seedling mortality even in these forests and may impact entire forests in the long run. Coral bleaching Following the El Niño event of 1997–98, the Pacific Marine Environmental Laboratory attributed the first large-scale coral bleaching event to the warming waters. Most critically, global mass bleaching events were recorded in 1997–98 and 2015–16, when around 75–99% losses of live coral were registered across the world. Considerable attention was also given to the collapse of Peruvian and Chilean anchovy populations that led to a severe fishery crisis following the ENSO events in 1972–73, 1982–83, 1997–98 and, more recently, in 2015–16. In particular, increased surface seawater temperatures in 1982–83 also led to the probable extinction of two hydrocoral species in Panamá, and to a massive mortality of kelp beds along 600 km of coastline in Chile, from which kelps and associated biodiversity recovered only slowly in the most affected areas, even after 20 years. All these findings underscore the role of ENSO events as a strong climatic force driving ecological changes all around the world – particularly in tropical forests and coral reefs. Impacts by region Observations of ENSO events since 1950 show that impacts associated with such events depend on the time of year. While certain events and impacts are expected to occur, it is not certain that they will happen. The impacts that generally do occur during most El Niño events include below-average rainfall over Indonesia and northern South America, and above-average rainfall in southeastern South America, eastern equatorial Africa, and the southern United States. Africa La Niña results in wetter-than-normal conditions in southern Africa from December to February, and drier-than-normal conditions over equatorial east Africa over the same period. The effects of El Niño on rainfall in southern Africa differ between the summer and winter rainfall areas. Winter rainfall areas tend to get higher rainfall than normal and summer rainfall areas tend to get less rain. The effect on the summer rainfall areas is stronger and has led to severe drought in strong El Niño events.
Sea surface temperatures off the west and south coasts of South Africa are affected by ENSO via changes in surface wind strength. During El Niño the south-easterly winds driving upwelling are weaker which results in warmer coastal waters than normal, while during La Niña the same winds are stronger and cause colder coastal waters. These effects on the winds are part of large scale influences on the tropical Atlantic and the South Atlantic High-pressure system, and changes to the pattern of westerly winds further south. There are other influences not known to be related to ENSO of similar importance. Some ENSO events do not lead to the expected changes. Antarctica Many ENSO linkages exist in the high southern latitudes around Antarctica. Specifically, El Niño conditions result in high-pressure anomalies over the Amundsen and Bellingshausen Seas, causing reduced sea ice and increased poleward heat fluxes in these sectors, as well as the Ross Sea. The Weddell Sea, conversely, tends to become colder with more sea ice during El Niño. The exact opposite heating and atmospheric pressure anomalies occur during La Niña. This pattern of variability is known as the Antarctic dipole mode, although the Antarctic response to ENSO forcing is not ubiquitous. Asia In Western Asia, during the region's November–April rainy season, there is increased precipitation in the El Niño phase and reduced precipitation in the La Niña phase on average. During El Niño years: As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it takes the rain with it, causing extensive drought in the western Pacific and rainfall in the normally dry eastern Pacific. Singapore experienced the driest February in 2010 since records began in 1869, with only 6.3 mm of rain falling in the month. The years 1968 and 2005 had the next driest Februaries, when 8.4 mm of rain fell. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat in China. In March 2008, La Niña caused a drop in sea surface temperatures over Southeast Asia by . It also caused heavy rains over the Philippines, Indonesia, and Malaysia. Australia Across most of the continent, El Niño and La Niña have more impact on climate variability than any other factor. There is a strong correlation between the strength of La Niña and rainfall: the greater the sea surface temperature and Southern Oscillation difference from normal, the larger the rainfall change. During El Niño events, the shift in rainfall away from the Western Pacific may mean that rainfall across Australia is reduced. Over the southern part of the continent, warmer than average temperatures can be recorded as weather systems are more mobile and fewer blocking areas of high pressure occur. The onset of the Indo-Australian Monsoon in tropical Australia is delayed by two to six weeks, which as a consequence means that rainfall is reduced over the northern tropics. The risk of a significant bushfire season in south-eastern Australia is higher following an El Niño event, especially when it is combined with a positive Indian Ocean Dipole event. Europe El Niño's effects on Europe are controversial, complex and difficult to analyze, as it is one of several factors that influence the weather over the continent and other factors can overwhelm the signal. 
North America La Niña causes mostly the opposite effects of El Niño: above-average precipitation across the northern Midwest, the northern Rockies, Northern California, and the Pacific Northwest's southern and eastern regions. Meanwhile, precipitation in the southwestern and southeastern states, as well as southern California, is below average. This also allows for the development of many stronger-than-average hurricanes in the Atlantic and fewer in the Pacific. ENSO is linked to rainfall over Puerto Rico. During an El Niño, snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well-below normal across the Upper Midwest and Great Lakes states. During a La Niña, snowfall is above normal across the Pacific Northwest and western Great Lakes. In Canada, La Niña will, in general, cause a cooler, snowier winter, such as the near-record-breaking amounts of snow recorded in the La Niña winter of 2007–2008 in eastern Canada. In the spring of 2022, La Niña caused above-average precipitation and below-average temperatures in the state of Oregon. April was one of the wettest months on record, and La Niña effects, while less severe, were expected to continue into the summer. Over North America, the main temperature and precipitation impacts of El Niño generally occur in the six months between October and March. In particular, the majority of Canada generally has milder than normal winters and springs, with the exception of eastern Canada where no significant impacts occur. Within the United States, the impacts generally observed during the six-month period include wetter-than-average conditions along the Gulf Coast between Texas and Florida, while drier conditions are observed in Hawaii, the Ohio Valley, Pacific Northwest and the Rocky Mountains. Study of more recent weather events over California and the southwestern United States indicate that there is a variable relationship between El Niño and above-average precipitation, as it strongly depends on the strength of the El Niño event and other factors. Though it has been historically associated with high rainfall in California, the effects of El Niño depend more strongly on the "flavor" of El Niño than its presence or absence, as only "persistent El Niño" events lead to consistently high rainfall. To the north across Alaska, La Niña events lead to drier than normal conditions, while El Niño events do not have a correlation towards dry or wet conditions. During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. During La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track. During La Niña events, the storm track shifts far enough northward to bring wetter than normal winter conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. Isthmus of Tehuantepec The synoptic condition for the Tehuantepecer, a violent mountain-gap wind in between the mountains of Mexico and Guatemala, is associated with high-pressure system forming in Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. 
Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores–Bermuda high-pressure system. Wind magnitude is greater during El Niño years than during La Niña years, due to the more frequent cold frontal incursions during El Niño winters. Tehuantepec winds can reach high speeds, and on rare occasions even higher. The wind's direction is from the north to north-northeast. It leads to a localized acceleration of the trade winds in the region, and can enhance thunderstorm activity when it interacts with the Intertropical Convergence Zone. The effects can last from a few hours to six days. Between 1942 and 1957, La Niña caused isotope changes in the plants of Baja California, which has helped scientists to study its impact. Pacific islands During an El Niño event, New Zealand tends to experience stronger or more frequent westerly winds during its summer, which leads to an elevated risk of drier than normal conditions along the east coast. There is more rain than usual though on New Zealand's West Coast, because of the barrier effect of the North Island mountain ranges and the Southern Alps. Fiji generally experiences drier than normal conditions during an El Niño, which can lead to drought becoming established over the islands. However, the main impacts on the island nation are felt about a year after the event becomes established. Within the Samoan Islands, below-average rainfall and higher than normal temperatures are recorded during El Niño events, which can lead to droughts and forest fires on the islands. Other impacts include a decrease in the sea level, the possibility of coral bleaching in the marine environment and an increased risk of a tropical cyclone affecting Samoa. In the late winter and spring during El Niño events, drier than average conditions can be expected in Hawaii. On Guam during El Niño years, dry season precipitation averages below normal, but the probability of a tropical cyclone is more than triple what is normal, so extreme short-duration rainfall events are possible. On American Samoa during El Niño events, precipitation averages about 10 percent above normal, while La Niña events are associated with precipitation averaging about 10 percent below normal. South America The effects of El Niño in South America are direct and strong, and stronger than in North America. An El Niño is associated with warm and very wet weather during the months of April–October along the coasts of northern Peru and Ecuador, causing major flooding whenever the event is strong or extreme. Because El Niño's warm pool feeds thunderstorms above, it creates increased rainfall across the east-central and eastern Pacific Ocean, including several portions of the South American west coast. The effects during the months of February, March, and April may become critical along the west coast of South America. El Niño reduces the upwelling of cold, nutrient-rich water that sustains large fish populations, which in turn sustain abundant sea birds, whose droppings support the fertilizer industry. The reduction in upwelling leads to fish kills off the shore of Peru.
The local fishing industry along the affected coastline can suffer during long-lasting El Niño events. Peruvian fisheries collapsed during the 1970s due to overfishing following the 1972 El Niño Peruvian anchoveta reduction. The fisheries had previously been the world's largest, but the collapse sent them into decline. During the 1982–83 event, jack mackerel and anchoveta populations were reduced, scallops increased in warmer water, but hake followed cooler water down the continental slope, while shrimp and sardines moved southward, so some catches decreased while others increased. Horse mackerel have increased in the region during warm events. Shifting locations and types of fish due to changing conditions create challenges for the fishing industry. Peruvian sardines have moved during El Niño events to Chilean areas. Other conditions provide further complications, such as the government of Chile creating restrictions in 1991 on the fishing areas for self-employed fishermen and industrial fleets. Southern Brazil and northern Argentina also experience wetter than normal conditions during El Niño years, but mainly during the spring and early summer. Central Chile receives a mild winter with large rainfall, and the Peruvian–Bolivian Altiplano is sometimes exposed to unusual winter snowfall events. Drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. During a time of La Niña, drought affects the coastal regions of Peru and Chile. From December to February, northern Brazil is wetter than normal. La Niña causes higher than normal rainfall in the central Andes, which in turn causes catastrophic flooding on the Llanos de Mojos of Beni Department, Bolivia. Such flooding is documented from 1853, 1865, 1872, 1873, 1886, 1895, 1896, 1907, 1921, 1928, 1929 and 1931. Galápagos Islands The Galápagos Islands are a chain of volcanic islands nearly 600 miles west of Ecuador, South America, in the eastern Pacific Ocean. These islands support a wide diversity of terrestrial and marine species. The ecosystem is based on the normal trade winds, which influence the upwelling of cold, nutrient-rich waters to the islands. During an El Niño event the trade winds weaken and sometimes blow from west to east, which causes the Equatorial current to weaken, raising surface water temperatures and decreasing nutrients in waters surrounding the Galápagos. El Niño causes a trophic cascade which impacts entire ecosystems, starting with primary producers and ending with critical animals such as sharks, penguins, and seals. The effects of El Niño can become detrimental to populations that often starve and die back during these years. Rapid evolutionary adaptations are displayed amongst animal groups during El Niño years to mitigate El Niño conditions. History In geologic timescales Evidence is also strong for El Niño events during the early Holocene epoch 10,000 years ago. Different modes of ENSO-like events have been registered in paleoclimatic archives, showing different triggering methods, feedbacks and environmental responses to the geological, atmospheric and oceanographic characteristics of the time. These paleorecords can be used to provide a qualitative basis for conservation practices. Scientists have also found chemical signatures of warmer sea surface temperatures and increased rainfall caused by El Niño in coral specimens that are around 13,000 years old.
In a paleoclimate study published in 2024, the authors suggest that El Niños had a strong influence on Earth's hothouse climate during the Permian-Triassic extinction event. The increasing intensity and duration of El Niño events were associated with active volcanism, which resulted in the dieback of vegetation, an increase in the amount of carbon dioxide in the atmosphere, a significant warming and disturbances in the circulation of air masses. During human history ENSO conditions have occurred at two- to seven-year intervals for at least the past 300 years, but most of them have been weak. El Niño may have led to the demise of the Moche and other pre-Columbian Peruvian cultures. A recent study suggests a strong El Niño effect between 1789 and 1793 caused poor crop yields in Europe, which in turn helped touch off the French Revolution. The extreme weather produced by El Niño in 1876–77 gave rise to the most deadly famines of the 19th century. The 1876 famine alone in northern China killed up to 13 million people. The phenomenon had long been of interest because of its effects on the guano industry and other enterprises that depend on biological productivity of the sea. It is recorded that as early as 1822, cartographer Joseph Lartigue, of the French frigate La Clorinde under Baron Mackau, noted the "counter-current" and its usefulness for traveling southward along the Peruvian coast. Charles Todd, in 1888, suggested droughts in India and Australia tended to occur at the same time; Norman Lockyer noted the same in 1904. An El Niño connection with flooding was reported in 1894 by Victor Eguiguren (1852–1919) and in 1895 by Federico Alfonso Pezet (1859–1929). In 1924, Gilbert Walker (for whom the Walker circulation is named) coined the term "Southern Oscillation". He and others (including Norwegian-American meteorologist Jacob Bjerknes) are generally credited with identifying the El Niño effect. The major 1982–83 El Niño led to an upsurge of interest from the scientific community. The period 1990–95 was unusual in that El Niños have rarely occurred in such rapid succession. An especially intense El Niño event in 1998 caused an estimated 16% of the world's reef systems to die. The event temporarily warmed air temperature by 1.5 °C, compared to the usual increase of 0.25 °C associated with El Niño events. Since then, mass coral bleaching has become common worldwide, with all regions having suffered "severe bleaching". Around 1525, when Francisco Pizarro made landfall in Peru, he noted rainfall in the deserts, the first written record of the impacts of El Niño. 
Related patterns Madden–Julian oscillation Link to the El Niño-Southern oscillation Pacific decadal oscillation Mechanisms Pacific Meridional Mode See also For La Niña: 2000 Mozambique flood (attributed to La Niña) 2010 Pakistan floods (attributed to La Niña) 2010–2011 Queensland floods (attributed to La Niña) 2010–2012 La Niña event 2010–2011 Southern Africa floods (attributed to La Niña) 2010–2013 Southern United States and Mexico drought (attributed to La Niña) 2011 East Africa drought (attributed to La Niña) 2020 Atlantic hurricane season (unprecedented severity fueled by La Niña) 2021 eastern Australia floods (severity fueled by La Niña) 2022 Suriname floods (attributed to La Niña) 2023 Auckland Anniversary Weekend floods (attributed to La Niña) 2020–2023 La Niña event For El Niño: 1982–83 El Niño event 1997 Pacific hurricane season (severity fueled by El Niño) 1997–98 El Niño event 2014–2016 El Niño event 2015 Pacific hurricane season (severity fueled by El Niño) 2023–2024 El Niño event References External links Provides current phase of ENSO according to the Australian interpretation. Tropical meteorology Physical oceanography Natural history of the Americas Natural history of Oceania Effects of climate change Regional climate effects Weather hazards Spanish words and phrases Climate oscillations
El Niño–Southern Oscillation
[ "Physics" ]
11,055
[ "Physical phenomena", "Applied and interdisciplinary physics", "Weather hazards", "Weather", "Physical oceanography" ]
4,068,993
https://en.wikipedia.org/wiki/Microecosystem
Microecosystems can exist in locations which are precisely defined by critical environmental factors within small or tiny spaces. Such factors may include temperature, pH, chemical milieu, nutrient supply, presence of symbionts or solid substrates, gaseous atmosphere (aerobic or anaerobic) etc. Some examples Pond microecosystems These microecosystems with limited water volume are often only of temporary duration and hence colonized by organisms which possess a drought-resistant spore stage in the lifecycle, or by organisms which do not need to live in water continuously. The ecosystem conditions applying at a typical pond edge can be quite different from those further from shore. Extremely space-limited water ecosystems can be found in, for example, the water collected in bromeliad leaf bases and the "pitchers" of Nepenthes. Animal gut microecosystems These include the buccal region (especially cavities in the gingiva), rumen, caecum etc. of mammalian herbivores or even invertebrate digestive tracts. In the case of mammalian gastrointestinal microecology, microorganisms such as protozoa and bacteria, as well as curious, incompletely defined organisms (such as certain large structurally complex Selenomonads, Quinella ovalis "Quin's Oval", Magnoovum eadii "Eadie's Oval", Oscillospira etc.) can exist in the rumen as incredibly complex, highly enriched mixed populations (see the Moir and Masson images). This type of microecosystem can adjust rapidly to changes in the nutrition or health of the host animal (usually a ruminant such as cow, sheep, goat etc.); see Hungate's "The Rumen and its Microbes" (1966). Even within a small closed system such as the rumen there may exist a range of ecological conditions: many organisms live freely in the rumen fluid whereas others require the substrate and metabolic products supplied by the stomach wall tissue with its folds and interstices. Interesting questions are also posed concerning the transfer of the strictly anaerobic organisms in the gut microflora/microfauna to the next host generation. Here, mutual licking and coprophagia certainly play important roles. Soil microecosystems A typical soil microecosystem may be restricted to less than a millimeter in its total depth range owing to steep variation in humidity and/or atmospheric gas composition. The soil grain size and physical and chemical properties of the substrate may also play important roles. Because of the predominant solid phase in these systems they are notoriously difficult to study microscopically without simultaneously disrupting the fine spatial distribution of their components. Terrestrial hot-spring microecosystems These are defined by gradients of water temperature, nutrients, dissolved gases, salt concentrations etc. Along the path of terrestrial water flow the resulting temperature gradient continuum alone may provide many different minute microecosystems, starting with thermophilic Archaea ("Archaebacteria"), followed by conventional thermophiles, cyanobacteria (blue-green algae) such as the motile filaments of Oscillatoria, protozoa such as Amoeba, rotifers, then green algae etc. Of course other factors than temperature also play important roles. Hot springs can provide classic and straightforward ecosystems for microecology studies as well as providing a haven for hitherto undescribed organisms.
Deep-sea microecosystems The best known contain rare specialized organisms, found only in the immediate vicinity (sometimes within centimeters) of underwater volcanic vents (or "smokers"). These ecosystems require extremely advanced diving and collection techniques for their scientific exploration. Closed microecosystem One that is sealed and completely independent of outside factors, except for temperature and light. A good example would be a plant contained in a sealed jar and submerged under water. No new factors would be able to enter this ecosystem. References Ecosystems Environmental science Ecology
Microecosystem
[ "Biology", "Environmental_science" ]
861
[ "Symbiosis", "Ecosystems", "Ecology", "nan" ]
4,069,108
https://en.wikipedia.org/wiki/IEC%2061508
IEC 61508 is an international standard published by the International Electrotechnical Commission (IEC) consisting of methods on how to apply, design, deploy and maintain automatic protection systems called safety-related systems. It is titled Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES). IEC 61508 is a basic functional safety standard applicable to all industries. It defines functional safety as: “part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system which depends on the correct functioning of the E/E/PE safety-related systems, other technology safety-related systems and external risk reduction facilities.” The fundamental concept is that any safety-related system must work correctly or fail in a predictable (safe) way. The standard has two fundamental principles: An engineering process called the safety life cycle is defined based on best practices in order to discover and eliminate design errors and omissions. A probabilistic failure approach to account for the safety impact of device failures. The safety life cycle has 16 phases which roughly can be divided into three groups as follows: Phases 1–5 address analysis Phases 6–13 address realisation Phases 14–16 address operation. All phases are concerned with the safety function of the system. The standard has seven parts: Parts 1–3 contain the requirements of the standard (normative) Part 4 contains definitions Parts 5–7 are guidelines and examples for development and thus informative. Central to the standard are the concepts of probabilistic risk for each safety function. The risk is a function of frequency (or likelihood) of the hazardous event and the event consequence severity. The risk is reduced to a tolerable level by applying safety functions which may consist of E/E/PES, associated mechanical devices, or other technologies. Many requirements apply to all technologies but there is strong emphasis on programmable electronics especially in Part 3. IEC 61508 has the following views on risks: Zero risk can never be reached, only probabilities can be reduced Non-tolerable risks must be reduced (ALARP) Optimal, cost effective safety is achieved when addressed in the entire safety lifecycle Specific techniques ensure that mistakes and errors are avoided across the entire life-cycle. Errors introduced anywhere from the initial concept, risk analysis, specification, design, installation, maintenance and through to disposal could undermine even the most reliable protection. IEC 61508 specifies techniques that should be used for each phase of the life-cycle. The seven parts of the first edition of IEC 61508 were published in 1998 and 2000. The second edition was published in 2010. Hazard and risk analysis The standard requires that hazard and risk assessment be carried out for bespoke systems: 'The EUC (equipment under control) risk shall be evaluated, or estimated, for each determined hazardous event'. The standard advises that 'Either qualitative or quantitative hazard and risk analysis techniques may be used' and offers guidance on a number of approaches. One of these, for the qualitative analysis of hazards, is a framework based on 6 categories of likelihood of occurrence and 4 of consequence. 
The categories of likelihood of occurrence and the consequence categories are typically combined into a risk class matrix, where: Class I: Unacceptable in any circumstance; Class II: Undesirable: tolerable only if risk reduction is impracticable or if the costs are grossly disproportionate to the improvement gained; Class III: Tolerable if the cost of risk reduction would exceed the improvement; Class IV: Acceptable as it stands, though it may need to be monitored. Safety integrity level The safety integrity level (SIL) provides a target to attain for each safety function. A risk assessment effort yields a target SIL for each safety function. For any given design, the achieved SIL is evaluated by three measures: 1. Systematic Capability (SC), which is a measure of design quality. Each device in the design has an SC rating. The SIL of the safety function is limited to the smallest SC rating of the devices used. Requirements for SC are presented in a series of tables in Part 2 and Part 3. The requirements include appropriate quality control, management processes, validation and verification techniques, failure analysis etc. so that one can reasonably justify that the final system attains the required SIL. 2. Architecture Constraints, which are minimum levels of safety redundancy presented via two alternative methods: Route 1h and Route 2h. 3. Probability of Dangerous Failure Analysis Probabilistic analysis The probability metric used in step 3 above depends on whether the functional component will be exposed to high or low demand: high demand is defined as more than once per year and low demand is defined as less than or equal to once per year (IEC-61508-4). For functions that operate continuously (continuous mode) or functions that operate frequently (high demand mode), SIL specifies an allowable frequency of dangerous failure. For functions that operate intermittently (low demand mode), SIL specifies an allowable probability that the function will fail to respond on demand. Note the difference between function and system. The system implementing the function might be in operation frequently (like an ECU for deploying an air-bag), but the function (like air-bag deployment) might be in demand intermittently. (Illustrative target bands for the two modes are sketched in code further below.) IEC 61508 certification Certification is third-party attestation that a product, process, or system meets all requirements of the certification program. Those requirements are listed in a document called the certification scheme. IEC 61508 certification programs are operated by impartial third-party organizations called certification bodies (CB). These CBs are accredited to operate following other international standards including ISO/IEC 17065 and ISO/IEC 17025. Certification bodies are accredited to perform the auditing, assessment, and testing work by an accreditation body (AB). There is often one national AB in each country. These ABs operate per the requirements of ISO/IEC 17011, a standard that contains requirements for the competence, consistency, and impartiality of accreditation bodies when accrediting conformity assessment bodies. ABs are members of the International Accreditation Forum (IAF) for work in management systems, products, services, and personnel accreditation or the International Laboratory Accreditation Cooperation (ILAC) for laboratory accreditation. A Multilateral Recognition Arrangement (MLA) between ABs will ensure global recognition of accredited CBs. IEC 61508 certification programs have been established by several global Certification Bodies.
Each has defined their own scheme based upon IEC 61508 and other functional safety standards. The scheme lists the referenced standards and specifies procedures which describes their test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program. IEC 61508 certification programs are being offered globally by several recognized CBs including exida, Intertek, SGS-TÜV Saar, TÜV Nord, TÜV Rheinland, TÜV SÜD and UL. Industry/application specific variants Automotive ISO 26262 is an adaptation of IEC 61508 for Automotive Electric/Electronic Systems. It is being widely adopted by the major car manufacturers. Before the launch of ISO 26262, the development of software for safety related automotive systems was predominantly covered by the Motor Industry Software Reliability Association (MISRA) guidelines. The MISRA project was conceived to develop guidelines for the creation of embedded software in road vehicle electronic systems. A set of guidelines for the development of vehicle based software was published in November 1994. This document provided the first automotive industry interpretation of the principles of the, then emerging, IEC 61508 standard. Today MISRA is most widely known for its guidelines on how to use the C and C++ languages. MISRA C has gone on to become the de facto standard for embedded C programming in the majority of safety-related industries, and is also used to improve software quality even where safety is not the main consideration. Rail IEC 62279 provides a specific interpretation of IEC 61508 for railway applications. It is intended to cover the development of software for railway control and protection including communications, signaling and processing systems. EN 50128 and EN 50657 are equivalent CENELEC standards of IEC 62279. Process industries The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power. IEC 61511 is a technical standard which sets out practices in the engineering of systems that ensure the safety of an industrial process through the use of instrumentation. Power plants IEC 61513 provides requirements and recommendations for the instrumentation and control for systems important to safety of nuclear power plants. It indicates the general requirements for systems that contain conventional hardwired equipment, computer-based equipment or a combination of both types of equipment. An overview list of safety norms specific for nuclear power plants is published by ISO. Machinery IEC 62061 is the machinery-specific implementation of IEC 61508. It provides requirements that are applicable to the system level design of all types of machinery safety-related electrical control systems and also for the design of non-complex subsystems or devices. Testing software Software written in accordance with IEC 61508 may need to be unit tested, depending up on the SIL it needs to achieve. The main requirement in Unit Testing is to ensure that the software is fully tested at the function level and that all possible branches and paths are taken through the software. In some higher SIL level applications, the software code coverage requirement is much tougher and an MC/DC code coverage criterion is used rather than simple branch coverage. To obtain the MC/DC (modified condition/decision coverage) coverage information, one will need a Unit Testing tool, sometimes referred to as a Software Module Testing tool. 
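Referring back to the probabilistic analysis above, the distinction between low-demand and high-demand/continuous mode is reflected in the target failure measures associated with each SIL. The sketch below encodes the widely quoted bands (average probability of dangerous failure on demand for low demand mode, average frequency of dangerous failure per hour otherwise); the bands should be checked against IEC 61508-1, and the function itself is illustrative — in a real assessment the achieved SIL is also constrained by systematic capability and architectural constraints.

```python
def sil_from_failure_measure(value, mode="low demand"):
    """Return the SIL whose target band contains the computed failure measure.

    value : PFDavg (dimensionless) in low demand mode, or the average frequency
            of dangerous failure per hour (PFH) in high demand / continuous mode.
    The numeric bands below are the commonly quoted targets; consult the
    standard for the normative values.
    """
    if mode == "low demand":
        bands = [(4, 1e-5, 1e-4), (3, 1e-4, 1e-3), (2, 1e-3, 1e-2), (1, 1e-2, 1e-1)]
    else:  # high demand or continuous mode
        bands = [(4, 1e-9, 1e-8), (3, 1e-8, 1e-7), (2, 1e-7, 1e-6), (1, 1e-6, 1e-5)]
    for sil, lower, upper in bands:
        if lower <= value < upper:
            return sil
    return None  # outside the tabulated SIL range

print(sil_from_failure_measure(3e-4, "low demand"))    # -> 3
print(sil_from_failure_measure(5e-7, "high demand"))   # -> 2
```

As an illustration of why MC/DC is stricter than simple branch coverage, consider a decision with three conditions. Showing that each condition independently affects the outcome typically needs n+1 carefully chosen test vectors; the decision function and test set below are invented for illustration only.

```python
def decision(a, b, c):
    """Example guard with three conditions (hypothetical application logic)."""
    return a and (b or c)

# An MC/DC test set for decision(): each condition is toggled in exactly one pair
# of tests while the others are held fixed, and the outcome changes each time.
tests = [
    (True,  True,  False),   # outcome True  (baseline)
    (False, True,  False),   # toggling a flips the outcome -> a shown independent
    (True,  False, False),   # toggling b (vs. baseline) flips the outcome -> b shown independent
    (True,  False, True),    # toggling c (vs. previous test) flips the outcome -> c shown independent
]
for a, b, c in tests:
    print(a, b, c, "->", decision(a, b, c))
```

Simple branch coverage of the same decision would be satisfied by just two of these vectors, which is why higher-SIL projects require the stronger criterion.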
See also Functional safety Safety standards FMEDA Spurious trip level Time-triggered system (A software architecture used to achieve IEC 61508 compliance) Software quality References Further reading Related safety standards ISO 26262 (is an adaption of IEC 61508 with minor differences) IEC 60730 (Household) DO-178C (Aerospace) Textbooks W. Goble, "Control Systems Safety Evaluation and Reliability" (3rd Edition , Hardcover, 458 pages). I. van Beurden, W. Goble, "Safety Instrumented System Design-Techniques and Design Verification" (1st Edition , 430 pages). M.J.M. Houtermans, "SIL and Functional Safety in a Nutshell" (Risknowlogy Best Practices, 1st Edition, eBook in PDF, ePub, and iBook format, 40 Pages) SIL and Functional Safety in a Nutshell - eBook introducing SIL and Functional Safety M. Medoff, R. Faller, "Functional Safety - An IEC 61508 SIL 3 Compliant Development Process" (3rd Edition, Hardcover, 371 pages, www.exida.com) C. O'Brien, L. Stewart, L. Bredemeyer, "Final Elements in Safety Instrumented Systems - IEC 61511 Compliant Systems and IEC 61508 Compliant Products" (1st Edition, 2018, , Hardcover, 305 pages, www.exida.com) Münch, Jürgen; Armbrust, Ove; Soto, Martín; Kowalczyk, Martin. “Software Process Definition and Management“, Springer, 2012. M.Punch, "Functional Safety for the Mining Industry – An Integrated Approach Using AS(IEC)61508, AS(IEC) 62061 and AS4024.1." (1st Edition, , in A4 paperback, 150 pages). D.Smith, K Simpson, "Safety Critical Systems Handbook: A Straightforward Guide to Functional Safety, IEC 61508 (2010 Edition) And Related Standards, Including Process IEC 61511 and Machinery IEC 62061 and ISO 13849" (3rd Edition , Hardcover, 288 Pages). External links IEC 61508-1:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems- Parts 1 IEC Functional Safety zone 61508 Association A cross-industry group of organizations with an interest in achieving a dependable and cost-effective method for demonstrating compliance with IEC 61508 and related standards. Electrical standards 61508 Safety engineering
IEC 61508
[ "Physics", "Technology", "Engineering" ]
2,506
[ "Systems engineering", "Electrical standards", "Electrical systems", "Safety engineering", "Computer standards", "IEC standards", "Physical systems" ]
4,069,165
https://en.wikipedia.org/wiki/Knight%20engine
The Knight engine is an internal combustion engine, designed by American Charles Yale Knight (1868–1940), that uses sleeve valves instead of the more common poppet valve construction. These engines were manufactured in large quantities in the USA; Knight's design was made a commercial success by development in England, while the French developed the Knight engine more intensively than any other nation. Ultimately, Knight patents were issued in at least eight countries, and engines were built under them by about thirty firms. History At first Knight tried making the entire engine cylinder reciprocate to open and close the exhaust and inlet ports. Though he patented this arrangement, he soon abandoned it in favor of a double sliding sleeve principle. Backed by Chicago entrepreneur L.B. Kilbourne, an experimental engine was built in Oak Park, Illinois in 1903. Research and development continued until 1905, when a prototype passed stringent tests in Elyria, Ohio. Having developed a practicable engine (at a cost of around $150,000), Knight and Kilbourne showed a complete "Silent Knight" touring car at the 1906 Chicago Auto Show. Fitted with a 4-cylinder engine, the car was priced at $3,500. Knight engine Knight's design has two cast-iron sleeves per cylinder, bronze in some models, one sliding inside the other, with the piston inside the inner sleeve. The sleeves are operated by small connecting rods actuated by an eccentric shaft and have ports cut out at their upper ends. The cylinder head (known as the "junk head") is like a fixed, inverted piston with its own set of rings projecting down inside the inner sleeve. The heads are individually detachable for each cylinder. The design is remarkably quiet and the sleeve valves need little attention. It was, however, more expensive to manufacture due to the precision grinding required on the sleeves' surfaces. Continental declared that single sleeve-valve engines were cheaper and easier to manufacture than poppet-valve motors, even though they used more oil at high speeds and were harder to start in cold weather. The engine's design allows a more central location for the spark plugs to provide a better flame path, large ports for improved gas flow and hemispherical combustion chambers, which in turn allow increased power. Additionally, the sleeve valves required very much less maintenance than poppet valves of the era, which needed adjustment, grinding and even replacement after only a few thousand miles. However, the adiabatic and isothermal characteristics accompanying the increased power afforded by the large (relative to contemporary poppet valve designs) port areas in the sleeves proved the double-sleeve valve concept's Achilles heel. Much of the advantage to be gained from increased volumetric efficiency could not be realized due to the inability to transfer the resultant heat in a sufficiently steep gradient to avoid excessive internal temperatures. Harry Ricardo pointed out, however, that for the single sleeve-valve (Burt-McCollum) type, as long as the oil film between sleeve and cylinder wall is kept thin enough, the sleeves are effectively transparent to heat. As a consequence of these thermal conditions, and contrary to conventional practice, the induction port area was reduced to substantially less than that of the exhaust port.
Later engines having thinner, steel and white-metal coated sleeves possess improved levels of heat dissipation, but thermal transfer problems remain characteristic of the design, thus limiting development of the potential inherent in the double-sleeve valve engine. Improvements in design and materials of the more usual poppet valve engine eliminated most of the advantages initially held by the sleeve-valved variant, so that by the early 1930s manufacture of the Silent Knight had ceased, with only a couple of French automobile makers continuing to the end of the 1930's. Knight and Kilbourne had hoped to interest US automobile manufacturers in the engine so that they could grant licenses for its manufacture, but initially there were no takers. Pierce-Arrow of Buffalo, New York tested the engine against one of their own and found that it was more powerful at speeds above and would also go faster. However, they dismissed it as unsuitable for their range of cars because they believed that anything over was unsafe. They also considered the oil consumption (about 2 quarts/litres per ) excessive. Knight also received some bad publicity at the same time when a prototype car was entered in the 1906 Glidden Tour, only to drop out on the first day due to mechanical failure. Daimler-Knight Having virtually ignored two written approaches by engineer Edward Manville, a director of Daimler, Knight changed his mind and decided to try to interest English manufacturers in his engine. In 1907 Knight went with one of his cars to London where he managed to see fellow-American Percy Martin, also a director of Daimler. Daimler's engineers tested the engine and the results were sufficiently encouraging for Daimler to set up a secret team to fully develop Knight's concept. On the project's completion, though, it was no longer "Wholly Knight". Knight obtained a British patent for his modified engine on June 6, 1908. In September Daimler announced that "Silent Knight" engines would be installed in some of its 1909 models. To combat criticism from its competitors, Daimler had the RAC (Royal Automobile Club) carry out their own independent tests on the Daimler-Knight. RAC engineers took two Knight engines and ran them under full load for 132 hours nonstop. The same engines were then installed in a touring car and driven for on the Brooklands race track, after which they were removed and again run on the bench for 5 hours. RAC engineers reported that, when the engines were dismantled, there was no perceptible wear, the cylinders and pistons were clean, and the valves showed no signs of wear either. The RAC was so impressed that it awarded Daimler the 1909 Dewar Trophy. The RAC reports caused Daimler's share price to rise, £0.85 to £18.75, and the company's competitors to fear that the poppet-valve engine would soon be obsolete. Walter Owen Bentley, the founder of Bentley Motors, was of the opinion that the Daimler-Knight engine performed as well as the comparable Rolls-Royce power plant. The Knight engine (improved significantly by Daimler's engineers) attracted the attention of the European automobile manufacturers. Daimler bought rights from Knight "for England and the colonies" and shared ownership of the European rights, in which it took 60%, with Minerva of Belgium. European rights were purchased from them and used by Panhard et Levassor and Mercedes. Attracted by the possibilities of the "Silent Knight" engine, Daimler's chairman had contacted Knight in Chicago and Knight settled in England near Coventry in 1907. 
Daimler contracted Dr. Frederick Lanchester as their consultant for the purpose and a major re-design and refinement of Knight's design took place in great secrecy. Knight's design was made a practical proposition. When unveiled in September 1908, the new engine caused a sensation. "Suffice it to say that mushroom valves, springs and cams, and many small parts, are swept away bodily, that we have an almost perfectly spherical explosion chamber, and a cast-iron sleeve or tube as that portion of the combustion chamber in which the piston travels." Daimler dropped poppet-valve engines altogether and kept their silent sleeve-valve engines until the mid-1930s. Many vehicles were described as being fault-prone due to problems with lubrication of the cylinder and sleeve contact faces. Often, proper lubrication could not be guaranteed with the lubricants available at the time, especially with inadequate maintenance. This problem increased with engine speeds over 1600 rpm, at which point the sleeve-valve engine ceased to provide superior output. With a maximum attainable engine speed of about 1750 rpm, the long-term development potential for the engine was limited. North America Thomas Russell of the Canada Cycle & Motor Co. Ltd. had followed the Knight with interest and when he read about the RAC tests he went to England in 1909 to secure a license from Knight. Russell also came to an agreement with Daimler, by which the company would supply Daimler-Knight engines for two years. Russell went on to manufacture several models of Russell-Knight luxury cars in Canada. In August 1911, the engine was licensed by the US automobile makers Columbia, Stearns, and Stoddard-Dayton. A license was also purchased by the Atlas Engineering Company of Indianapolis to make engines, which appeared in 1914 as the Lyons-Knight. Columbia, Stoddard-Dayton, and Atlas went bankrupt shortly after and their licences were transferred to other companies. Edwards-Knight obtained one which they passed on to Willys, while Moline acquired another which they retained into the 1920s. In 1913 a Mercedes-Knight driven by Théodore Pilette was entered in the Indianapolis 500 where, despite having the smallest engine, it took fifth place averaging over the . Willys made improvements to the Knight engine which were patented and in 1916 announced their Willys-Knight 88-4. They went on to open a Canadian manufacturing plant at Toronto to build export models. By 1925 there were five operations in the US producing chassis with Knight engines so that Willys-Knight production was running at 250 cars per day. Willys announced in the same year that there were over 180,000 Willys-Knight engines in use worldwide. Willys also took over Stearns that year, forming a separate syndicate for the purpose (the companies were not merged). Sales of Willys-Knight cars declined towards the end of the 1920s. Thanks to the work of Harry Ricardo and Charles F. Kettering, simpler poppet valve engines had become very efficient, their first appearance being in the 1924 Chrysler, and the Knight engine's high manufacturing cost began to tell against it. While Willys built Knight models into the 1930s, development work had ceased. The Knight patents expired in 1932. Although a 1933 Willys-Knight Streamline Six was announced in June of that year, it is doubtful whether production continued into 1933. These were the last sleeve-valve automobiles manufactured in the US. 
Europe The Knight engine, while it originated in the USA, was developed to fruition in England and gained an earlier start in Europe, where it also lasted longer. Mercedes built their 4-litre Knight 16/50 until 1924, while the Simson Supra Knight of 1925-26 was probably the last German Knight-engined car. In France, besides Peugeot and Mors, two brands of luxury automobiles used the Knight engine as standard equipment between 1923 and 1940: Avions Voisin and Panhard et Levassor. Voisin also built an air-cooled radial engine using the Knight principle in 1935, which was their last use of Knight technology. The Panhard et Levassor Dynamic, produced until the summer of 1940, was the last Knight-engined passenger car to be built in series. Some Knight engine powered automobiles Major brands Stearns-Knight (1911-1929) Willys-Knight (1915-1933) European brands Daimler (1909-1932) Mercedes (1911-1924) Minerva Mors Panhard et Levassor Peugeot Voisin (1919-1938) Others Brewster Columbia (1912-1913) Falcon-Knight (1927-1929) Lyons-Knight (1913-1915) Moline-Knight (1914-1919) R&V Knight (1920-1924) Silent-Knight (1905-1907) Stoddard-Dayton Yellow Cab/Truck Co. (1923-1927) See also Charles Knight at Sleeve valve engines Notes References External links (video) Cutaway working model at the 2008 Midwest Old Threshers Reunion in Mt. Pleasant Iowa. Internal combustion piston engines Sleeve valve engines Defunct automotive companies of the United States
Knight engine
[ "Technology" ]
2,394
[ "Sleeve valve engines", "Engines" ]
4,069,270
https://en.wikipedia.org/wiki/Difference%20polynomials
In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases. Definition The general difference polynomial sequence is given by where is the binomial coefficient. For , the generated polynomials are the Newton polynomials The case of generates Selberg's polynomials, and the case of generates Stirling's interpolation polynomials. Moving differences Given an analytic function , define the moving difference of f as where is the forward difference operator. Then, provided that f obeys certain summability conditions, it may be represented in terms of these polynomials as The conditions for summability (that is, convergence) of this sequence are a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck. Generating function The generating function for the general difference polynomials is given by This generating function can be brought into the form of the generalized Appell representation by setting , , and . See also Carlson's theorem Bernoulli polynomials of the second kind References Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. Polynomials Finite differences Factorial and binomial topics
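The displayed formulas in the text above were lost in extraction and are not reconstructed here. As orientation, the simplest member of the family, the Newton polynomials mentioned above, are the binomial-coefficient polynomials, and the expansion of a suitable analytic function in them is the classical Newton forward-difference series. The following is that standard special case, stated as an illustration rather than as the article's exact general formula:

p_k(z) = \binom{z}{k} = \frac{z(z-1)\cdots(z-k+1)}{k!},
\qquad
f(z) = \sum_{k=0}^{\infty} \binom{z}{k}\,\Delta^{k} f(0),

where \Delta f(z) = f(z+1) - f(z) is the forward difference operator. The growth restriction quoted above (the function being of less than exponential type, in the sense made precise by results such as Carlson's theorem) is what guarantees convergence of series of this kind.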
Difference polynomials
[ "Mathematics" ]
321
[ "Mathematical analysis", "Factorial and binomial topics", "Polynomials", "Finite differences", "Combinatorics", "Algebra" ]
4,069,279
https://en.wikipedia.org/wiki/IEC%2061511
IEC standard 61511 is a technical standard which sets out practices in the engineering of systems that ensure the safety of an industrial process through the use of instrumentation. Such systems are referred to as Safety Instrumented Systems. The title of the standard is "Functional safety - Safety instrumented systems for the process industry sector". Scope The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power. The process sector standard does not cover nuclear power facilities or nuclear reactors. IEC 61511 covers the application of electrical, electronic and programmable electronic equipment. While IEC 61511 does apply to equipment using pneumatic or hydraulic systems to manipulate final elements, the standard does not cover the design and implementation of pneumatic or hydraulic logic solvers. This standard defines the functional safety requirements established by IEC 61508 in process industry sector terminology. IEC 61511 focuses attention on one type of instrumented safety system used within the process sector, the Safety Instrumented System (SIS). History In 1998 the IEC, which stands for the International Electrotechnical Commission, published a document, IEC 61508, entitled: “Functional safety of electrical/electronic/programmable electronic safety-related systems”. This document sets the standards for safety-related system design of hardware and software. IEC 61508 is a generic functional safety standard, providing the framework and core requirements for sector-specific standards. Three sector-specific standards have been released using the IEC 61508 framework: IEC 61511 (process), IEC 61513 (nuclear) and IEC 62061 (manufacturing/machinery). IEC 61511 provides good engineering practices for the application of safety instrumented systems in the process sector. In the United States ANSI/ISA 84.00.01-2004 was issued in September 2004. It primarily mirrors IEC 61511 in content with the exception that it contains a grandfathering clause: For existing safety instrumented systems (SIS) designed and constructed in accordance with codes, standards, or practices prior to the issuance of this standard (e.g. ANSI/ISA 84.01-1996), the owner/operator shall determine and document that the equipment is designed, maintained, inspected, tested, and operated in a safe manner. The European standards body, CENELEC, has adopted the standard as EN 61511. This means that in each of the member states of the European Union, the standard is published as a national standard. For example, in Great Britain, it is published by the national standards body, BSI, as BS EN 61511. The content of these national publications is identical to that of IEC 61511. Note, however, that 61511 is not harmonized under any directive of the European Commission. The Standard IEC 61511 covers the design and management requirements for SISs throughout the entire safety life cycle. Its scope includes: initial concept, design, implementation, operation, and maintenance through to decommissioning. It starts in the earliest phase of a project and continues through startup. It contains sections that cover modifications that come along later, along with maintenance activities and the eventual decommissioning activities. 
The standard consists of three parts: (1) framework, definitions, system, hardware and software requirements; (2) guidelines in the application of IEC 61511-1; and (3) guidance for the determination of the required safety integrity levels. ISA 84.01/IEC 61511 requires a management system for identified SIS. An SIS is composed of a separate and independent combination of sensors, logic solvers, final elements, and support systems that are designed and managed to achieve a specified safety integrity level (SIL). An SIS may implement one or more safety instrumented functions (SIFs), which are designed and implemented to address a specific process hazard or hazardous event. The SIS management system should define how an owner/operator intends to assess, design, engineer, verify, install, commission, validate, operate, maintain, and continuously improve their SIS. The essential roles of the various personnel assigned responsibility for the SIS should be defined and procedures developed, as necessary, to support the consistent execution of their responsibilities. ISA 84.01/IEC 61511 uses an order-of-magnitude metric, the SIL, to establish the necessary performance. A hazard and risk analysis is used to identify the required safety functions and risk reduction for specified hazardous events. Safety functions allocated to the SIS are safety instrumented functions; the allocated risk reduction is related to the SIL. The design and operating basis is developed to ensure that the SIS meets the required SIL. Field data are collected through operational and mechanical integrity program activities to assess actual SIS performance. When the required performance is not met, action should be taken to close the gap, ensuring safe and reliable operation. IEC 61511 references IEC 61508 (the master standard) for many items such as manufacturers of hardware and instruments and so IEC 61511 cannot be fully implemented without reference to IEC 61508. IEC 61511 is the process industry implementation of IEC 61508. IEC 61511 has been updated with Edition 2.0. References Electrical standards Safety Industrial processes
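To make the order-of-magnitude nature of the SIL metric concrete, the sketch below maps a required risk reduction factor to a SIL using the average-probability-of-failure-on-demand bands commonly tabulated for low-demand mode in IEC 61508/61511 (SIL 1 covers a PFDavg of 0.1 down to 0.01, through SIL 4 at 0.0001 down to 0.00001). It is an illustration only: the function name and the choice to return 0 below SIL 1 are made here and are not part of either standard.

# Illustrative only: map a required risk reduction factor (RRF) to a SIL
# using the low-demand-mode PFDavg bands tabulated in IEC 61508/61511.
def sil_for_risk_reduction(rrf: float) -> int:
    """Return the SIL whose PFDavg band (PFDavg = 1/RRF) covers the
    required risk reduction; 0 means less than SIL 1 is needed."""
    if rrf <= 10:
        return 0  # risk reduction this small does not require a SIL-rated SIF
    pfd_avg = 1.0 / rrf
    bands = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}
    for sil, (low, high) in bands.items():
        if low <= pfd_avg < high:
            return sil
    raise ValueError("required risk reduction exceeds SIL 4; redesign the process")

print(sil_for_risk_reduction(500))  # PFDavg = 0.002, which falls in the SIL 2 band

In practice the required risk reduction would come out of the hazard and risk analysis described above, and demonstrating that the installed SIF actually achieves the allocated SIL is a separate verification activity.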
IEC 61511
[ "Physics" ]
1,058
[ "Physical systems", "Electrical standards", "Electrical systems" ]
4,069,515
https://en.wikipedia.org/wiki/Royal%20Doulton
Royal Doulton is an English ceramic and home accessories manufacturer that was founded in 1815. Operating originally in Vauxhall, London, and later moving to Lambeth, in 1882 it opened a factory in Burslem, Stoke-on-Trent, in the centre of English pottery. From the start, the backbone of the business was a wide range of utilitarian wares, mostly stonewares, including storage jars, tankards and the like, and later extending to drain pipes, lavatories, water filters, electrical porcelain and other technical ceramics. From 1853 to 1901, its wares were marked Doulton & Co., then from 1901, when a royal warrant was given, Royal Doulton. It always made some more decorative wares, initially still mostly stoneware, and from the 1860s, the firm made considerable efforts to get a reputation for design, in which it was largely successful, as one of the first British makers of art pottery. Initially this was done through artistic stonewares made in Lambeth, but in 1882 the firm bought a Burslem factory, which was mainly intended for making bone china tablewares and decorative items. It was a latecomer in this market compared to firms such as Royal Crown Derby, Royal Worcester, Wedgwood, Spode and Mintons, but made a place for itself in the later 19th century. Today Royal Doulton mainly produces tableware and figurines, but also cookware, glassware, and other home accessories such as linens, curtains and lighting. Three of its brands were Royal Doulton, Royal Albert, and (after a post-WWII merger) Mintons. These brands are now owned by WWRD Holdings Limited (Waterford Crystal, Wedgwood, Royal Doulton), based in Barlaston near Stoke-on-Trent. On 2 July 2015, the acquisition of WWRD by the Finnish company Fiskars Corporation was completed. History – 19th century The Royal Doulton company began as a partnership between John Doulton, Martha Jones, and John Watts, as Doulton bought (with £100) an interest in an existing factory at Vauxhall Walk, Lambeth, London, where Watts was the foreman. They traded as Jones, Watts & Doulton from 1815 until Martha Jones left the partnership in 1820, when the trade name was changed to Doulton & Watts. The business specialised in making salt glaze stoneware articles, including utilitarian or decorative bottles, jugs and jars, much of it intended for inns and pubs. In 1826 they took over a larger existing pottery on Lambeth High Street. The company took the name Doulton & Co. in 1854 after the retirement of John Watts in 1853, and a merger with Henry Doulton and Co. (see below), although the trading name of Doulton & Watts continued to be used for decades. For some of the 19th century there were three different businesses, run by the sons of John Doulton, and perhaps with cross-ownership, which later came back together by the end of the century. By 1897 the total employees exceeded 4,000. Pipes and other utilitarian wares Manufacturing of circular ceramic sewage pipes began in 1846, and was highly successful; Henry Doulton set up his own company specializing in this, Henry Doulton and Co., the first business to make these. This merged with the main business in 1854. His brother John Junior also later set up his own pipe-making business. Previously sewers were just channels made of brick, which began to leak as they aged. The 1846–1860 cholera pandemic, and the tracing by Dr John Snow of the 1854 Broad Street cholera outbreak in London to a water supply contaminated by sewage led to a huge programme of improving sewage disposal, and other forms of drainage using pipes. 
These and an expanding range of builder's and sanitary wares remained a bedrock of Doulton into the 20th century. Metal plumbing items such as taps and cast iron baths were added to the range later. Kitchen stonewares such as storage jars and mixing bowls, and laboratory and manufacturing ceramics, were other long-standing specialities. Further facilities were set up for making these in Paisley in Scotland, Smethwick, St Helens near Liverpool, and Rowley Regis in England, and eventually Paris. Decorative wares By the 1860s Henry Doulton became interested in more artistic wares than the utilitarian ceramics which had grown the business enormously. British stoneware had languished somewhat in artistic terms, although Wedgwood and others continued to produce jasperware and some other stonewares in a very refined style, competing with porcelain. The Doulton wares went further back to earlier salt-glazed styles, with a varied glaze finish. This "gave stoneware an entirely new impetus, realizing the potential of the material". As the company became interested in diversifying from its utilitarian wares into more decorative objects, it developed a number of earthenware and stoneware bodies. The so-called "Lambeth faience" (from 1872) was "a somewhat heavily potted creamware much used in decorative plaques and vases", often with underglaze painting. Other bodies were called "Impasto" (1879); "Silicon" (1880), "a vitrified unglazed stoneware decorated with coloured clays"; "Carrara" (1887), white earthenware, also used as architectural terracotta; "Marquetrie" (1887), "marbled clays in checker work", then glazed; "Chine" impressed with fabrics to texture the clay, these burnt away in the kiln. By 1871, Henry Doulton, John's son, launched a studio at the Lambeth pottery, and offered work to designers and artists from the nearby Lambeth School of Art. The first to be engaged was George Tinworth followed by artists such as the Barlow family (Florence, Hannah, and Arthur), Frank Butler, Mark Marshall, Eliza Simmance and John Eyre. John Bennett was in charge of the "Lambeth faience" department until he emigrated to America in 1876, where he had success with his own pottery. Doulton was rather unusual in that most of the Lambeth studio pieces were signed by the artist or artists, usually with initials or a monogram incised on the base. Many are also dated. Until 1882, "every piece of the company's art stoneware was a unique item" but after that some pieces were made in batches, as demand grew. There were initial technical difficulties in producing the "art" pieces; at first they were fired in the open kiln with other wares, but later saggars were used. They were not especially profitable, sometimes not profitable at all, but there were huge profits in other parts of the business. Like other manufacturers, Doulton took great trouble with the wares submitted to international exhibitions, where it was often a medal winner. The period 1870–1900 saw "the great years of Doulton's art stoneware", which remains popular with collectors. In 1882, Doulton purchased the small factory of Pinder, Bourne & Co, at Nile Street in Burslem, Staffordshire, which placed Doulton in the region known as The Potteries. Architectural ceramics Doulton also manufactured architectural terracotta (in fact usually stoneware), mainly at Lambeth, and would execute commissions for monumental sculpture in terracotta. 
Their late Victorian catalogues contained a wide range of architectural elements with, for example, tall Tudor-style chimney pots in many different designs. The Tudor originals of these were built up in shaped brick, but Doultons supplied them in a single piece. There were ranges of small Gothic arches, columns and capitals. When the Anglican St. Alban's Church was built in Copenhagen, Denmark, in 1887 with Alexandra, Princess of Wales as one of the driving forces, Doulton donated and manufactured an altarpiece, a pulpit and a font. They were executed in terracotta with glazed details to the design of Tinworth. The Hotel Russell in Russell Square (1900) has a large facade in buff terracotta, including life-size statues of "British queens" by Henry Charles Fehr, sculpted coats of arms and other large ornamental elements. This was somewhat old-fashioned for 1900, and the new taste for Art Nouveau favoured the glazed white "Carrara" material, which remained popular through to the Art Deco of the 1930s, often combined with bespoke decoration in bright colours, as at the Turkey Cafe in Leicester, also of 1900. William James Neatby was the Royal Doulton's chief designer from 1890 to 1901 and designed some of the finest Modern Style (British Art Nouveau style) architectural ceramics and sculptures. Everard's Printing Works is a leading surviving example of an exterior in Doulton's Carrara glazed architectural terra-cotta. One of the largest schemes they made is , now in Glasgow Green, given by Sir Henry Doulton for the International Exhibition of 1888. When the over life-size statue at the top was destroyed in a lightning strike in 1901, Doulton paid for a second hand-made statue to be produced. Sir Henry's mausoleum is another fine example of Doulton's exterior terracottas, as are the pedimental sculptures for the department store Harrods (1880s). By this time Doulton was popular for stoneware and ceramics, under the artistic direction of John Slater, who worked with figurines, vases, character jugs, and decorative pieces designed by the prolific Leslie Harradine. Lambeth continued to make studio pottery in small quantities per design, often in stoneware and typically ornamental forms like vases, while Burslem made larger quantities of more middle market bone china tablewares and figures. By 1904 over 1,200 people were employed at Burslem alone. The retirement and death of Sir Henry Doulton, both in 1897, led to the company going public at the start of 1899. 20th century In 1901 King Edward VII awarded the Burslem factory the Royal Warrant, allowing that part of the business to adopt new markings and a new name, Royal Doulton. The bathroom ceramics and other utilitarian wares initially continued to be branded Doulton and Co. The company added products during the first half of the 20th century, and the tableware and decorative wares tended to shift from stonewares to high-quality bone china. Figurines in fashionable styles became increasingly important, for example a series of young girls in bathing costumes, in a mild version of Art Deco. Figures continued to be important throughout the 20th century, but the peak of quality in modelling and painting is generally thought to have been between the world wars. The well-known artist Frank Brangwyn designed a pattern for a dinner service in 1930 (see gallery), which continued to be made for some time. He created the design, but specified that the factory painters actually decorating the pieces be allowed some freedom in interpreting his designs. 
In 1938, Doulton acquired the works of George Skey and Co. in Tamworth, Staffordshire, which had been producing drain pipes, chimney pots and chemical stoneware. Doulton modified the factory to produce a range of technical ceramics, including porcelain insulators, chemical porcelain, grinding media and ceramics for other applications. A high voltage laboratory for the testing of insulators was subsequently built. The headquarters building and factory of Royal Doulton were in Lambeth in London, on the south bank of the Thames. This Art Deco building was designed by T. P. Bennett. In 1939 Gilbert Bayes created ceramic relief friezes that showed the history of pottery through the ages. In 1963, a ceramic filter company, Aerox Ltd., of Stroud, Gloucester, was acquired and subsequently integrated with the water filter division of Doulton Industrial Porcelains. Following various mergers and acquisitions over the years this company still exists under the name Doulton, but is no longer connected to Royal Doulton. In 1969 Doulton bought Beswick Pottery, long a specialist in figurines, mostly of animals, including some Beatrix Potter characters. Their factory in Longton, Stoke-on-Trent was used to make the popular "Bunnykins" range of anthropomorphic rabbits, originally produced in 1936 to designs by the then managing director's daughter, Sister Barbara Bailey, who was a nun. In 1972 Doulton was taken over by Pearson and Son Ltd., which a year later restructured the Doulton group into five divisions: Royal Doulton Tableware; Doulton Glass Industries; Doulton Engineering Group; Doulton Sanitaryware and Doulton Australia. The whole English pottery industry was losing ground in the post-war period, and Doulton's purchases of other companies were not enough to stem the decline. The Lambeth factory closed in 1956 due to clean air regulations preventing urban production of salt glaze. Following closure, work was transferred to The Potteries. The factory building was demolished in 1978 and the friezes transferred to the Victoria & Albert Museum. The office building in Black Prince Road survives, complete with a frieze of potters and Sir Henry Doulton over the original main entrance, executed by Tinworth. In 1980 Pearson purchased Fairey Holdings, which historically had been well known for its aircraft. In the next few years some parts of Doulton were spun off, including the glass and sanitaryware divisions, Doulton Engineering (brought under the management of Fairey, with the insulator division merged with Allied Insulators in 1985). The Churchbank factory was closed in 2000. The Beswick factory in Longton closed and the Doulton factory in Baddeley Green closed in 2003. The Nile Street factory in Burslem closed on 30 September 2005, and was demolished in 2014. Corporate In 1971, S. Pearson & Son Ltd, a subsidiary of the Pearson industrial conglomerate, acquired Doulton & Co. Pearson & Son owned Allied English Potteries and merged operations into Doulton & Co. All brands from Allied English Potteries and Doulton & Co. Ltd. including Royal Doulton, Minton, Beswick, Dunn Bennett, Booths, Colclough, Royal Albert, Royal Crown Derby, Paragon, Ridgway, Queen Anne, Royal Adderley and Royal Adderley Floral were moved under the umbrella of Royal Doulton Tableware Ltd. Royal Doulton Tableware Ltd was a subsidiary of Doulton & Co. Ltd, itself a subsidiary of the Pearson Group. Doulton & Co. became Royal Doulton plc in 1993. Pearson spun off Royal Doulton in 1993. 
Waterford Wedgwood completed a takeover of Royal Doulton in 2005, acquiring all assets and brands. Parts of the business were progressively sold off. The sanitaryware division was bought by Stelrad. In 1983 David Edward Dunn Johnson bought the hotelware division of Royal Doulton, now renamed Steelite and, as of 2022, was still operating in Stoke-on-Trent. In 1995 Royal Doulton commissioned a new factory just outside Jakarta, Indonesia; this division is called PT Doulton. By 2009 the factory employed 1,500 persons producing bone china under both Wedgwood and Royal Doulton brands. Annual production was reported to be 5 to 7 million pieces. In order to reduce costs the majority of production of both brands has been transferred to Indonesia, with only a small number of high-end products continuing to be made in the UK. Royal Doulton Ltd., along with other Waterford Wedgwood companies, went into administration on 5 January 2009. Royal Doulton is now part of WWRD Holdings Limited. On 11 May 2015, Fiskars, a Finnish maker of home products, agreed to buy 100% of the holdings of WWRD. On 2 July 2015 the acquisition of WWRD by Fiskars Corporation was completed including the brands Waterford, Wedgwood, Royal Doulton, Royal Albert and Rogaška. The acquisition was approved by the US antitrust authorities. Cultural references In the comedy television series Keeping Up Appearances her Royal Doulton china "with the hand-painted periwinkles" was frequently mentioned with great pride by the main character, Hyacinth Bucket. A Royal Doulton bowl features prominently in the 2018 film Mary Poppins Returns, and is the basis for the song "The Royal Doulton Music Hall". In the James Bond 007 franchise films, Judi Dench's M character has a Royal Doulton's "Jack the Bulldog" figurine on her desk at MI6. Notable designers Hannah and Florence Barlow, two painter sisters Leslie Harradine Agnete Hoy Charles Noke Gallery See also List of Royal Doulton figurines List of Bunnykins figurines Notes and references References Battie, David, ed., Sotheby's Concise Encyclopedia of Porcelain, 1990, Conran Octopus, Furnival, W.J., Leadless decorative tiles, faience, and mosaic, 1904, W.J. Furnival, Stone, Staffordshire, reprint , 9781176325630, Google books Godden, Geoffrey, An Illustrated Encyclopaedia of British Pottery and Porcelain, 1992, Magna Books, "Grace's": "Doulton & Co.", Grace's Guide to British industrial history Hughes, G Bernard, The Country Life Pocket Book of China, 1965, Country Life Ltd Wood, Frank L., The World of British Stoneware: Its History, Manufacture and Wares, 2014, Troubador Publishing Ltd, , 9781783063673 External links Official website Examples in the collection of the Museum of New Zealand Te Papa Tongarewa Ceramics manufacturers of England English brands English pottery Figurine manufacturers Kitchenware brands Staffordshire pottery Waterford Wedgwood Companies based in Stoke-on-Trent Manufacturing companies established in 1815 1815 establishments in England Privately held companies of the United Kingdom British royal warrant holders Fiskars British porcelain Art pottery Manufacturer of architectural terracotta
Royal Doulton
[ "Engineering" ]
3,693
[ "Manufacturer of architectural terracotta", "Architecture" ]
4,069,580
https://en.wikipedia.org/wiki/Stirling%20polynomials
In mathematics, the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis, which are closely related to the Stirling numbers, the Bernoulli numbers, and the generalized Bernoulli polynomials. There are multiple variants of the Stirling polynomial sequence considered below most notably including the Sheffer sequence form of the sequence, , defined characteristically through the special form of its exponential generating function, and the Stirling (convolution) polynomials, , which also satisfy a characteristic ordinary generating function and that are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex-valued inputs. We consider the "convolution polynomial" variant of this sequence and its properties second in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references. Definition and examples For nonnegative integers k, the Stirling polynomials, Sk(x), are a Sheffer sequence for defined by the exponential generating function The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials) each with exponential generating function given by the relation . The first 10 Stirling polynomials are given in the following table: {| class="wikitable" !k !! Sk(x) |- | 0 || |- | 1 || |- | 2 || |- | 3 || |- | 4 || |- | 5 || |- | 6 || |- | 7 || |- | 8 || |- | 9 || |} Yet another variant of the Stirling polynomials is considered in (see also the subsection on Stirling convolution polynomials below). In particular, the article by I. Gessel and R. P. Stanley defines the modified Stirling polynomial sequences, and where are the unsigned Stirling numbers of the first kind, in terms of the two Stirling number triangles for non-negative integers . For fixed , both and are polynomials of the input each of degree and with leading coefficient given by the double factorial term . Properties Below denote the Bernoulli polynomials and the Bernoulli numbers under the convention denotes a Stirling number of the first kind; and denotes Stirling numbers of the second kind. Special values: If and then: If and then: and: The sequence is of binomial type, since Moreover, this basic recursion holds: Explicit representations involving Stirling numbers can be deduced with Lagrange's interpolation formula: Here, are Laguerre polynomials. The following relations hold as well: By differentiating the generating function it readily follows that Stirling convolution polynomials Definition and examples Another variant of the Stirling polynomial sequence corresponds to a special case of the convolution polynomials studied by Knuth's article and in the Concrete Mathematics reference. We first define these polynomials through the Stirling numbers of the first kind as It follows that these polynomials satisfy the next recurrence relation given by These Stirling "convolution" polynomials may be used to define the Stirling numbers, and , for integers and arbitrary complex values of . The next table provides several special cases of these Stirling polynomials for the first few . {| class="wikitable" style="text-align: left;" ! n !! 
σn(x) |- | 0 || |- | 1 || |- | 2 || |- | 3 || |- | 4 || |- | 5 || |- | 6 || |- | 7 || |- | 8 || |- | 9 || |- | 10 || |- |} Generating functions This variant of the Stirling polynomial sequence has particularly nice ordinary generating functions of the following forms: More generally, if is a power series that satisfies , we have that We also have the related series identity and the Stirling (Sheffer) polynomial related generating functions given by Properties and relations For integers and , these polynomials satisfy the two Stirling convolution formulas given by and When , we also have that the polynomials, , are defined through their relations to the Stirling numbers and their relations to the Bernoulli numbers given by See also Bernoulli polynomials Bernoulli polynomials of the second kind Sheffer and Appell sequences Difference polynomials Special polynomial generating functions Gregory coefficients References External links Polynomials
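The polynomial entries in the tables above and the displayed generating functions did not survive extraction and are not reconstructed here. As orientation only, under one standard convention for the Sheffer-type sequence S_k(x) described first, the defining exponential generating function and the first few polynomials are commonly written as

\left(\frac{t}{1-e^{-t}}\right)^{x+1} = \sum_{k=0}^{\infty} S_k(x)\,\frac{t^k}{k!},
\qquad
S_0(x) = 1,\quad S_1(x) = \tfrac{1}{2}(x+1),\quad S_2(x) = \tfrac{1}{12}\left(3x^2 + 5x + 2\right),

which is stated here as a reference form under that assumed convention, not as a reconstruction of the article's lost table entries.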
Stirling polynomials
[ "Mathematics" ]
896
[ "Polynomials", "Algebra" ]
4,069,702
https://en.wikipedia.org/wiki/Tuyere
A tuyere or tuyère (; ) is a tube, nozzle or pipe allowing the blowing of air into a furnace or hearth. Air or oxygen is injected into a hearth under pressure from bellows or a blowing engine or other devices. This causes the fire to become hotter in front of the blast than it would otherwise have been, enabling metals to be smelted or melted or made hot enough to be worked in a forge, though these are blown only with air. This applies to any process where a blast is delivered under pressure to make a fire hotter. Archeologists have discovered tuyeres dating from the Iron Age; one example dates from between 770 BCE and 515 BCE. Following the introduction of hot blast, tuyeres are often water-cooled. Around the year 1500 new ironmaking techniques, including the blast furnace and finery forge, were introduced into England from France, along with the French technical terms relating to the new technology. "Tuyere" () is one of these French words, sometimes Anglicised as tue-iron or tue iron. Examples A bloomery normally had one tuyere. Early blast furnaces also had one tuyere, but were fed from bellows perhaps 12 feet (3.7m) long operated by a waterwheel. During the Industrial Revolution, the blast began to be provided using steam engines, initially Watt engines working blowing cylinders. Improvements in foundry practice enabled gas-tight cast iron pipes to be produced, enabling one engine to deliver blast to several sides of a furnace, through multiple tuyeres. A finery forge contained finery and chafery hearths, usually one of the latter and one to three of the former. Each hearth was equipped with its own set of bellows, blowing into it through a tuyere. The blacksmith's hearth at their forge has a tuyere, often blown by foot-operated bellows. Tuyeres were also used in smelting lead and copper in smeltmills. As of 2009 the world's largest blast furnace in Caofeidan, China operated by Shougang Jing Tang United Iron and Steel Ltd had 42 tuyeres, through which the hot blast is injected in the furnace. They are usually made from copper and cooled with a water jacket to withstand the extreme temperatures. References Steelmaking Industrial furnaces
Tuyere
[ "Chemistry" ]
485
[ "Metallurgical processes", "Steelmaking", "Industrial furnaces" ]
4,070,102
https://en.wikipedia.org/wiki/Source%20code%20escrow
Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement. Necessity of escrow As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, such as because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets. As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions. Escrow agreements Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties: one or several licensors, one or several licensees, the escrow agent. The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met. Source code escrow agreements provide for the following: They specify the subject and scope of the escrow. This is generally the source code of a specific software, accompanied by everything that the licensee requires to independently maintain the software, such as documentation, software tools or specialized hardware. They oblige the licensor to put updated versions of the software in escrow in specific intervals. They specify the conditions that must be met for the agent to release the source code to the licensee. Typical conditions include the bankruptcy of the licensor, the cancellation of a software development project or the express unwillingness of the licensor to fulfil his contractual maintenance obligations. Because it is often important to the licensee that the code be released as soon as possible once the conditions are met, the conditions tend to be worded as plainly and unambiguously as possible. They circumscribe the rights obtained by the licensee with respect to the source code after the release of the software. These rights are generally limited and may include the right to modify the source code for the purpose of fixing errors, or the right to continue independent development of the software. They specify the services provided by the escrow agent beyond a simple custody of the source code. Specialised agents may, for instance, verify that the source code storage media is readable, or even build the software based on the source code, verifying that its features match the binary version used by the licensee. They may provide that non-compete clauses in the licence agreement, such as any that prohibit the licensee from employing the licensor's employees, are void in the event of the release conditions being met, enabling the licensee to acquire the know-how required for the maintenance of the software. 
They also provide for the fees due to the escrow agent for his services. Whether a source code escrow agreement is entered into at all, and who bears its costs, is subject to agreement between the licensor and the licensee. Software license agreements often provide for a right of the licensee to demand that the source code be put into escrow, or to join an existing escrow agreement. Bankruptcy laws may interfere with the execution of a source code escrow agreement, if the bankrupt licensor's creditors are legally entitled to seize the licensor's assets – including the code in escrow – upon bankruptcy, preventing the release of the code to the licensee. Third party escrow agents Museums, archives and other GLAM organizations have begun to act as independent escrow agents due to growing digital obsolescence. Notable examples are the Internet Archive in 2007, the Library of Congress in 2006, ICHEG, Computer History Museum, or the MOMA. There are also some cases where software communities act as escrow agent, for instance for Wing Commander video game series or Ultima 9 of the Ultima series. Software open-sourcing to the public The escrow agreements described above are most applicable to custom-developed software which is not available to the general public. In some cases, source code for commercial off-the-shelf software may be deposited into escrow to be released as free and open-source software under an open source license when the original developer ceases development and/or when certain fundraising conditions are met (the threshold pledge system). For instance, the Blender graphics suite was released in this way following the bankruptcy of Not a Number Technologies; the widely used Qt toolkit is covered by a source code escrow agreement secured by the "KDE Free Qt Foundation". There are many cases of end-of-life open-sourcing which allow the community continued self-support, see List of commercial video games with later released source code. See also Source code repository for open source Orphan works References Further reading Computerworld (7/20/92, page 99): Don't Rush Into Source Code Escrow A Guide to IT Contracting: Checklists, Tools, and Techniques (, 2013) - Page 262 Software escrow agreement samples Escrow Computer law
Source code escrow
[ "Technology" ]
1,168
[ "Computer law", "Computing and society" ]
4,070,216
https://en.wikipedia.org/wiki/Abox
In computer science, the terms TBox and ABox are used to describe two different types of statements in knowledge bases. TBox statements are the "terminology component", and describe a domain of interest by defining classes and properties as a domain vocabulary. ABox statements are the "assertion component": facts associated with the TBox's conceptual model or ontologies. Together ABox and TBox statements make up a knowledge base or a knowledge graph. ABox statements must be TBox-compliant: they are assertions that use the vocabulary defined by the TBox. TBox statements are sometimes associated with object-oriented classes and ABox statements associated with instances of those classes. Examples of ABox and TBox statements ABox statements typically deal with concrete entities. They specify what category an entity belongs to, or what relation one entity has to another entity. Item A is-an-instance-of Category C Item A has-this-relation-to Item B Examples: Niger is-a country. Chad is-a country. Niger is-next-to Chad. Agadez is-a city. Agadez is-located-in Niger. TBox statements typically describe general rules or definitions of domain categories and implied relations, such as: An entity X can be a country or a city. So "Dagamanet is-a neighbourhood" is not a fact you can specify, though it is a fact in real life. A is-next-to B if B is-next-to A. So Niger is-next-to Chad implies Chad is-next-to Niger. X is a place if X is-a city or X is-a country. So Niger is-a country implies Niger is-a place. Place A contains place B if place B is-located-in A. So Agadez is-located-in Niger implies Niger contains Agadez. TBox statements tend to be more permanent within a knowledge base and are used and stored as a schema or a data model. In contrast, ABox statements are much more dynamic in nature and tend to be stored as instance data within transactional systems within databases. With newer NoSQL databases, and especially with RDF databases (see Triplestore), the storage distinction may no longer apply. Data and models can be stored using the same approach. However, models continue to be more permanent, have a different lifecycle and are typically stored as separate graphs within such databases. See also Metadata Web Ontology Language References Ontology (information science)
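A minimal sketch of the TBox/ABox split as RDF triples, using Python's rdflib library, may make the distinction concrete. The http://example.org/ namespace and the Country, City, Place, isLocatedIn and isNextTo terms are invented for this example; only RDF.type and RDFS.subClassOf come from the standard RDF and RDFS vocabularies.

# Sketch: TBox (class definitions) vs. ABox (facts about individuals) as RDF triples.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")   # illustrative namespace
g = Graph()
g.bind("ex", EX)

# TBox: the terminology, i.e. classes and the relations implied between them.
g.add((EX.Country, RDF.type, RDFS.Class))
g.add((EX.City, RDF.type, RDFS.Class))
g.add((EX.Country, RDFS.subClassOf, EX.Place))   # every country is a place
g.add((EX.City, RDFS.subClassOf, EX.Place))      # every city is a place

# ABox: assertions about concrete entities, using the TBox vocabulary.
g.add((EX.Niger, RDF.type, EX.Country))
g.add((EX.Chad, RDF.type, EX.Country))
g.add((EX.Agadez, RDF.type, EX.City))
g.add((EX.Agadez, EX.isLocatedIn, EX.Niger))
g.add((EX.Niger, EX.isNextTo, EX.Chad))

print(g.serialize(format="turtle"))

Note that rdflib by itself only stores and serializes the triples; deriving "Niger is-a place" from the subclass axiom requires running an RDFS reasoner over the graph.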
Abox
[ "Technology" ]
518
[ "Computing stubs", "Computer science", "Computer science stubs" ]
4,070,299
https://en.wikipedia.org/wiki/Lunar%20distance%20%28navigation%29
In celestial navigation, lunar distance, also called a lunar, is the angular distance between the Moon and another celestial body. The lunar distances method uses this angle and a nautical almanac to calculate Greenwich time if so desired, or by extension any other time. That calculated time can be used in solving a spherical triangle. The theory was first published by Johannes Werner in 1524, before the necessary almanacs had been published. A fuller method was published in 1763 and used until about 1850 when it was superseded by the marine chronometer. A similar method uses the positions of the Galilean moons of Jupiter. Purpose In celestial navigation, knowledge of the time at Greenwich (or another known place) and the measured positions of one or more celestial objects allows the navigator to calculate latitude and longitude. Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. After the method was first published in 1763 by British Astronomer Royal Nevil Maskelyne, based on pioneering work by Tobias Mayer, for about a hundred years (until about 1850) mariners lacking a chronometer used the method of lunar distances to determine Greenwich time as a key step in determining longitude. Conversely, a mariner with a chronometer could check its accuracy using a lunar determination of Greenwich time. The method saw usage all the way up to the beginning of the 20th century on smaller vessels that could not afford a chronometer or had to rely on this technique for correction of the chronometer. Method Summary The method relies on the relatively quick movement of the moon across the background sky, completing a circuit of 360 degrees in 27.3 days (the sidereal month), or 13.2 degrees per day. In one hour it will move approximately half a degree, roughly its own angular diameter, with respect to the background stars and the Sun. Using a sextant, the navigator precisely measures the angle between the moon and another body. That could be the Sun or one of a selected group of bright stars lying close to the Moon's path, near the ecliptic. At that moment, anyone on the surface of the earth who can see the same two bodies will, after correcting for parallax, observe the same angle. The navigator then consults a prepared table of lunar distances and the times at which they will occur. By comparing the corrected lunar distance with the tabulated values, the navigator finds the Greenwich time for that observation. Knowing Greenwich time and local time, the navigator can work out longitude. Local time can be determined from a sextant observation of the altitude of the Sun or a star. Then the longitude (relative to Greenwich) is readily calculated from the difference between local time and Greenwich Time, at 15 degrees per hour of difference. In practice Having measured the lunar distance and the heights of the two bodies, the navigator can find Greenwich time in three steps: Preliminaries: Almanac tables predict lunar distances between the centre of the Moon and the other body (published between 1767 and 1906 in Britain). However, the observer cannot accurately find the centre of the Moon (or Sun, which was the most frequently used second object). Instead, lunar distances are always measured to the sharply lit, outer edge (the limb, not terminator) of the Moon (or of the Sun). The first correction to the lunar distance is the distance between the limb of the Moon and its center. 
Since the Moon's apparent size varies with its varying distance from the Earth, almanacs give the Moon's and Sun's semidiameter for each day. Additionally the observed altitudes are cleared of semidiameter. Clearing: The lunar distance is corrected for the effects of parallax and atmospheric refraction on the observation. The almanac gives lunar distances as they would appear if the observer were at the center of a transparent Earth. Because the Moon is so much closer to the Earth than the stars are, the position of the observer on the surface of the Earth shifts the relative position of the Moon by up to an entire degree. The clearing correction for parallax and refraction is a trigonometric function of the observed lunar distance and the altitudes of the two bodies. Navigators used collections of mathematical tables to work these calculations by any of dozens of distinct clearing methods. For practical applications today the tables by Bruce Stark may be used for clearing the lunar distance. They are constructed such that only additions and subtractions of tabulated numbers are required instead of trigonometric evaluations. Finding the time: The navigator, having cleared the lunar distance, now consults a prepared table of lunar distances and the times at which they will occur in order to determine the Greenwich time of the observation. Predicting the position of the moon years in advance requires solving the three-body problem, since the earth, moon and sun were all involved. Euler developed the numerical method they used, called Euler's method, and received a grant from the Board of Longitude to carry out the computations. Having found the (absolute) Greenwich time, the navigator either compares it with the observed local apparent time (a separate observation) to find his longitude, or compares it with the Greenwich time on a chronometer (if available) if one wants to check the chronometer. Errors Almanac error By 1810, the errors in the almanac predictions had been reduced to about one-quarter of a minute of arc. By about 1860 (after lunar distance observations had mostly faded into history), the almanac errors were finally reduced to less than the error margin of a sextant in ideal conditions (one-tenth of a minute of arc). Lunar distance observation Later sextants (after ) could indicate angle to 0.1 arc-minutes, after the use of the vernier was popularized by its description in English in the book Navigatio Britannica published in 1750 by John Barrow, the mathematician and historian. In practice at sea, actual errors were somewhat larger. If the sky is cloudy or the Moon is new (hidden close to the glare of the Sun), lunar distance observations could not be performed. Total error A lunar distance changes with time at a rate of roughly half a degree, or 30 arc-minutes, in an hour. The two sources of error, combined, typically amount to about one-half arc-minute in Lunar distance, equivalent to one minute in Greenwich time, which corresponds to an error of as much as one-quarter of a degree of longitude, or about at the equator. In literature Captain Joshua Slocum, in making the first solo circumnavigation of the Earth in 1895–1898, somewhat anachronistically used the lunar method along with dead reckoning in his navigation. He comments in Sailing Alone Around the World on a sight taken in the South Pacific. 
After correcting an error he found in his log tables, the result was surprisingly accurate: I found from the result of three observations, after long wrestling with lunar tables, that her longitude agreed within five miles of that by dead-reckoning. This was wonderful; both, however, might be in error, but somehow I felt confident that both were nearly true, and that in a few hours more I should see land; and so it happened, for then I made out the island of Nukahiva, the southernmost of the Marquesas group, clear-cut and lofty. The verified longitude when abreast was somewhere between the two reckonings; this was extraordinary. All navigators will tell you that from one day to another a ship may lose or gain more than five miles in her sailing-account, and again, in the matter of lunars, even expert lunarians are considered as doing clever work when they average within eight miles of the truth... The result of these observations naturally tickled my vanity, for I knew it was something to stand on a great ship’s deck and with two assistants take lunar observations approximately near the truth. As one of the poorest of American sailors, I was proud of the little achievement alone on the sloop, even by chance though it may have been... The work of the lunarian, though seldom practised in these days of chronometers, is beautifully edifying, and there is nothing in the realm of navigation that lifts one’s heart up more in adoration. In his 1777 book, "A Voyage around the World", naturalist Georg Forster described his impressions of navigation with captain James Cook on board the ship HMS Resolution in the South Pacific. Cook had two of the new chronometers on board, one made by Larcum Kendall the other by John Arnold, following the lead of the famous John Harrison clocks. On March 12, 1774, approaching Easter Island, Forster found praiseworthy the method of lunar distances as the best and most precise method to determine longitude, as compared to clocks which may fail due to mechanical problems. See also Royal Observatory, Greenwich Josef de Mendoza y Ríos John Harrison History of longitude Longitude prize Henry Raper Bowditch's American Practical Navigator Nathaniel Bowditch References New and complete epitome of practical navigation containing all necessary instruction for keeping a ship's reckoning at sea ... to which is added a new and correct set of tables - by J. W. Norie 1828 Andrewes, William J.H. (Ed.): The Quest for Longitude. Cambridge, Mass. 1996 Forbes, Eric G.: The Birth of Navigational Science. London 1974 Jullien, Vincent (Ed.): Le calcul des longitudes: un enjeu pour les mathématiques, l`astronomie, la mesure du temps et la navigation. Rennes 2002 Howse, Derek: Greenwich Time and the Longitude. London 1997 Howse, Derek: Nevil Maskelyne. The Seaman's Astronomer. Cambridge 1989 National Maritime Museum (Ed.): 4 Steps to Longitude. London 1962 External links About Lunars... by George Huxtable. (Free tutorial) Navigation Spreadsheets: Lunar distance Navigational Algorithms - free software for Lunars Longitude by Lunars online Time and Position by C-program LUNARS-V13 An Essay on Lunar Distance Method, by Richard Dunn Geodesy Lunar science Navigation Celestial navigation
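As a rough illustration of the arithmetic described above (not of the full clearing procedure, which also needs almanac data), the sketch below converts a local-time/Greenwich-time difference into longitude at 15 degrees per hour, and propagates a lunar-distance error through the Moon's motion of roughly 30 arc-minutes per hour against the stars. The function names are invented for the example.

# Illustrative arithmetic only; a real lunar-distance reduction also requires
# clearing the observed distance and interpolating in almanac tables.
def longitude_from_times(local_hours: float, greenwich_hours: float) -> float:
    """Longitude in degrees (positive east) from local apparent time and
    Greenwich time, at 15 degrees of longitude per hour of difference."""
    return 15.0 * (local_hours - greenwich_hours)

def longitude_error_deg(lunar_distance_error_arcmin: float) -> float:
    """Longitude error implied by a lunar-distance error, using the Moon's
    motion of about 30 arc-minutes per hour relative to the stars."""
    time_error_hours = lunar_distance_error_arcmin / 30.0
    return 15.0 * time_error_hours

print(longitude_from_times(14.0, 11.0))   # 45.0 degrees east of Greenwich
print(longitude_error_deg(0.5))           # 0.25 degrees, matching the figure quoted above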
Lunar distance (navigation)
[ "Astronomy", "Mathematics" ]
2,102
[ "Applied mathematics", "Celestial navigation", "Geodesy" ]
4,070,704
https://en.wikipedia.org/wiki/Floorcloth
A floorcloth, or floor-cloth, is a household furnishing used for warmth, decoration, or to protect expensive carpets. They were primarily produced and used from the early 18th to the early 20th century and were also referred to as oilcloth, wax cloths, and painted canvas. Some still use floorcloths as a customizable alternative to rugs, and some artists have elected to use floorcloths as a medium of expression. Most modern floorcloths are made of heavy, unstretched canvas with two or more coats of gesso. They are then painted and varnished to make them waterproof. History Floorcloths had their start in 18th century England, and may have evolved from painted wall tapestries from the 1500s. Textiles were too costly to be used on the floor at that time. From 1578 to 1694 a number of British patents were issued for treating cloth with an oil-type covering, but it is not known if these were for floor coverings. A British receipt from 1722 refers to "a floor oyled cloth," indicating that they were being used underfoot at that time. A London painter and stainer, Nathan Smith, was issued a patent in 1763 for waxed cloth specifically as a floor covering. His recipe for the liquid coating included resin, tar, Spanish brown, beeswax and linseed oil. He set up a factory in Knightsbridge, in London, where the waxed cloth was manufactured and painted, initially freehand or with stencils, but later with wallpaper printing blocks. When American colonists became independent from England, they also began to create their own floorcloths. The first three US presidents, George Washington, John Adams, and Thomas Jefferson all used floorcloths, and Jefferson had plain green ones in the White House. It is hard to place a standard value on floorcloths, as they varied so much in cost and quality. While some were made at home, commercially-produced floor cloths were to be found in shops: in Boston, Samuel Perkins & Sons advertised "painted floor cloths or canvass carpets" in 1816, when they could be purchased for anywhere from $1.37 to $2.25 per square yard. In addition, some itinerant painters traveling in rural areas would sell their services as floorcloth painters. When floorcloths became worn, they were often cut up and reused in less prominent places in the home, and might even be later cut up further for use in small spaces such as closets or pantries. Thus, old floorcloths are not often found in museums, and rarely are found in the possession of collectors. Uses Floorcloths served several purposes: they protected floors, decorated a room, and also helped to insulate a space. Floorcloths might be covered with a carpet during cold weather, or might themselves have straw or newspaper put underneath them to help to keep the cold out. Historical floorcloths varied in size. They might cover a smaller space as an area rug does today, they might be of a size to reach wall to wall, or they might be of a size to be placed under a dining table to protect a costly carpet. These small protective floorcloths were called "covers" in the 18th century and "druggets" in the 19th. Design Initially used by the wealthy, the designs and patterns mimicked a range of other substances, including parquet flooring, tile, and marble. As these useful furnishings found their way into middle-class homes, the variety of patterns grew. The painting of floorcloths might be done at home, by professional painters, or in a factory, and thus the quality, intricacy, and value of the floorcloths varied enormously. 
Freehand painting of the cloths gave way to printed and stenciled patterns, and the stenciled floorcloths might be very intricate. One floorcloth at the Melrose Plantation in Natchez, Mississippi mimicked an intensively patterned Brussels carpet. Waning use of floorcloths By the end of the 19th century, the single term still in use to refer to floorcloths was oil cloth. New materials and processes began to provide some competition for oil cloths, although they did continue to be produced through the early 20th century. A patent was issued in 1844 for kamptulicon, which was well regarded in Great Britain, but did not see much use in the United States. Interest in kamptulicon encouraged more experimentation. One result was the issuance of a patent to Frederick Walton in 1863 for linoleum. Both oil cloth and linoleum were being produced in the same factories, with linoleum more aggressively marketed. In the past few decades, the desire to decorate homes in a more personal way has revived the popularity of floorcloths. Unique designs are made in a variety of styles and colors, using many techniques. This gives today's floorcloths the ability to be created for any style of interior. References External links Linens Rugs and carpets Floors
Floorcloth
[ "Engineering" ]
1,031
[ "Structural engineering", "Floors" ]
4,070,825
https://en.wikipedia.org/wiki/Confluency
In cell culture biology, confluence refers to the percentage of the surface of a culture dish that is covered by adherent cells. For example, 50 percent confluence means roughly half of the surface is covered, while 100 percent confluence means the surface is completely covered by the cells, and no more room is left for the cells to grow as a monolayer. The cell number refers simply to the number of cells in a given region. Impact on research Many cell lines exhibit differences in growth rate or gene expression depending on the degree of confluence. Cells are typically passaged before becoming fully confluent in order to maintain their proliferation phenotype. Some cell types are not limited by contact inhibition, such as immortalized cells, and may continue to divide and form layers on top of the parent cells. To achieve optimal and consistent results, experiments are usually performed using cells at a particular confluence, depending on the cell type. Extracellular export of cell-free material is also dependent on the cell confluence. Estimation Rule of thumb Comparing the amount of space covered by cells with unoccupied space using the naked eye can provide a rough estimate of confluency. Hemocytometer A hemocytometer can be used to count cells, giving the cell number. References Cell culture
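As a small worked example of the hemocytometer counting mentioned above (a minimal sketch: the 10,000-per-mL factor assumes a standard Neubauer chamber whose large corner squares each hold 0.1 µL, and the counts and dilution factor are made up):

```python
def cells_per_ml(square_counts, dilution_factor=1):
    """Estimate cell concentration from hemocytometer counts.

    square_counts: cells counted in each large (1 mm x 1 mm) corner square.
    Each large square of a standard Neubauer chamber holds 0.1 uL, so the
    average count is scaled by 1e4 (and by any dilution) to give cells/mL.
    """
    average = sum(square_counts) / len(square_counts)
    return average * dilution_factor * 1e4

# Four corner squares counted on a sample diluted 1:2 before loading.
print(cells_per_ml([52, 48, 55, 50], dilution_factor=2))  # 1.025e6 cells/mL
```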
Confluency
[ "Biology" ]
260
[ "Model organisms", "Cell culture" ]
4,071,139
https://en.wikipedia.org/wiki/SN%202006X
SN 2006X was a Type Ia supernova about 65 million light-years away in Messier 100, a spiral galaxy in the constellation Coma Berenices. The supernova was independently discovered in early February 2006 by Shoji Suzuki of Japan and Marco Migliardi of Italy. SN 2006X is particularly significant because it is a Type Ia supernova. These supernovae are used for measuring distances, so observations of these supernovae in nearby galaxies are needed for calibration. SN 2006X is located in a well-studied galaxy, and it was discovered two weeks before its peak brightness, so it may be extraordinarily useful for understanding supernovae and for calibrating supernovae for distance measurements. It may even be possible to identify the progenitor of this supernova. References External links Light curves and spectra on the Open Supernova Catalog Supernova 2006X in M100 Brightness measures for SN 2006X NASA page with images of SN 2006X Large collection of SN 2006X images Messier 100 Supernovae Coma Berenices 20060204
SN 2006X
[ "Chemistry", "Astronomy" ]
226
[ "Supernovae", "Astronomical events", "Constellations", "Explosions", "Coma Berenices" ]
4,071,245
https://en.wikipedia.org/wiki/Liesegang%20rings
Liesegang rings are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection. Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices, and "Saturn rings" in a test tube. Despite continuous investigation since the rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear. History The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge. He observed them in the course of experiments on the precipitation of reagents in blotting paper. In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate. After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings. Silver nitrate–potassium dichromate reaction The reactions are most usually carried out in test tubes into which a gel is formed that contains a dilute solution of one of the reactants. If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured in a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube. After some hours, the continuous region of precipitation is followed by a clear region with no sensible precipitate, followed by a short region of precipitate further down the tube. This process continues down the tube forming several, up to perhaps a couple dozen, alternating regions of clear gel and precipitate rings. Some general observations Over the decades a huge number of precipitation reactions have been used to study the phenomenon, and it seems quite general. Chromates, metal hydroxides, carbonates, and sulfides, formed with lead, copper, silver, mercury and cobalt salts are sometimes favored by investigators, perhaps because of the pretty, colored precipitates formed. The gels used are usually gelatin, agar or silicic acid gel. The concentration ranges over which the rings form in a given gel for a precipitating system can usually be found for any system by a little systematic empirical experimentation in a few hours. Often the component in the gel should be substantially less concentrated (perhaps by an order of magnitude or more) than the one placed on top of the gel. The first feature usually noted is that the bands which form farther away from the liquid-gel interface are generally farther apart. Some investigators measure this distance and report, in some systems at least, a systematic formula for the distances at which the bands form. The most frequent observation is that the distance apart that the rings form is proportional to the distance from the liquid-gel interface. 
This is by no means universal, however, and sometimes they form at essentially random, irreproducible distances. Another feature often noted is that the bands themselves do not move with time, but rather form in place and stay there. For very many systems the precipitate that forms is not the fine coagulant or flocs seen on mixing the two solutions in the absence of the gel, but rather coarse, crystalline dispersions. Sometimes the crystals are well separated from one another, and only a few form in each band. The precipitate that forms a band is not always a binary insoluble compound, but may be even a pure metal. Water glass of density 1.06 made acidic by sufficient acetic acid to make it gel, with 0.05 N copper sulfate in it, covered by a 1 percent solution of hydroxylamine hydrochloride produces large tetrahedrons of metallic copper in the bands. It is not possible to make any general statement of the effect of the composition of the gel. A system that forms nicely for one set of components, might fail altogether and require a different set of conditions if the gel is switched, say, from agar to gelatin. The essential feature of the gel required is that thermal convection in the tube be prevented altogether. Most systems will form rings in the absence of the gelling system if the experiment is carried out in a capillary, where convection does not disturb their formation. In fact, the system does not have to even be liquid. A tube plugged with cotton with a little ammonium hydroxide at one end, and a solution of hydrochloric acid at the other will show rings of deposited ammonium chloride where the two gases meet, if the conditions are chosen correctly. Ring formation has also been observed in solid glasses containing a reducible species. For example, bands of silver have been generated by immersing silicate glass in molten AgNO3 for extended periods of time (Pask and Parmelee, 1943). Theories Several different theories have been proposed to explain the formation of Liesegang rings. The chemist Wilhelm Ostwald in 1897 proposed a theory based on the idea that a precipitate is not formed immediately upon the concentration of the ions exceeding a solubility product, but a region of supersaturation occurs first. When the limit of stability of the supersaturation is reached, the precipitate forms, and a clear region forms ahead of the diffusion front because the precipitate that is below the solubility limit diffuses into the precipitate. This was argued to be a critically flawed theory when it was shown that seeding the gel with a colloidal dispersion of the precipitate (which would arguably prevent any significant region of supersaturation) did not prevent the formation of the rings. Another theory focuses on the adsorption of one or the other of the precipitating ions onto the colloidal particles of the precipitate which forms. If the particles are small, the absorption is large, diffusion is "hindered" and this somehow results in the formation of the rings. Still another proposal, the "coagulation theory" states that the precipitate first forms as a fine colloidal dispersion, which then undergoes coagulation by an excess of the diffusing electrolyte and this somehow results in the formation of the rings. Some more recent theories invoke an auto-catalytic step in the reaction that results in the formation of the precipitate. This would seem to contradict the notion that auto-catalytic reactions are, actually, quite rare in nature. 
The solution of the diffusion equation with proper boundary conditions, and a set of good assumptions on supersaturation, adsorption, auto-catalysis, and coagulation alone, or in some combination, has not been done yet, it appears, at least in a way that makes a quantitative comparison with experiment possible. However, a theoretical approach for the Matalon-Packter law, predicting the position of the precipitate bands when the experiments are performed in a test tube, has been provided. A general theory based on Ostwald's 1897 theory has recently been proposed. It can account for several important features sometimes seen, such as revert and helical banding. References Liesegang, R. E., "Ueber einige Eigenschaften von Gallerten", Naturwissenschaftliche Wochenschrift, Vol. 11, Nr. 30, 353-362 (1896). J.A. Pask and C.W. Parmelee, "Study of Diffusion in Glass," Journal of the American Ceramic Society, Vol. 26, Nr. 8, 267-277 (1943). K. H. Stern, The Liesegang Phenomenon Chem. Rev. 54, 79-99 (1954). Ernest S. Hedges, Liesegang Rings and other Periodic Structures Chapman and Hall (1932). External links Liesegang rings Tout ce que la nature ne peut pas faire VI : Liesegang Rings A Thesis having a summary on reaction-diffusion processes and Liesegang banding (pp. 1-36) Chemical reactions Diffusion Petrology Physical chemistry Thermodynamics Articles containing video clips
Liesegang rings
[ "Physics", "Chemistry", "Mathematics" ]
1,822
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Diffusion", "Thermodynamics", "nan", "Physical chemistry", "Dynamical systems" ]
4,071,870
https://en.wikipedia.org/wiki/Thioacetamide
Thioacetamide is an organosulfur compound with the formula C2H5NS. This white crystalline solid is soluble in water and serves as a source of sulfide ions in the synthesis of organic and inorganic compounds. It is a prototypical thioamide. Research Thioacetamide is known to induce acute or chronic liver disease (fibrosis and cirrhosis) in the experimental animal model. Its administration in rat induces hepatic encephalopathy, metabolic acidosis, increased levels of transaminases, abnormal coagulation, and centrilobular necrosis, which are the main features of the clinical chronic liver disease so thioacetamide can precisely replicate the initiation and progression of human liver disease in an experimental animal model. Coordination chemistry Thioacetamide is widely used in classical qualitative inorganic analysis as an in situ source for sulfide ions. Thus, treatment of aqueous solutions of many metal cations to a solution of thioacetamide affords the corresponding metal sulfide: M2+ + CH3C(S)NH2 + H2O → MS + CH3C(O)NH2 + 2 H+ (M = Ni, Pb, Cd, Hg) Related precipitations occur for sources of soft trivalent cations (As3+, Sb3+, Bi3+) and monovalent cations (Ag+, Cu+). Preparation Thioacetamide is prepared by treating acetamide with phosphorus pentasulfide as shown in the following idealized reaction: CH3C(O)NH2 + 1/4 P4S10 → CH3C(S)NH2 + 1/4 P4S6O4 Structure The C2NH2S portion of the molecule is planar; the C-S, C-N, and C-C distances are 1.68, 1.31, and 1.50 Å, respectively. The short C-S and C-N distances indicate multiple bonding. Safety Thioacetamide is carcinogen class 2B. It is known to produce marked hepatotoxicity in exposed animals. Toxicity values are 301 mg/kg in rats (LD50, oral administration), 300 mg/kg in mice (LD50, intraperitoneal administration). This is evidenced by enzymatic changes, which include elevation in the levels of serum alanine transaminase, aspartate transaminase and aspartic acid. References IARC Group 2B carcinogens Thioamides Hepatotoxins
Thioacetamide
[ "Chemistry" ]
540
[ "Thioamides", "Functional groups" ]
4,071,997
https://en.wikipedia.org/wiki/Data%20architecture
Data architecture consists of models, policies, rules, and standards that govern which data is collected and how it is stored, arranged, integrated, and put to use in data systems and in organizations. Data is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture. Overview A data architecture aims to set data standards for all its data systems as a vision or a model of the eventual interactions between those data systems. Data integration, for example, should be dependent upon data architecture standards since data integration requires data interactions between two or more data systems. A data architecture, in part, describes the data structures used by a business and its computer applications software. Data architectures address data in storage, data in use, and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc. Essential to realizing the target state, data architecture describes how data is processed, stored, and used in an information system. It provides criteria for data processing operations to make it possible to design data flows and also control the flow of data in the system. The data architect is typically responsible for defining the target state, aligning during development and then following up to ensure enhancements are done in the spirit of the original blueprint. During the definition of the target state, the data architecture breaks a subject down to the atomic level and then builds it back up to the desired form. The data architect breaks the subject down by going through three traditional architectural stages: Conceptual - represents all business entities. Logical - represents the logic of how entities are related. Physical - the realization of the data mechanisms for a specific type of functionality. In a second, broader sense, corresponding to the "data" column of the Zachman Framework for enterprise architecture, data architecture includes a complete analysis of the relationships among an organization's functions, available technologies, and data types. Data architecture should be defined in the planning phase of the design of a new data processing and storage system. The major types and sources of data necessary to support an enterprise should be identified in a manner that is complete, consistent, and understandable. The primary requirement at this stage is to define all of the relevant data entities, not to specify computer hardware items. A data entity is any real or abstract thing about which an organization or individual wishes to store data. Physical data architecture Physical data architecture of an information system is part of a technology plan. The technology plan is focused on the actual tangible elements to be used in the implementation of the data architecture design. Physical data architecture encompasses database architecture. Database architecture is a schema of the actual database technology that would support the designed data architecture. Elements of data architecture Certain elements must be defined during the design phase of the data architecture schema. For example, an administrative structure that is to be established in order to manage the data resources must be described. Also, the methodologies that are to be employed to store the data must be defined. 
In addition, a description of the database technology to be employed must be generated, as well as a description of the processes that are to manipulate the data. It is also important to design interfaces to the data by other systems, as well as a design for the infrastructure that is to support common data operations (i.e. emergency procedures, data imports, data backups, external transfers of data). Without the guidance of a properly implemented data architecture design, common data operations might be implemented in different ways, rendering it difficult to understand and control the flow of data within such systems. This sort of fragmentation is undesirable due to the potential increased cost and the data disconnects involved. These sorts of difficulties may be encountered with rapidly growing enterprises and also enterprises that service different lines of business. Properly executed, the data architecture phase of information system planning forces an organization to specify and describe both internal and external information flows. These are patterns that the organization may not have previously taken the time to conceptualize. It is therefore possible at this stage to identify costly information shortfalls, disconnects between departments, and disconnects between organizational systems that may not have been evident before the data architecture analysis. Constraints and influences Various constraints and influences will have an effect on data architecture design. These include enterprise requirements, technology drivers, economics, business policies and data processing needs. Enterprise requirements These generally include such elements as economical and effective system expansion, acceptable performance levels (especially system access speed), transaction reliability, and transparent data management. In addition, the conversion of raw data such as transaction records and image files into more useful information forms through such features as data warehouses is also a common organizational requirement, since this enables managerial decision making and other organizational processes. One of the architecture techniques is the split between managing transaction data and (master) reference data. Another is splitting data capture systems from data retrieval systems (as done in a data warehouse). Technology drivers These are usually suggested by the completed data architecture and database architecture designs. In addition, some technology drivers will derive from existing organizational integration frameworks and standards, organizational economics, and existing site resources (e.g. previously purchased software licensing). In many cases, the integration of multiple legacy systems requires the use of data virtualization technologies. Economics These are also important factors that must be considered during the data architecture phase. It is possible that some solutions, while optimal in principle, may not be potential candidates due to their cost. External factors such as the business cycle, interest rates, market conditions, and legal considerations could all have an effect on decisions relevant to data architecture. Business policies Business policies that also drive data architecture design include internal organizational policies, rules of regulatory bodies, professional standards, and applicable governmental laws that can vary by applicable agency. These policies and rules describe the manner in which the enterprise wishes to process its data. 
Data processing needs These include accurate and reproducible transactions performed in high volumes, data warehousing for the support of management information systems (and potential data mining), repetitive periodic reporting, ad hoc reporting, and support of various organizational initiatives as required (i.e. annual budgets, new product development). See also Controlled vocabulary Data mesh, a domain-oriented data architecture Disparate system Enterprise Information Security Architecture - (EISA) positions data security in the enterprise information framework. FDIC Enterprise Architecture Framework Information silo TOGAF References Further reading Bass, L.; John, B.; & Kates, J. (2001). Achieving Usability Through Software Architecture, Carnegie Mellon University. Lewis, G.; Comella-Dorda, S.; Place, P.; Plakosh, D.; & Seacord, R., (2001). Enterprise Information System Data Architecture Guide Carnegie Mellon University. Adleman, S.; Moss, L.; Abai, M. (2005). Data Strategy Addison-Wesley Professional. External links Achieving Usability Through Software Architecture, sei.cmu.edu 2001 The Logical Data Architecture, by Nirmal Baid Building a modern data and analytics architecture The “Right to Repair” Data Architecture with DataOps, the DataOps Blog TOGAF 9: Preparation Process Computer data Data engineering Enterprise architecture
Data architecture
[ "Technology", "Engineering" ]
1,468
[ "Software engineering", "Computer data", "Data engineering", "Data" ]
4,072,055
https://en.wikipedia.org/wiki/Cogging%20torque
Cogging torque of electrical motors is the torque due to the interaction between the permanent magnets of the rotor and the stator slots of a permanent magnet machine. It is also known as detent or no-current torque. This torque is position dependent and its periodicity per revolution depends on the number of magnetic poles and the number of teeth on the stator. Cogging torque is an undesirable component for the operation of such a motor. It is especially prominent at lower speeds, with the symptom of jerkiness. Cogging torque results in torque as well as speed ripple; however, at high speed the motor moment of inertia filters out the effect of cogging torque. Reducing the cogging torque A summary of techniques used for reducing cogging torque: Skewing stator stack or magnets Using fractional slots per pole Optimizing the magnet pole arc or width Almost all the techniques used against cogging torque also reduce the motor counter-electromotive force and so reduce the resultant running torque. A slotless and coreless permanent magnet motor does not have any cogging torque. See also Torque ripple Dual-rotor permanent magnet induction motor Footnotes and References Islam, M.S. Mir, S. Sebastian, T. Delphi Steering, Saginaw, MI, USA "Issues in reducing the cogging torque of mass-produced permanent-magnet brushless DC motor". External links [D. Hanselman] Electric motors Torque
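For a rough feel for the periodicity described above, a commonly used estimate is that the number of cogging cycles per mechanical revolution equals the least common multiple of the slot count and the pole count (this relationship, and the motor numbers below, are assumptions offered for illustration rather than taken from the sources cited here):

```python
from math import gcd

def cogging_cycles_per_revolution(slots: int, poles: int) -> int:
    """Cogging torque periods per mechanical revolution, taken as lcm(slots, poles).
    A larger value generally goes with a smaller cogging amplitude, which is one
    reason fractional slots-per-pole combinations are favoured."""
    return slots * poles // gcd(slots, poles)

print(cogging_cycles_per_revolution(12, 10))  # 60 - a common fractional-slot choice
print(cogging_cycles_per_revolution(12, 4))   # 12 - an integral slots-per-pole design
```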
Cogging torque
[ "Physics", "Technology", "Engineering" ]
299
[ "Force", "Physical quantities", "Engines", "Electric motors", "Electrical engineering", "Wikipedia categories named after physical quantities", "Torque" ]
4,072,145
https://en.wikipedia.org/wiki/Journal%20of%20Applied%20Meteorology%20and%20Climatology
The Journal of Applied Meteorology and Climatology (JAMC; formerly Journal of Applied Meteorology) is a scientific journal published by the American Meteorological Society. It publishes applied research related to physical meteorology, cloud physics, hydrology, weather modification, satellite meteorology, boundary layer processes, air pollution meteorology (including dispersion and chemical processes), agricultural and forest meteorology, and applied meteorological numerical models of all types. See also List of scientific journals List of scientific journals in earth and atmospheric sciences Atmospheric dispersion modeling Academic journals established in 1962 English-language journals American Meteorological Society academic journals Climatology journals
Journal of Applied Meteorology and Climatology
[ "Chemistry", "Engineering", "Environmental_science" ]
127
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
4,072,229
https://en.wikipedia.org/wiki/Huq%C3%BAqu%27ll%C3%A1h
Ḥuqúqu'lláh ("Right of God") is a voluntary wealth tax paid by adherents of the Baháʼí Faith to support the work of the religion. Individuals following the practice calculate 19% of their discretionary income (after-tax income minus essential expenses) and send it to the head of the religion, which since 1963 has been the Universal House of Justice. Ḥuqúqu'lláh is a Baháʼí law established by Baháʼu'lláh in the Kitáb-i-Aqdas in 1873. It is separate and distinct from the general Baháʼí funds. It provides for the financial security of the community by funding promotional activities and the upkeep of properties, and it is a basis for a future welfare program. The Ḥuqúqu'lláh payment is considered a way to purify one's possessions. It is an individual obligation; nobody in the general community should know who has or has not contributed, nor should anyone be solicited individually for funds. Along with several other practices, it was initially only applicable to Baháʼís of the Middle East until 1992, when the authoritative English translation of the Kitáb-i-Aqdas was published and the Universal House of Justice made Ḥuqúqu'lláh universally applicable. A central office to receive payments was established at the Baháʼí World Centre in 1991, and payments are made to trustees appointed by the Universal House of Justice in every country or region. The obligation is similar to the Shia practice of Khums: a 20% wealth tax payable to the Imams. History Gradual implementation Baháʼu'lláh wrote down the law of Huqúqu'lláh in the Kitáb-i-Aqdas in 1873, but he did not accept any payments initially. He delayed the release of the Kitáb-i-Aqdas because of apprehension that the law of Huqúq might be difficult to implement, or that some would assume that the money was for his personal use. When copies were sent to Iran, they came with instructions that Huqúqu'lláh was not to be implemented, and it remained thus for about 5 years, during which time Baháʼu'lláh returned money to donors. In 1878 he appointed the first trustee of Huqúqu'lláh, who had the responsibility of receiving the Huqúq, as it is known, from the Baháʼís in Iran. The majority of these donations were spent caring for the poor and needy of the community, or for teaching efforts. According to Baháʼí author Adib Taherzadeh, Baháʼu'lláh and his family led an austere life. Later the practice of Huqúqu'lláh was expanded to the Baháʼís of the Middle East. In 1985 information about the Huqúq was distributed worldwide and in 1992 the law was made universally applicable. As the number of payments increased, deputies and representatives to receive the payments have been appointed. In 1991 the central office of Huqúqu'lláh was established at the Baháʼí World Centre in Haifa, Israel. Timeline The following is a basic timeline related to Ḥuqúqu'lláh, including trustees. Revelation of the Kitáb-i-Aqdas (1873) Amínu'l-Bayán (1878-1881) Hájí Amín, Amín-i-Iláhi (1881-1928) Hájí Ghulám-Ridá; Amín-i-Amín (1928-1938) Valíyu'lláh Varqá (1938-1955) ʻAlí-Muhammad Varqá (1955-2007) Compilation Ḥuqúqu'lláh (1985) Central office of Ḥuqúqu'lláh (1991) Kitáb-i-Aqdas in English, Law of Ḥuqúqu'lláh universally applicable (1992–present) Purpose The Ḥuquq'ullah is not meant to be a donation, but is rather meant to be a claim by God for support of the interests of all people. It is partly used to equalize wealth across different parts of the world. 
The payment of the Ḥuquq'ullah is also meant to increase the spiritual link between the religion's central institutions and the individual. This offering is to be considered separate from giving to the various Baháʼí funds and takes precedence over them. Furthermore, the Ḥuquq'ullah should not be solicited by anyone, and no payments of it can be accepted unless the individual was doing so "with the utmost joy". Calculation The payment of Ḥuqúqu'lláh is based on the calculation of the value of the individual's possessions, which includes one's merchandise, property and income, after all necessary expenses have been paid. If a person has possessions or wealth in excess of what is necessary equal in value to at least nineteen mithqáls of gold (2.2246 ounces or 69 grams) it is a spiritual obligation to pay nineteen percent of the total amount, once only, as Ḥuqúqu'lláh. Thereafter, whenever an individual acquires more possessions or wealth from income by the amount of at least nineteen mithqáls of gold, one is to pay nineteen percent of this increase, and so on for each further increase. Certain categories of possessions are exempt from the payment of the Ḥuqúqu'lláh, such as one's residence, necessary household furnishings, business or professional equipment and furnishings, and others. Baháʼu'lláh has left it to the individual to decide which items are considered necessary and which are not. Specific provisions are outlined to cover cases of financial loss, the failure of investments to yield a profit and for the payment of the Ḥuqúqu'lláh in the event of the person's death. Role in succession of authority During the lifetime of Baháʼu'lláh, the Ḥuqúqu'lláh offerings were made directly to him, and following his death, to ʻAbdu'l-Bahá. In his Will and Testament, ʻAbdu'l-Bahá indicated that payments should go to the appointed Guardian and named Shoghi Effendi as the first of potentially many Guardians, following primogeniture. After Shoghi Effendi died without appointing a successor, the custodial Hands of the Cause headed the Faith until the first election of the Universal House of Justice. See also Baháʼí laws Socio-economic development (Baháʼí) Notes References Baháʼí sources External links Kitáb-i-Aqdas Project: Comprehensive Indices - Huqúqu'lláh Redistribution of Wealth - a compilation by the Baha'i World Centre Sixteen Questions about Huququ'llah - by the Universal House of Justice (1991) Examples of Huququ'llah Transactions - by the Universal House of Justice (1991) Bahá'í practices Bahá'í terminology Philanthropy Religious taxation
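A small numerical sketch of the calculation just described (the gold price and wealth figures are made up, and the treatment of amounts below a complete 19-mithqál unit follows one common reading of the rule, so it is an assumption here rather than a statement of the law):

```python
NINETEEN_MITHQALS_IN_GRAMS = 69  # the unit quoted above (about 2.2246 oz of gold)

def huququllah_due(surplus_wealth, gold_price_per_gram):
    """Payment due on newly acquired surplus wealth (what remains after
    necessary expenses), at 19% of each complete unit of 19 mithqals of gold.
    Under this reading, wealth short of one complete unit carries no payment."""
    unit_value = NINETEEN_MITHQALS_IN_GRAMS * gold_price_per_gram
    complete_units = int(surplus_wealth // unit_value)
    return 0.19 * complete_units * unit_value

# Example with made-up numbers: 10,000 of surplus wealth, gold at 60 per gram.
# The unit is then worth 4,140; two complete units fit, so 19% of 8,280 is due.
print(huququllah_due(10_000, 60))  # 1573.2
```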
Huqúqu'lláh
[ "Biology" ]
1,408
[ "Philanthropy", "Behavior", "Altruism" ]
4,072,268
https://en.wikipedia.org/wiki/Journal%20of%20Hydrometeorology
The Journal of Hydrometeorology is a scientific journal published by the American Meteorological Society. It covers the modeling, observing, and forecasting of processes related to water and energy fluxes and storage terms, including interactions with the boundary layer and lower atmosphere, and including processes related to precipitation, radiation, and other meteorological inputs. See also List of scientific journals in earth and atmospheric sciences External links Academic journals established in 2000 Bimonthly journals English-language journals American Meteorological Society academic journals Meteorology journals Hydrology journals
Journal of Hydrometeorology
[ "Environmental_science" ]
105
[ "Hydrology", "Hydrology journals" ]
4,072,443
https://en.wikipedia.org/wiki/Kosaraju%27s%20algorithm
In computer science, Kosaraju-Sharir's algorithm (also known as Kosaraju's algorithm) is a linear time algorithm to find the strongly connected components of a directed graph. Aho, Hopcroft and Ullman credit it to S. Rao Kosaraju and Micha Sharir. Kosaraju suggested it in 1978 but did not publish it, while Sharir independently discovered it and published it in 1981. It makes use of the fact that the transpose graph (the same graph with the direction of every edge reversed) has exactly the same strongly connected components as the original graph. The algorithm The primitive graph operations that the algorithm uses are to enumerate the vertices of the graph, to store data per vertex (if not in the graph data structure itself, then in some table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse edges in the backward direction); however the last can be done without, at the price of constructing a representation of the transpose graph during the forward traversal phase. The only additional data structure needed by the algorithm is an ordered list L of graph vertices, that will grow to contain each vertex once. If strong components are to be represented by appointing a separate root vertex for each component, and assigning to each vertex the root vertex of its component, then Kosaraju's algorithm can be stated as follows. For each vertex u of the graph, mark u as unvisited. Let L be empty. For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine: If u is unvisited then: Mark u as visited. For each out-neighbour v of u, do Visit(v). Prepend u to L. Otherwise do nothing. For each element u of L in order, do Assign(u,u), where Assign(u,root) is the recursive subroutine: If u has not been assigned to a component then: Assign u as belonging to the component whose root is root. For each in-neighbour v of u, do Assign(v,root). Otherwise do nothing. Trivial variations are to instead assign a component number to each vertex, or to construct per-component lists of the vertices that belong to it. The unvisited/visited indication may share storage location with the final assignment of root for a vertex. The key point of the algorithm is that during the first (forward) traversal of the graph edges, vertices are prepended to the list L in post-order relative to the search tree being explored. This means it does not matter whether a vertex v was first visited because it appeared in the enumeration of all vertices or because it was the out-neighbour of another vertex u that got visited; either way v will be prepended to L before u is, so if there is a forward path from u to v then u will appear before v on the final list (unless u and v both belong to the same strong component, in which case their relative order in L is arbitrary). This means that each element u of the list can be made to correspond to a block, where the block consists of all the vertices reachable from vertex u using just outward edges at each node in the path. It is important to note that no vertex in the block beginning at u has an inward link from any of the blocks beginning at some vertex to its right, i.e., the blocks corresponding to vertices appearing after u in the list. This is so, because otherwise the vertex having the inward link (say from the block beginning at w) would have been already visited and prepended to L in the block of w, which is a contradiction. 
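A direct rendering of the procedure stated above, as a minimal sketch (not taken from the cited sources; it uses plain recursion, appends to a list and reverses it instead of literally prepending, and represents the graph as a dict of adjacency lists with every vertex present as a key):

```python
def kosaraju(graph):
    """graph: dict mapping each vertex to a list of its out-neighbours
    (every vertex must appear as a key). Returns a dict mapping each
    vertex to the root of its strongly connected component."""
    visited, order = set(), []           # 'order' plays the role of the list L

    def visit(u):                         # phase 1: forward DFS, post-order
        if u in visited:
            return
        visited.add(u)
        for v in graph[u]:
            visit(v)
        order.append(u)                   # appending here, then reversing, = prepending

    for u in graph:
        visit(u)

    transpose = {u: [] for u in graph}    # in-neighbours via the transpose graph
    for u in graph:
        for v in graph[u]:
            transpose[v].append(u)

    component = {}

    def assign(u, root):                  # phase 2: backward DFS from each new root
        if u in component:
            return
        component[u] = root
        for v in transpose[u]:
            assign(v, root)

    for u in reversed(order):             # reversed append-order == the prepend-order list L
        assign(u, u)
    return component

# Example: two strong components, {a, b, c} and {d}.
print(kosaraju({'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))
```

The reversed append order stands in for the prepend-built list L, and the explicit transpose supplies the in-neighbour enumeration discussed at the start of this section; a large graph would need an explicit stack in place of recursion.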
On the other hand, vertices in the block starting at u can have edges pointing to the blocks starting at some vertex appearing after u in the list. Step 3 of the algorithm starts from the first element of L, call it r, and assigns all the vertices which point to it the same component as r. Note that these vertices can only lie in the block beginning at r, as higher blocks can't have links pointing to vertices in the block of r. Let the set of all vertices that point to r be Q. Subsequently, all the vertices pointing to the vertices in Q are added too, and so on till no more vertices can be added. There is a path to r from all the vertices added to the component containing r. And there is a path from r to all the vertices added, as all those lie in the block beginning at r (which contains all the vertices reachable from r following outward edges at each step of the path). Hence all these form a single strongly connected component. Moreover, no vertex remains, because, to be in this strongly connected component a vertex must be reachable from r and must be able to reach r. All vertices that are able to reach r, if any, lie in the first block only, and all the vertices in the first block are reachable from r. So the algorithm chooses all the vertices in the connected component of r. When we reach vertex u in the loop of step 3, and u hasn't been assigned to any component, we can be sure that all the vertices to the left of u have made their connected components; that u doesn't belong to any of those components; and that u doesn't point to any of the vertices to the left of it. Also, since no edge from higher blocks to u's block exists, the proof remains the same. As given above, the algorithm for simplicity employs depth-first search, but it could just as well use breadth-first search as long as the post-order property is preserved. The algorithm can be understood as identifying the strong component of a vertex u as the set of vertices which are reachable from u both by backwards and forwards traversal. Writing F(u) for the set of vertices reachable from u by forward traversal, B(u) for the set of vertices reachable from u by backwards traversal, and P(u) for the set of vertices which appear strictly before u on the list L after phase 2 of the algorithm, the strong component containing a vertex u appointed as root is F(u) ∩ B(u) = B(u) ∖ (B(u) ∖ F(u)). Set intersection is computationally costly, but it is logically equivalent to a double set difference, and since B(u) ∖ F(u) ⊆ P(u) it becomes sufficient to test whether a newly encountered element of B(u) has already been assigned to a component or not. Complexity Provided the graph is described using an adjacency list, Kosaraju's algorithm performs two complete traversals of the graph and so runs in Θ(V+E) (linear) time, which is asymptotically optimal because there is a matching lower bound (any algorithm must examine all vertices and edges). It is the conceptually simplest efficient algorithm, but is not as efficient in practice as Tarjan's strongly connected components algorithm and the path-based strong component algorithm, which perform only one traversal of the graph. If the graph is represented as an adjacency matrix, the algorithm requires Ο(V²) time. References Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman. Data Structures and Algorithms. Addison-Wesley, 1983. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms, 3rd edition. The MIT Press, 2009. Micha Sharir. A strong-connectivity algorithm and its applications to data flow analysis. Computers and Mathematics with Applications 7(1):67–72, 1981. 
External links Good Math, Bad Math: Computing Strongly Connected Components Graph algorithms Graph connectivity
Kosaraju's algorithm
[ "Mathematics" ]
1,489
[ "Mathematical relations", "Graph theory", "Graph connectivity" ]
4,073,095
https://en.wikipedia.org/wiki/Peripheral%20drift%20illusion
The peripheral drift illusion (PDI) refers to a motion illusion generated by the presentation of a sawtooth luminance grating in the visual periphery. This illusion was first described by Faubert and Herbert (1999), although a similar effect called the "escalator illusion" was reported by Fraser and Wilcox (1979). A variant of the PDI was created by Kitaoka Akiyoshi and Ashida (2003) who took the continuous sawtooth luminance change, and reversed the intermediate greys. Kitaoka has created numerous variants of the PDI, and one called "rotating snakes" has become very popular. The latter demonstration has kindled great interest in the PDI. The illusion is easily seen when fixating off to the side of it, and then blinking as fast as possible. Most observers can see the illusion easily when reading text with the illusion figure in the periphery. The motion of such illusions is consistently perceived in a dark-to-light direction. Two papers have been published examining the neural mechanisms involved in seeing the PDI (Backus & Oruç, 2005; Conway et al., 2005). Faubert and Herbert (1999) suggested the illusion was based on temporal differences in luminance processing producing a signal that tricks the motion system. Both of the articles from 2005 are broadly consistent with those ideas, although contrast appears to be an important factor (Backus & Oruç, 2005). Rotating snakes Rotating snakes is an optical illusion developed by Professor Akiyoshi Kitaoka in 2003. A type of peripheral drift illusion, the "snakes" consist of several bands of color which resemble coiled serpents. Although the image is static, the snakes appear to be moving in circles. The speed of perceived motion depends on the frequency of microsaccadic eye movements (Alexander & Martinez-Conde, 2019). Gallery References Inline citations General references Alexander, R. G., Martinez-Conde, S. (2019). Fixational eye movements. Eye Movement Research. Springer, Cham, 104–106, . Backus, B. T., Oruç, İ. (2005). Illusory motion from change over time in the response to contrast and luminance. Journal of Vision, 5(11), 1055–1069, , . Conway, B. R., Kitaoka, A., Yazdanbakhsh, A., Pack, C. C., Livingstone, M. S. (2005). Neural basis for a powerful static motion illusion. Journal of Neuroscience, 25, 5651–5656. Faubert, J., Herbert, A. M. (1999). The peripheral drift illusion: A motion illusion in the visual periphery. Perception, 28, 617–622. Fraser, A., Wilcox, K. J. (1979). Perception of illusory movement. Nature, 281, 565–566. Kitaoka. A., Ashida. H. (2003). Phenomenal characteristics of the peripheral drift illusion. Journal of Vision, 15, 261–262. External links Rotating snakes at Akiyoshi's illusion pages Rotating rings at Sarcone's optical illusion pattern page These patterns move, but it’s an illusion by Smithsonian Research Lab Does your pet see Peripheral drift? a slideshow designed for testing on animals Optical illusions Vision
Peripheral drift illusion
[ "Physics" ]
701
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
4,073,116
https://en.wikipedia.org/wiki/L-reduction
In computer science, particularly the study of approximation algorithms, an L-reduction ("linear reduction") is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems. The term L reduction is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept. Definition Let A and B be optimization problems and cA and cB their respective cost functions. A pair of functions f and g is an L-reduction if all of the following conditions are met: functions f and g are computable in polynomial time, if x is an instance of problem A, then f(x) is an instance of problem B, if y' is a solution to f(x), then g(y' ) is a solution to x, there exists a positive constant α such that , there exists a positive constant β such that for every solution y' to f(x) . Properties Implication of PTAS reduction An L-reduction from problem A to problem B implies an AP-reduction when A and B are minimization problems and a PTAS reduction when A and B are maximization problems. In both cases, when B has a PTAS and there is an L-reduction from A to B, then A also has a PTAS. This enables the use of L-reduction as a replacement for showing the existence of a PTAS-reduction; Crescenzi has suggested that the more natural formulation of L-reduction is actually more useful in many cases due to ease of usage. Proof (minimization case) Let the approximation ratio of B be . Begin with the approximation ratio of A, . We can remove absolute values around the third condition of the L-reduction definition since we know A and B are minimization problems. Substitute that condition to obtain Simplifying, and substituting the first condition, we have But the term in parentheses on the right-hand side actually equals . Thus, the approximation ratio of A is . This meets the conditions for AP-reduction. Proof (maximization case) Let the approximation ratio of B be . Begin with the approximation ratio of A, . We can remove absolute values around the third condition of the L-reduction definition since we know A and B are maximization problems. Substitute that condition to obtain Simplifying, and substituting the first condition, we have But the term in parentheses on the right-hand side actually equals . Thus, the approximation ratio of A is . If , then , which meets the requirements for PTAS reduction but not AP-reduction. Other properties L-reductions also imply P-reduction. One may deduce that L-reductions imply PTAS reductions from this fact and the fact that P-reductions imply PTAS reductions. L-reductions preserve membership in APX for the minimizing case only, as a result of implying AP-reductions. Examples Dominating set: an example with α = β = 1 Token reconfiguration: an example with α = 1/5, β = 2 See also MAXSNP Approximation-preserving reduction PTAS reduction References G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, M. Protasi. Complexity and Approximation. Combinatorial optimization problems and their approximability properties. 1999, Springer. Reduction (complexity) Approximation algorithms
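For reference, the two quantitative conditions in the definition above (those introducing the constants α and β) are standardly written as the following pair of inequalities, with OPT denoting the optimum cost of an instance; this is the usual textbook formulation, stated here for convenience rather than quoted from the references below:

```latex
\mathrm{OPT}_B\bigl(f(x)\bigr) \;\le\; \alpha\,\mathrm{OPT}_A(x),
\qquad
\bigl|\mathrm{OPT}_A(x) - c_A\bigl(g(y')\bigr)\bigr|
\;\le\; \beta\,\bigl|\mathrm{OPT}_B\bigl(f(x)\bigr) - c_B(y')\bigr| .
```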
L-reduction
[ "Mathematics" ]
739
[ "Functions and mappings", "Mathematical objects", "Approximation algorithms", "Reduction (complexity)", "Mathematical relations", "Approximations" ]
4,073,195
https://en.wikipedia.org/wiki/Hypobromous%20acid
Hypobromous acid is an inorganic compound with the chemical formula HOBr. It is a weak, unstable acid. It is mainly produced and handled in an aqueous solution. It is generated both biologically and commercially as a disinfectant. Salts of hypobromite are rarely isolated as solids. Synthesis and properties Addition of bromine to water gives hypobromous acid and hydrobromic acid (HBr(aq)) via a disproportionation reaction. Br2 + H2O ⇌ HOBr + HBr In nature, hypobromous acid is produced by bromoperoxidases, which are enzymes that catalyze the oxidation of bromide with hydrogen peroxide: Br− + H2O2 + H+ → HOBr + H2O Hypobromous acid has a pKa of 8.65 and is therefore only partially dissociated in water at pH 7. Like the acid, hypobromite salts are unstable and undergo a slow disproportionation reaction to yield the respective bromate and bromide salts. Its chemical and physical properties are similar to those of other hypohalites. Uses HOBr is used as a bleach, an oxidizer, a deodorant, and a disinfectant, due to its ability to kill the cells of many pathogens. The compound is generated in warm-blooded vertebrate organisms especially by eosinophils, which produce it by the action of eosinophil peroxidase, an enzyme which preferentially uses bromide. Bromide is also used in hot tubs and spas as a germicidal agent, using the action of an oxidizing agent to generate hypobromite in a similar fashion to the peroxidase in eosinophils. It is especially effective when used in combination with its congener, hypochlorous acid. References Hypobromites Oxidizing acids Halogen oxoacids Triatomic molecules
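To make the degree of dissociation at a given pH concrete, the acid/base speciation can be computed from the pKa quoted above with the Henderson–Hasselbalch relation (a minimal sketch; the chosen pH values are just illustrative):

```python
def fraction_as_hypobromite(pH, pKa=8.65):
    """Fraction of total HOBr/OBr- present as the OBr- anion at a given pH,
    using [OBr-]/[HOBr] = 10**(pH - pKa)."""
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

for pH in (7.0, 8.65, 9.5):
    print(pH, round(fraction_as_hypobromite(pH), 3))
# 7.0  -> 0.022  (mostly the undissociated acid)
# 8.65 -> 0.5    (pH equal to the pKa)
# 9.5  -> 0.876  (mostly hypobromite)
```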
Hypobromous acid
[ "Physics", "Chemistry" ]
408
[ "Acids", "Molecules", "Oxidizing agents", "Triatomic molecules", "Oxidizing acids", "Matter" ]
4,073,449
https://en.wikipedia.org/wiki/Contingent%20aftereffect
In human perception, contingent aftereffects are illusory percepts that are apparent on a test stimulus after exposure to an induction stimulus for an extended period. Contingent aftereffects can be contrasted with simple aftereffects, the latter requiring no test stimulus for the illusion/mis-perception to be apparent. Contingent aftereffects have been studied in different perceptual domains. For instance, visual contingent aftereffects, auditory contingent aftereffects and haptic contingent aftereffects have all been discovered. An example of a visual contingent aftereffect is the McCollough effect. The McCollough effect is one of a family of contingent aftereffects related to the processing of color and orientation. One can induce the aftereffect by exposure to a magenta and black vertical grating alternating with a green and black horizontal grating. After a few minutes of induction, followed by a break of a few minutes, black-and-white vertical and horizontal gratings will appear colored. The verticals will look green and horizontals pink in the example given. Therefore, the illusory color apparent on the test fields is contingent on the orientation of the lines in that test field. Furthermore, the orientation-color contingencies present in the illusion are the reverse of those present in the adapting stimulus (i.e., the magenta-vertical and green-horizontal adaptation gratings produced illusory magenta on the horizontal test gratings and illusory green on the vertical test grating). The illusion will reverse if one rotates one's head 90°. This is because the effect is retino-topic, that is, the effect is dependent on the orientation of the test lines on the retina. There are also color-contingent motion aftereffects, and other varieties of these phenomena. See also Optical illusion Visual perception References Favreau O.E., Emerson V.F., Corballis M.C. (1972) Motion Perception: A Color-Contingent Aftereffect. Science, 7, 78–79. Optical illusions
Contingent aftereffect
[ "Physics" ]
437
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
4,073,971
https://en.wikipedia.org/wiki/Model%20elimination
Model elimination is the name attached to a pair of proof procedures invented by Donald W. Loveland, the first of which was published in 1968 in the Journal of the ACM. Their primary purpose is to carry out automated theorem proving, though they can readily be extended to logic programming, including the more general disjunctive logic programming. Model elimination is closely related to resolution while also bearing characteristics of a tableaux method. It is a progenitor of the SLD resolution procedure used in the Prolog logic programming language. While somewhat eclipsed by attention to, and progress in, resolution theorem provers, model elimination has continued to attract the attention of researchers and software developers. Today there are several theorem provers under active development that are based on the model elimination procedure. References Loveland, D. W. (1968) Mechanical theorem-proving by model elimination. Journal of the ACM, 15, 236—251. Automated theorem proving Logical calculi Logic in computer science
Model elimination
[ "Mathematics" ]
200
[ "Logic in computer science", "Automated theorem proving", "Mathematical logic", "Logical calculi", "Computational mathematics" ]
4,074,422
https://en.wikipedia.org/wiki/Stochastic%20modelling%20%28insurance%29
This page is concerned with the stochastic modelling as applied to the insurance industry. For other stochastic modelling applications, please see Monte Carlo method and Stochastic asset models. For mathematical definition, please see Stochastic process. "Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s). Its application initially started in physics. It is now being applied in engineering, life sciences, social sciences, and finance. See also Economic capital. Valuation Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on. So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency. Deterministic approach The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a point estimate - the best single estimate of what the company's current solvency position is, or multiple points of estimate - depending on the problem definition. Selection and identification of parameter values are frequently a challenge to less experienced analysts. The downside of this approach is that it does not fully cover the fact that there is a whole range of possible outcomes and some are more probable and some are less. Stochastic modelling A stochastic approach would be to set up a projection model which looks at a single policy, an entire portfolio or an entire company. But rather than setting investment returns according to their most likely estimate, for example, the model uses random variations to look at what investment conditions might be like. Based on a set of random variables, the experience of the policy/portfolio/company is projected, and the outcome is noted. Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times. At the end, a distribution of outcomes is available which shows not only the most likely estimate but what ranges are reasonable too. The most likely estimate is given by the center of mass of the distribution curve (formally known as the probability density function), which is typically also the peak (mode) of the curve, but may be different e.g. for asymmetric distributions. This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A deterministic simulation, with varying scenarios for future investment return, does not provide a good way of estimating the cost of providing this guarantee. 
This is because it does not allow for the volatility of investment returns in each future time period or the chance that an extreme event in a particular time period leads to an investment return less than the guarantee. Stochastic modelling builds volatility and variability (randomness) into the simulation and therefore provides a better representation of real life from more angles. Numerical evaluations of quantities Stochastic models help to assess the interactions between variables, and are useful tools to numerically evaluate quantities, as they are usually implemented using Monte Carlo simulation techniques (see Monte Carlo method). While there is an advantage here, in estimating quantities that would otherwise be difficult to obtain using analytical methods, a disadvantage is that such methods are limited by computing resources as well as simulation error. Below are some examples: Means Using statistical notation, it is a well-known result that the mean of a function, f, of a random variable X is not necessarily the function of the mean of X. For example, in application, applying the best estimate (defined as the mean) of investment returns to discount a set of cash flows will not necessarily give the same result as assessing the best estimate to the discounted cash flows. A stochastic model would be able to assess this latter quantity with simulations. Percentiles This idea is seen again when one considers percentiles (see percentile). When assessing risks at specific percentiles, the factors that contribute to these levels are rarely at these percentiles themselves. Stochastic models can be simulated to assess the percentiles of the aggregated distributions. Truncations and censors Truncating and censoring of data can also be estimated using stochastic models. For instance, applying a non-proportional reinsurance layer to the best estimate losses will not necessarily give us the best estimate of the losses after the reinsurance layer. In a simulated stochastic model, the simulated losses can be made to "pass through" the layer and the resulting losses assessed appropriately. The asset model Although the text above referred to "random variations", the stochastic model does not just use any arbitrary set of values. The asset model is based on detailed studies of how markets behave, looking at averages, variations, correlations, and more. The models and underlying parameters are chosen so that they fit historical economic data, and are expected to produce meaningful future projections. There are many such models, including the Wilkie Model, the Thompson Model and the Falcon Model. The claims model The claims arising from policies or portfolios that the company has written can also be modelled using stochastic methods. This is especially important in the general insurance sector, where the claim severities can have high uncertainties. Frequency-Severity models Depending on the portfolios under investigation, a model can simulate all or some of the following factors stochastically: Number of claims Claim severities Timing of claims Claims inflations can be applied, based on the inflation simulations that are consistent with the outputs of the asset model, as are dependencies between the losses of different portfolios. The relative uniqueness of the policy portfolios written by a company in the general insurance sector means that claims models are typically tailor-made. 
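To make the stochastic projection described above concrete - in particular the cost of a minimum-return guarantee and the use of percentiles of the simulated distribution - the following minimal sketch runs a simple Monte Carlo projection of a single-premium fund. It is not taken from any actuarial reference: the lognormal return distribution, the 5% per annum guarantee, the 10-year horizon and all parameter values are illustrative assumptions.

```python
import random
import statistics

# Illustrative assumptions (not calibrated to any real portfolio):
# lognormal annual fund returns, a 5% p.a. guaranteed minimum return,
# a 10-year horizon and a single premium of 100.
MU, SIGMA = 0.05, 0.15        # parameters of log(gross annual return)
GUARANTEE_RATE = 0.05
YEARS = 10
PREMIUM = 100.0
N_SIMULATIONS = 10_000

def simulate_shortfall(rng: random.Random) -> float:
    """One stochastic projection: the shortfall the insurer funds at maturity."""
    fund = PREMIUM
    for _ in range(YEARS):
        fund *= rng.lognormvariate(MU, SIGMA)   # gross return for one year
    guaranteed = PREMIUM * (1.0 + GUARANTEE_RATE) ** YEARS
    return max(guaranteed - fund, 0.0)

rng = random.Random(42)
shortfalls = [simulate_shortfall(rng) for _ in range(N_SIMULATIONS)]

# A distribution of outcomes rather than a single point estimate:
mean_cost = statistics.mean(shortfalls)
p99_5 = statistics.quantiles(shortfalls, n=200)[-1]   # ~99.5th percentile
print(f"Mean guarantee cost per policy:   {mean_cost:.2f}")
print(f"99.5th percentile guarantee cost: {p99_5:.2f}")
```

With these illustrative parameters the best-estimate return path earns roughly the guaranteed rate, so a single deterministic projection would suggest the guarantee costs almost nothing; the simulated distribution, by contrast, shows a material mean cost and a much larger cost at adverse percentiles, which is the point made above.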
Stochastic reserving models Estimating future claims liabilities might also involve estimating the uncertainty around the estimates of claim reserves. See J Li's article "Comparison of Stochastic Reserving Models" (published in the Australian Actuarial Journal, volume 12 issue 4) for a recent article on this topic. References Guidance on stochastic modelling for life insurance reserving (pdf) J Li's article on stochastic reserving from the Australian Actuarial Journal, 2006 (pdf) Stochastic Modelling For Dummies, Actuarial Society of South Africa Actuarial science Stochastic models Monte Carlo methods in finance
Stochastic modelling (insurance)
[ "Mathematics" ]
1,478
[ "Applied mathematics", "Actuarial science" ]
4,074,700
https://en.wikipedia.org/wiki/NACA%20airfoil
The NACA airfoil series is a set of standardized airfoil shapes developed by the National Advisory Committee for Aeronautics (NACA), which became widely used in the design of aircraft wings. Origins NACA initially developed the numbered airfoil system which was further refined by the United States Air Force at Langley Research Center. Four-digit series The NACA four-digit wing sections define the profile by: First digit describing maximum camber as percentage of the chord. Second digit describing the distance of maximum camber from the airfoil leading edge in tenths of the chord. Last two digits describing maximum thickness of the airfoil as percent of the chord. For example, the NACA 2412 airfoil has a maximum camber of 2% located 40% (0.4 chords) from the leading edge with a maximum thickness of 12% of the chord. The NACA 0015 airfoil is symmetrical, the 00 indicating that it has no camber. The 15 indicates that the airfoil has a 15% thickness to chord length ratio: it is 15% as thick as it is long. Equation for a symmetrical 4-digit NACA airfoil The formula for the shape of a NACA 00xx foil, with "xx" being replaced by the percentage of thickness to chord, is y_t = 5t [0.2969√x − 0.1260x − 0.3516x² + 0.2843x³ − 0.1015x⁴], where: x is the position along the chord from 0 to 1.00 (0 to 100%), y_t is the half thickness at a given value of x (centerline to surface), t is the maximum thickness as a fraction of the chord (so t gives the last two digits in the NACA 4-digit denomination divided by 100). In this equation, at x = 1 (the trailing edge of the airfoil), the thickness is not quite zero. If a zero-thickness trailing edge is required, for example for computational work, one of the coefficients should be modified such that they sum to zero. Modifying the last coefficient (i.e. to −0.1036) results in the smallest change to the overall shape of the airfoil. The leading edge approximates a cylinder with a chord-normalized radius of r = 1.1019 t². Now the coordinates (x_U, y_U) of the upper airfoil surface and (x_L, y_L) of the lower airfoil surface are x_U = x_L = x, y_U = +y_t and y_L = −y_t. Symmetrical 4-digit series airfoils by default have maximum thickness at 30% of the chord from the leading edge. Equation for a cambered 4-digit NACA airfoil The simplest asymmetric foils are the NACA 4-digit series foils, which use the same formula as that used to generate the 00xx symmetric foils, but with the line of mean camber bent. The formula used to calculate the mean camber line is y_c = (m/p²)(2px − x²) for 0 ≤ x ≤ p and y_c = (m/(1 − p)²)((1 − 2p) + 2px − x²) for p ≤ x ≤ 1, where m is the maximum camber (100 m is the first of the four digits), p is the location of maximum camber (10 p is the second digit in the NACA xxxx description). For example, a NACA 2412 airfoil uses a 2% camber (first digit) 40% (second digit) along the chord of a 0012 symmetrical airfoil having a thickness 12% (digits 3 and 4) of the chord. For this cambered airfoil, because the thickness needs to be applied perpendicular to the camber line, the coordinates (x_U, y_U) and (x_L, y_L), of respectively the upper and lower airfoil surface, become x_U = x − y_t sin θ, y_U = y_c + y_t cos θ and x_L = x + y_t sin θ, y_L = y_c − y_t cos θ, where θ = arctan(dy_c/dx). Five-digit series The NACA five-digit series describes more complex airfoil shapes. Its format is LPSTT, where: L: a single digit representing the theoretical optimal lift coefficient at ideal angle of attack CLI = 0.15 L (this is not the same as the lift coefficient CL), P: a single digit for the x coordinate of the point of maximum camber (max. camber at x = 0.05 P), S: a single digit indicating whether the camber is simple (S = 0) or reflex (S = 1), TT: the maximum thickness in percent of chord, as in a four-digit NACA airfoil code. 
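As a rough illustration of the four-digit equations above, the profile coordinates can be generated as in the following sketch. This is not a reference implementation: the function name, the point count and the uniform spacing of x stations are arbitrary choices (practical generators often use cosine spacing to cluster points near the leading edge).

```python
import math

def naca4(code: str, n_points: int = 50, closed_trailing_edge: bool = False):
    """Return (upper, lower) surface coordinates for a NACA 4-digit airfoil.

    code: e.g. "2412" -> m = 0.02, p = 0.4, t = 0.12 (fractions of chord).
    """
    m = int(code[0]) / 100.0          # maximum camber
    p = int(code[1]) / 10.0           # position of maximum camber
    t = int(code[2:]) / 100.0         # maximum thickness
    # Standard thickness polynomial; the last coefficient may be changed
    # to -0.1036 to close the trailing edge, as noted above.
    a4 = -0.1036 if closed_trailing_edge else -0.1015
    upper, lower = [], []
    for i in range(n_points + 1):
        x = i / n_points
        yt = 5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                        - 0.3516 * x**2 + 0.2843 * x**3 + a4 * x**4)
        if m == 0.0 or p == 0.0:      # symmetric 00xx section
            yc, dyc = 0.0, 0.0
        elif x < p:
            yc = m / p**2 * (2.0 * p * x - x**2)
            dyc = 2.0 * m / p**2 * (p - x)
        else:
            yc = m / (1.0 - p)**2 * ((1.0 - 2.0 * p) + 2.0 * p * x - x**2)
            dyc = 2.0 * m / (1.0 - p)**2 * (p - x)
        theta = math.atan(dyc)
        # Thickness applied perpendicular to the camber line:
        upper.append((x - yt * math.sin(theta), yc + yt * math.cos(theta)))
        lower.append((x + yt * math.sin(theta), yc - yt * math.cos(theta)))
    return upper, lower

upper, lower = naca4("2412")
print(upper[len(upper) // 2])  # a point near mid-chord on the upper surface
```

For a symmetric section such as 0015 the camber terms vanish and the surfaces reduce to y = ±y_t. Returning to the five-digit designation described above: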
For example, the NACA 23112 profile describes an airfoil with design lift coefficient of 0.3 (0.15 × 2), the point of maximum camber located at 15% chord (5 × 3), reflex camber (1), and maximum thickness of 12% of chord length (12). The camber line for the simple case (S = 0) is defined in two sections: where the chordwise location and the ordinate have been normalized by the chord. The constant is chosen so that the maximum camber occurs at ; for example, for the 230 camber line, and . Finally, constant is determined to give the desired lift coefficient. For a 230 camber-line profile (the first 3 numbers in the 5-digit series), is used. Non-reflexed 3 digit camber lines 3-digit camber lines provide a far forward location for the maximum camber. The camber line is defined as with the camber line gradient The following table presents the various camber-line profile coefficients for a theoretical design lift coefficient of 0.3 - the value of must be linearly scaled for a different desired design lift coefficient: Reflexed 3-digit camber lines Camber lines such as 231 makes the negative trailing edge camber of the 230 series profile to be positively cambered. This results in a theoretical pitching moment of 0. From From The following table presents the various camber-line profile coefficients for a theoretical design lift coefficient of 0.3 - the value of , and must be linearly scaled for a different desired design lift coefficient: Modifications Four- and five-digit series airfoils can be modified with a two-digit code preceded by a hyphen in the following sequence: One digit describing the roundness of the leading edge, with 0 being sharp, 6 being the same as the original airfoil, and larger values indicating a more rounded leading edge. One digit describing the distance of maximum thickness from the leading edge in tenths of the chord. For example, the NACA 1234-05 is a NACA 1234 airfoil with a sharp leading edge and maximum thickness 50% of the chord (0.5 chords) from the leading edge. In addition, for a more precise description of the airfoil all numbers can be presented as decimals. 1-series A new approach to airfoil design was pioneered in the 1930s, in which the airfoil shape was mathematically derived from the desired lift characteristics. Prior to this, airfoil shapes were first created and then had their characteristics measured in a wind tunnel. The 1-series airfoils are described by five digits in the following sequence: The number "1" indicating the series. One digit describing the distance of the minimum-pressure area in tenths of chord. A hyphen. One digit describing the lift coefficient in tenths. Two digits describing the maximum thickness in percent of chord. For example, the NACA 16-123 airfoil has minimum pressure 60% of the chord back with a lift coefficient of 0.1 and maximum thickness of 23% of the chord. 6-series An improvement over 1-series airfoils with emphasis on maximizing laminar flow. The airfoil is described using six digits in the following sequence: The number "6" indicating the series. One digit describing the distance of the minimum pressure area in tenths of the chord. The subscript digit gives the range of lift coefficient in tenths above and below the design lift coefficient in which favorable pressure gradients exist on both surfaces. A hyphen. One digit describing the design lift coefficient in tenths. Two digits describing the maximum thickness as percent of chord. 
"a=" followed by a decimal number describing the fraction of chord over which laminar flow is maintained. a=1 is the default if no value is given. For example, the NACA 654-415, has the minimum pressure placed at 50% of the chord, has a maximum thickness of 15% of the chord, design lift coefficient of 0.4 and maintains laminar flow for lift coefficients between 0 and 0.8. 7-series Further advancement in maximizing laminar flow achieved by separately identifying the low-pressure zones on upper and lower surfaces of the airfoil. The airfoil is described by seven digits in the following sequence: The number "7" indicating the series. One digit describing the distance of the minimum pressure area on the upper surface in tenths of the chord. One digit describing the distance of the minimum pressure area on the lower surface in tenths of the chord. One letter referring to a standard profile from the earlier NACA series. One digit describing the lift coefficient in tenths. Two digits describing the maximum thickness as percent of chord. For example, the NACA 712A315 has the area of minimum pressure 10% of the chord back on the upper surface and 20% of the chord back on the lower surface, uses the standard "A" profile, has a lift coefficient of 0.3, and has a maximum thickness of 15% of the chord. 8-series Supercritical airfoils designed to independently maximize laminar flow above and below the wing. The numbering is identical to the 7-series airfoils except that the sequence begins with an "8" to identify the series. See also Vought V-173 NACA cowling NACA duct References External links UIUC Airfoil Coordinate Database coordinates for nearly 1,600 airfoils John Dreese's NACA airfoil coordinate generation program Works on Windows XP, 7 and 8. NACA Airfoil Series NASA website feature on NACA airfoils Airfoil Interactive WebApp Aerodynamics Aircraft wing design Airfoil Numerical Analysis of NACA Airfoil 0012 at Different Attack Angles and Obtaining its Aerodynamic Coefficients NACA 4 & 5 digits, 16 series airfoil generator
NACA airfoil
[ "Chemistry", "Engineering" ]
2,015
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
4,074,964
https://en.wikipedia.org/wiki/NGC%204656%20and%20NGC%204657
NGC 4656/57 is a highly warped edge-on barred spiral galaxy located in the local universe 30 million light years away from Earth in the constellation Canes Venatici. This galaxy is sometimes called the Hockey Stick Galaxy or the Crowbar Galaxy. Its unusual shape is thought to be due to an interaction between NGC 4656, NGC 4631, and NGC 4627. The galaxy is a member of the NGC 4631 Group. A luminous blue variable in "super-outburst" was discovered in NGC 4656/57 on March 21, 2005. See also Antennae Galaxies References External links Hockey Stick (NGC 4656) SEDS – NGC 4656 Canes Venatici NGC 4631 Group 4656 07907 42863 Interacting galaxies Barred spiral galaxies
NGC 4656 and NGC 4657
[ "Astronomy" ]
156
[ "Canes Venatici", "Constellations" ]
4,075,126
https://en.wikipedia.org/wiki/Glycyrrhizol
Glycyrrhizol A is a prenylated pterocarpan and an isoflavonoid derivative. It is a compound isolated from the root of the Chinese licorice plant (Glycyrrhiza uralensis). It may have in vitro antibacterial properties. In one study, the strongest antibacterial activity was observed against Streptococcus mutans, an organism known to cause tooth decay in humans. References Pterocarpans Antibiotics Hydroxyarenes Methoxy compounds
Glycyrrhizol
[ "Biology" ]
114
[ "Antibiotics", "Biocides", "Biotechnology products" ]
4,075,340
https://en.wikipedia.org/wiki/GpsOne
gpsOne is the brand name for a cellphone chipset manufactured by Qualcomm for mobile phone tracking. It uses A-GPS (Assisted GPS) to locate the phone more quickly, accurately and reliably than by GPS alone, especially in places with poor GPS reception. Current uses gpsOne is primarily used today for Enhanced 911 (E911) service, allowing a cell phone to relay its location to emergency dispatchers, thus overcoming one of the traditional shortcomings of cellular phone technology. Using a combination of GPS satellite signals and the cell sites themselves, gpsOne plots the location with greater accuracy than traditional GPS systems in areas where satellite reception is problematic due to buildings or terrain. Geotagging - addition of location information to the pictures taken with a camera phone. Location-based information delivery (e.g. local weather and traffic alerts). Verizon Wireless uses gpsOne to support its VZ Navigator automotive navigation system. Unlike AT&T and T-Mobile, Verizon disables gpsOne for other applications in some phones; on other carriers' systems, gpsOne can be used with any third-party application. Future uses Some vendors are also looking at GPS phone technology as a method of implementing location-based solutions, such as: Employers can track vehicles or employees, allowing quick response from the nearest representative. Restaurants, clubs, theatres and other venues could relay SMS special offers to patrons within a certain range. When using a phone as a 'wallet' and making e-payments, the user's location can be verified as an additional layer of security against cloning. For example, John Doe in AverageTown USA is most likely not purchasing a candy bar from a machine at LAX if he was just logged paying for a subway token in NYC and calling his wife from the Empire State Building. Location-based games. Functions gpsOne can operate in four modes: Standalone - The handset has no connection to the network, and uses only the GPS satellite signals it can currently receive to try to establish a location. Mobile Station Based (MSB) - The handset is connected to the network, and uses the GPS signals and a location signal from the network. Mobile Station Assisted (MSA) - The handset is connected to the network, uses GPS signals and a location signal, then relays its 'fix' to the server, which then uses the signal strength from the phone to the network towers to further refine the user's position. Users can still maintain voice communication in this scenario, but not Internet/network service (e.g. Web browser, IM, streaming TV, etc.). Mobile Station Hybrid - Same as above, but network functionality remains; normally available only in areas with exceptional coverage. Adoption Since its introduction in 2000, the gpsOne chipset has been adopted by 40+ vendors, and is used in more than 250 cellphone models worldwide. More than 300 million gpsOne-enabled handsets are currently on the market, making it one of the most widely deployed solutions. External links Product website The gpsOne XTRA MSB assistance data format: Vinnikov & Pshehotskaya (2020): Deciphering of the gpsOne File Format for Assisted GPS Service, Advances in Intelligent Systems and Computing 1184:377-386 Vinnikov, Pshehotskaya and Gritsevich (2021): Partial Decoding of the GPS Extended Prediction Orbit File, 2021 29th Conference of Open Innovations Association Mobile telecommunications Global Positioning System Qualcomm
GpsOne
[ "Technology", "Engineering" ]
713
[ "Wireless locating", "Mobile telecommunications", "Aircraft instruments", "Aerospace engineering", "Global Positioning System" ]
4,075,450
https://en.wikipedia.org/wiki/Actor%20model%20middle%20history
In computer science, the Actor model, first published in 1973, is a mathematical model of concurrent computation. This article reports on the middle history of the Actor model in which major themes were initial implementations, initial applications, and development of the first proof theory and denotational model. It is the follow-on article to Actor model early history which reports on the early history of the Actor model which concerned the basic development of the concepts. The article Actor model later history reports on developments after the ones reported in this article. Proving properties of Actor systems Carl Hewitt [1974] published the principle of Actor induction, which is: Suppose that an Actor x has property P when it is created. Further suppose that if x has property P when it processes a message, then it has property P when it processes the next message. Then x always has the property P. In his doctoral dissertation, Aki Yonezawa developed further techniques for proving properties of Actor systems including those that make use of migration. Russ Atkinson and Carl Hewitt developed techniques for proving properties of Serializers that are guardians of shared resources. Gerry Barber's doctoral dissertation concerned reasoning about change in knowledgeable office systems. Garbage collection Garbage collection (the automatic reclamation of unused storage) was an important theme in the development of the Actor model. In his doctoral dissertation, Peter Bishop developed an algorithm for garbage collection in distributed systems. Each system kept lists of links of pointers to and from other systems. Cyclic structures were collected by incrementally migrating Actors (objects) onto other systems which had their addresses until a cyclic structure was entirely contained in a single system where the garbage collector could recover the storage. Henry Baker developed an algorithm for real-time garbage collection in his doctoral dissertation. The fundamental idea was to interleave collection activity with construction activity so that there would not have to be long pauses while collection takes place. See incremental garbage collection. Henry Lieberman and Carl Hewitt [1983] developed a real-time garbage collection algorithm based on the lifetimes of Actors (objects). The fundamental idea was to allocate Actors (objects) in generations so that only the latest generations would have to be examined during a garbage collection. See generational garbage collection. Actor programming languages Henry Lieberman, Dan Theriault, et al. developed Act1, an Actor programming language. Subsequently, for his master's thesis, Dan Theriault developed Act2. These early proof-of-concept languages were rather inefficient and not suitable for applications. In his doctoral dissertation, Ken Kahn developed Ani, which he used to develop several animations. Bill Kornfeld developed the Ether programming language for the Scientific Community Metaphor in his doctoral dissertation. William Athas and Nanette Boden [1988] developed Cantor which is an Actor programming language for scientific computing. Jean-Pierre Briot [1988, 1999] developed means to extend Smalltalk 80 for Actor computations. Christine Tomlinson, Mark Scheevel, Greg Lavender, Greg Meredith, et al. [1995] at MCC developed an Actor programming language for InfoSleuth agents in Rosette. Carl Hewitt, Beppe Attardi, and Henry Lieberman [1979] developed proposals for delegation in message passing. 
This gave rise to the so-called inheritance anomaly controversy in object-oriented concurrent programming languages [Satoshi Matsuoka and Aki Yonezawa 1993, Giuseppe Milicia and Vladimiro Sassone 2004]. A denotational model of Actor systems In his doctoral dissertation, Will Clinger developed the first denotational model of Actor systems. See denotational semantics of the Actor model. References Carl Hewitt, et al. Actor Induction and Meta-evaluation Conference Record of ACM Symposium on Principles of Programming Languages, January 1974. Peter Bishop Very Large Address Space Modularly Extensible Computer Systems MIT EECS Doctoral Dissertation. June 1977. Aki Yonezawa Specification and Verification Techniques for Parallel Programs Based on Message Passing Semantics MIT EECS Doctoral Dissertation. December 1977. Henry Baker. Actor Systems for Real-Time Computation MIT EECS Doctoral Dissertation. January 1978. Ken Kahn. A Computational Theory of Animation MIT EECS Doctoral Dissertation. August 1979. Carl Hewitt, Beppe Attardi, and Henry Lieberman. Delegation in Message Passing Proceedings of First International Conference on Distributed Systems Huntsville, AL. October 1979. Carl Hewitt and Russ Atkinson. Specification and Proof Techniques for Serializers IEEE Journal on Software Engineering. January 1979. Russ Atkinson. Automatic Verification of Serializers MIT Doctoral Dissertation. June, 1980. Bill Kornfeld and Carl Hewitt. The Scientific Community Metaphor IEEE Transactions on Systems, Man, and Cybernetics. January 1981. Henry Lieberman. Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1 MIT AI memo 626. May 1981. Henry Lieberman. A Preview of Act 1 MIT AI memo 625. June 1981. Jerry Barber. Reasoning about Change in Knowledgeable Office Systems MIT EECS Doctoral Dissertation. August 1981. Bill Kornfeld. Parallelism in Problem Solving MIT EECS Doctoral Dissertation. August 1981. Will Clinger. Foundations of Actor Semantics MIT Mathematics Doctoral Dissertation. June 1981. Daniel Theriault. A Primer for the Act-1 Language MIT AI memo 672. April 1982. Henry Lieberman and Carl Hewitt. A real Time Garbage Collector Based on the Lifetimes of Objects CACM June 1983. Daniel Theriault. Issues in the Design and Implementation of Act 2 MIT AI technical report 728. June 1983. Henry Lieberman. An Object-Oriented Simulator for the Apiary Conference of the American Association for Artificial Intelligence, Washington, D. C., August 1983 Carl Hewitt and Peter de Jong. Analyzing the Roles of Descriptions and Actions in Open Systems Proceedings of the National Conference on Artificial Intelligence. August 1983. Jean-Pierre Briot. From objects to actors: Study of a limited symbiosis in Smalltalk-80 Rapport de Recherche 88-58, RXF-LITP, Paris, France, September 1988. William Athas and Nanette Boden Cantor: An Actor Programming System for Scientific Computing in Proceedings of the NSF Workshop on Object-Based Concurrent Programming. 1988. Special Issue of SIGPLAN Notices. Satoshi Matsuoka and Aki Yonezawa. Analysis of Inheritance Anomaly in Object-Oriented Concurrent Programming Languages Research Directions in Concurrent Object-Oriented Programming MIT Press. 1993. Darrell Woelk. Developing InfoSleuth Agents Using Rosette: An Actor Based Language Proceedings of the CIKM '95 Workshop on Intelligent Information Agents. 1995. Jean-Pierre Briot. Acttalk: A framework for object-oriented concurrent programming-design and experience 2nd France-Japan workshop. 1999. 
Giuseppe Milicia and Vladimiro Sassone. The Inheritance Anomaly: Ten Years After SAC. Nicosia, Cyprus. March 2004. Actor model (computer science) History of computing
Actor model middle history
[ "Technology" ]
1,391
[ "Computers", "History of computing" ]
4,075,543
https://en.wikipedia.org/wiki/Indiscernibles
In mathematical logic, indiscernibles are objects that cannot be distinguished by any property or relation defined by a formula. Usually only first-order formulas are considered. Examples If a, b, and c are distinct and {a, b, c} is a set of indiscernibles, then, for example, for each binary formula φ, we must have φ(a, b) ↔ φ(b, a) ↔ φ(a, c) ↔ φ(c, a) ↔ φ(b, c) ↔ φ(c, b). Historically, the identity of indiscernibles was one of the laws of thought of Gottfried Leibniz. Generalizations In some contexts one considers the more general notion of order-indiscernibles, and the term sequence of indiscernibles often refers implicitly to this weaker notion. In our example of binary formulas, to say that the triple (a, b, c) of distinct elements is a sequence of indiscernibles implies φ(a, b) ↔ φ(a, c) ↔ φ(b, c) and φ(b, a) ↔ φ(c, a) ↔ φ(c, b). More generally, for a structure M with domain D and a linear ordering <, a set I ⊆ D is said to be a set of <-indiscernibles for M if for any finite subsets {a1, ..., an} and {b1, ..., bn} of I with a1 < ... < an and b1 < ... < bn, and any first-order formula φ of the language of M with n free variables, M ⊨ φ(a1, ..., an) if and only if M ⊨ φ(b1, ..., bn). Applications Order-indiscernibles feature prominently in the theory of Ramsey cardinals, Erdős cardinals, and zero sharp. See also Identity of indiscernibles Rough set References Citations Model theory
Indiscernibles
[ "Mathematics" ]
268
[ "Mathematical logic", "Model theory" ]
4,075,738
https://en.wikipedia.org/wiki/Actor%20model%20later%20history
In computer science, the Actor model, first published in 1973 , is a mathematical model of concurrent computation. This article reports on the later history of the Actor model in which major themes were investigation of the basic power of the model, study of issues of compositionality, development of architectures, and application to Open systems. It is the follow on article to Actor model middle history which reports on the initial implementations, initial applications, and development of the first proof theory and denotational model. Power of the Actor Model Investigations began into the basic power of the Actor model. Carl Hewitt [1985] argued that because of the use of Arbiters that the Actor model was more powerful than logic programming (see indeterminacy in concurrent computation). A family of Prolog-like concurrent message passing systems using unification of shared variables and data structure streams for messages were developed by Keith Clark, Hervé Gallaire, Steve Gregory, Vijay Saraswat, Udi Shapiro, Kazunori Ueda, etc. Some of these authors made claims that these systems were based on mathematical logic. However, like the Actor model, the Prolog-like concurrent systems were based on message passing and consequently were subject to indeterminacy in the ordering of messages in streams that was similar to the indeterminacy in arrival ordering of messages sent to Actors. Consequently Carl Hewitt and Gul Agha [1991] concluded that the Prolog-like concurrent systems were neither deductive nor logical. They were not deductive because computational steps did not follow deductively from their predecessors and they were not logical because no system of mathematical logic was capable of deriving the facts of subsequent computational situations from their predecessors Compositionality Compositionality concerns composing systems from subsystems. Issues of compositionality had proven to be serious limitations for previous theories of computation including the lambda calculus and Petri nets. E.g., two lambda expressions are not a lambda expression and two Petri nets are not a Petri net and cannot influence each other. In his doctoral dissertation Gul Agha addressed issues of compositionality in the Actor model. Actor configurations have receptionists that can receive messages from outside and may have the addresses of the receptionists of other Actor configurations. In this way two Actor configurations can be composed into another configuration whose subconfigurations can communicate with each other. Actor configurations have the advantage that they can have multiple Actors (i.e. the receptionists) which receive messages from outside without the disadvantage of having to poll to get messages from multiple sources (see issues with getting messages from multiple channels). Open Systems Carl Hewitt [1985] pointed out that openness was becoming a fundamental challenge in software system development. Open distributed systems are required to meet the following challenges: Monotonicity Once something is published in an open distributed system, it cannot be taken back. Pluralism Different subsystems of an open distributed system include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in open distributed systems. Unbounded nondeterminism Asynchronously, different subsystems can come up and go down and communication links can come in and go out between subsystems of an open distributed system. 
Therefore the time that it will take to complete an operation cannot be bounded in advance (see unbounded nondeterminism). Inconsistency Large distributed systems are inevitably inconsistent concerning their information about the information system interactions of their human users Carl Hewitt and Jeff Inman [1991] worked to develop semantics for Open Systems to address issues that had arisen in Distributed Artificial Intelligence. Carl Hewitt and Carl Manning [1994] reported on the development of Participatory Semantics for Open Systems. Computer Architectures Researchers at Caltech under the leadership of Chuck Seitz developed the Cosmic Cube which was one of the first message-passing Actor architectures. Subsequently at MIT researchers under the leadership of Bill Dally developed the J Machine. Attempts to relate Actor semantics to algebra and linear logic Kohei Honda and Mario Tokoro 1991, José Meseguer 1992, Ugo Montanari and Carolyn Talcott 1998, M. Gaspari and G. Zavattaro 1999 have attempted to relate Actor semantics to algebra. Also John Darlington and Y. K. Guo 1994 have attempted to relate linear logic to Actor semantics. However, none of the above formalisms addresses the crucial property of guarantee of service (see unbounded nondeterminism). Recent developments Recent developments in the Actor model have come from several sources. Hardware development is furthering both local and nonlocal massive concurrency. Local concurrency is being enabled by new hardware for 64-bit many-core microprocessors, multi-chip modules, and high performance interconnect. Nonlocal concurrency is being enabled by new hardware for wired and wireless broadband packet switched communications. Both local and nonlocal storage capacities are growing exponentially. These hardware developments pose enormous modelling challenges. Hewitt [Hewitt 2006a, 2006b] is attempting to use the Actor model to address these challenges. References Carl Hewitt. The Challenge of Open Systems Byte Magazine. April 1985. Reprinted in The foundation of artificial intelligence---a sourcebook Cambridge University Press. 1990. Carl Manning. Traveler: the actor observatory ECOOP 1987. Also appears in Lecture Notes in Computer Science, vol. 276. William Athas and Charles Seitz Multicomputers: message-passing concurrent computers IEEE Computer August 1988. William Dally and Wills, D. Universal mechanisms for concurrency PARLE 1989. W. Horwat, A. Chien, and W. Dally. Experience with CST: Programming and Implementation PLDI. 1989. Carl Hewitt. Towards Open Information Systems Semantics Proceedings of 10th International Workshop on Distributed Artificial Intelligence. October 23–27, 1990. Bandera, Texas. Akinori Yonezawa, Ed. ABCL: An Object-Oriented Concurrent System MIT Press. 1990. K. Kahn and Vijay A. Saraswat, "Actors as a special case of concurrent constraint (logic) programming", in SIGPLAN Notices, October 1990. Describes Janus. Carl Hewitt. Open Information Systems Semantics Journal of Artificial Intelligence. January 1991. Carl Hewitt and Jeff Inman. DAI Betwixt and Between: From "Intelligent Agents" to Open Systems Science IEEE Transactions on Systems, Man, and Cybernetics. November /December 1991. Carl Hewitt and Gul Agha. Guarded Horn clause languages: are they deductive and Logical? International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991. Kohei Honda and Mario Tokoro. An Object Calculus for Asynchronous Communication ECOOP 91. 
José Meseguer. Conditional rewriting logic as a unified model of concurrency in Selected papers of the Second Workshop on Concurrency and compositionality. 1992. William Dally, et al. The Message-Driven Processor: A Multicomputer Processing Node with Efficient Mechanisms IEEE Micro. April 1992. S. Miriyala, G. Agha, and Y.Sami. Visulatizing actor programs using predicate transition nets Journal of Visual Programming. 1992. Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott: A Foundation for Actor ComputationJournal of Functional Programming January 1993. Carl Hewitt and Carl Manning. Negotiation Architecture for Large-Scale Crisis Management AAAI-94 Workshop on Models of Conflict Management in Cooperative Problem Solving. Seattle, WA. August 4, 1994. John Darlington and Y. K. Guo: Formalizing Actors in Linear Logic International Conference on Object-Oriented Information Systems. Springer-Verlag. 1994. Carl Hewitt and Carl Manning. Synthetic Infrastructures for Multi-Agency Systems Proceedings of ICMAS '96. Kyoto, Japan. December 8–13, 1996. S. Frolund. Coordinating Distributed Objects: An Actor-Based Approach for Synchronization MIT Press. November 1996. W. Kim. ThAL: An Actor System for Efficient and Scalable Concurrent Computing PhD thesis. University of Illinois at Urbana Champaign. 1997. Mauro Gaspari and Gianluigi Zavattaro: An Algebra of Actors, Technical Report UBLCS-97-4, University of Bologna, May 1997 Ugo Montanari and Carolyn Talcott. Can Actors and Pi-Agents Live Together? Electronic Notes in Theoretical Computer Science. 1998. M. Gaspari and G. Zavattaro: An Algebra of Actors Formal Methods for Open Object Based Systems, 1999. N. Jamali, P. Thati, and G. Agha. An actor based architecture for customizing and controlling agent ensembles IEEE Intelligent Systems. 14(2). 1999. P. Thati, R. Ziaei, and G. Agha. A Theory of May Testing for Actors Formal Methods for Open Object-based Distributed Systems. March 2002. P. Thati, R. Ziaei, and G. Agha. A theory of may testing for asynchronous calculi with locality and no name matching Algebraic Methodology and Software Technology. Springer Verlag. September 2002. LNCS 2422. Gul Agha and Prasanna Thati. An Algebraic Theory of Actors and Its Application to a Simple Object-Based Language, From OO to FM (Dahl Festschrift) LNCS 2635. Springer-Verlag. 2004. Carl Hewitt. The repeated demise of logic programming and why it will be reincarnated What Went Wrong and Why: Lessons from AI Research and Applications. Technical Report SS-06-08. AAAI Press. March 2006b. Carl Hewitt What is Commitment? Physical, Organizational, and Social COIN@AAMAS. 2006a. Actor model (computer science) History of computing
Actor model later history
[ "Technology" ]
1,995
[ "Computers", "History of computing" ]
4,075,984
https://en.wikipedia.org/wiki/Vincent%20Schaefer
Vincent Joseph Schaefer (July 4, 1906 – July 25, 1993) was an American chemist and meteorologist who developed cloud seeding. On November 13, 1946, while a researcher at the General Electric Research Laboratory, Schaefer modified clouds in the Berkshire Mountains by seeding them with dry ice. While he was self-taught and never completed high school, he was issued 14 patents. Personal life Vincent J. Schaefer was the oldest son of Peter Aloysius Schaefer and Rose Agnes (Holtslag) Schaefer. He had two younger brothers, Paul and Carl, and two younger sisters, Gertrude and Margaret. The Schaefer family lived in Schenectady, New York, and due to his mother's health, starting in 1921 the family made summer trips to the Adirondack Mountains. Vincent Schaefer had a lifelong association with the Adirondacks, as well as interests in hiking, natural history, and archeology. In his youth he was the founder of a local tribe of the Lone Scouts and with some of his tribe mates wrote and printed a tribe paper called "Archaeological Research." Schaefer credited this publication with his introduction to many prominent individuals in the Schenectady area, including Dr. Willis Rodney Whitney of the General Electric Research Laboratory. During the late 1920s and early 1930s, Schaefer built up his personal library on natural history, science, and his other areas of interest and read a great deal. He also organized groups with those who shared his many interests — the Mohawk Valley Hiking Club in 1929, the Van Epps-Hartley Chapter of the New York Archaeological Association in 1931, and the Schenectady Wintersport Club (which established snow trains to ski slopes in the Adirondacks) in 1933–34. In 1931 Schaefer began work on creating the Long Path of New York (a hiking trail beginning near New York City and ending at Whiteface Mountain in the Adirondacks). During this period Schaefer also created adult education programs on natural history topics which gave him opportunities to speak in the community. Through these many activities Schaefer continued to expand his acquaintances, including John S. Apperson, an engineer at General Electric and a devout conservationist of the Adirondacks. Apperson introduced Schaefer to Irving Langmuir, a scientist at the GE Research Laboratory who was awarded a Nobel Prize in 1932 for his work in surface chemistry. Among other things, Langmuir shared Schaefer's love of skiing and the outdoors. During his retirement, Schaefer worked with photographer John Day on A Field Guide to the Atmosphere (1981), a publication in the Peterson Field Guide series. In addition to continuing his consulting work, Schaefer was in a position to devote much more of his time to some of his lifelong interests such as environmental issues, natural and local history. This included the writing of numerous articles and the delivering of many presentations concerning the natural environment of upstate New York and the human impact on it. He also devoted much of his time to the fight for the preservation of many wilderness areas and parks, such as the Mohonk Preserve, Vroman's Nose, and the Great Flats Aquifer. Schaefer's long-term interest in Dutch barns made it possible for him to assume the editorship of Dutch Barn Miscellany for a time and to build a scale model of a Dutch barn. He also did a lot of research on the original settler families of the Schenectady and Mohawk Valley areas. 
During his retirement, Schaefer reflected on his extraordinary life preparing timelines, an unpublished autobiography, and indexes to some of his research notebooks and film collections. Schaefer also attended to the disposition of his papers and library. He also worked on a project he entitled "Ancient Windows of the Earth." This involved the slicing of rocks thinly so as to create a translucent effect. When he mounted such pieces on lampshades or other objects, it created a stained-glass window effect from natural rock highlighting the rock's geologic history. As part of this project, Schaefer designed and built a 6' diameter window in memory of his parents for the Saint James Church in North Creek in the Adirondacks. Schaefer married Lois Perret on July 27, 1935. Until their deaths they lived on Schermerhorn Road in Schenectady, in a house Schaefer built with his brothers, which they called Woestyne South. Woestyne North was the name the Schaefers gave to their camp in the Adirondacks. The Schaefers had three children, Susan, Katherine, and James. Professional career General Electric In 1922, Schaefer's parents asked him to leave high school and go to work to supplement the family income. On the advice of his maternal uncles, Schaefer joined a four-year apprentice machinist course at General Electric. During the second year of his apprenticeship, Schaefer was granted a one-month leave to accompany Dr. Arthur C. Parker, New York State Archaeologist, on an expedition to central New York. As Schaefer was concluding the apprentice course in 1926 he was assigned to work at the machine shop at the General Electric Research Laboratory, where he worked for a year as a journeyman toolmaker. Somewhat discouraged by the work of a toolmaker, Schaefer sought to satisfy a desire to work outdoors and to travel by joining, initially through a correspondence course, the Davey Institute of Tree Surgery in Kent, Ohio, in 1927. After a brief period working in Michigan, Schaefer asked to be transferred back to the Schenectady area and for a while worked as an independent landscape gardener. Upon the advice of Robert Palmer, Superintendent of the GE Research Laboratory, in 1929 Schaefer declined an opportunity to enter into a partnership for a plant nursery and instead rejoined the machine shop at the Research Laboratory, this time as a model maker. At the Research Laboratory machine shop, Schaefer built equipment for Langmuir and his research associate, Katharine B. Blodgett. In 1932 Langmuir asked Schaefer to become his research assistant. Schaefer accepted and in 1933 began his research work with Langmuir, Blodgett, Whitney, and others at the Research Lab and throughout the General Electric organization. With Langmuir, Blodgett and others as well as by himself, Schaefer published many reports on the areas he studied, which included surface chemistry techniques, electron microscope techniques, polarization, the affinity of ice for various surfaces, protein and other monolayers, studies of protein films, television tube brightness, and submicroscopic particulates. An example of Scaefer's lasting contribution to surface science is the description in 1938 of a technique developed by him and Langmuir (later known as the Langmuir–Schaefer method) for the controlled transfer of a monolayer to a substrate, a modification of the Langmuir–Blodgett method. 
After his promotion to research associate in 1938, Schaefer continued to work closely with Langmuir on the many projects Langmuir obtained through his involvement on national advisory committees, particularly related to military matters in the years immediately before and during the Second World War. This work included research on gas mask filtration of smokes, submarine detection with binaural sound, and the formation of artificial fogs using smoke generators—a project which reached fruition at Vrooman's Nose in the Schoharie Valley with a demonstration for military observers. During his years as Langmuir's assistant, Langmuir allowed and encouraged Schaefer to carry on his own research projects. As an example of this, in 1940 Schaefer became known in his own right for the development of a method to make replicas of individual snowflakes using a thin plastic coating. This discovery brought him national publicity in popular magazines and an abundance of correspondence from individuals, including many students, seeking to replicate his procedure. In 1943, the focus of Schaefer's and Langmuir's research shifted to precipitation static, aircraft icing, ice nuclei, and cloud physics, and many of their experiments were carried out at Mount Washington Observatory in New Hampshire. In the summer of 1946 Schaefer found his experimental "cold box" too warm for some laboratory tests he wanted to perform. Determined to get on with his work, he located some "dry ice" (solid CO2) and placed it into the bottom of the "cold box." Creating a cloud with his breath he observed a sudden and heretofore unseen bluish haze that suddenly turned into millions of microscopic ice crystals that dazzled him in the strobe lit chamber. He had stumbled onto the very principle that was hidden in all previous experiments—the stimulating effect of a sudden change in heat/cold, humidity, in supercooled water spontaneously producing billions of ice nuclei. Through scores of repeated experiments he quickly developed a method to "seed" supercooled clouds with dry ice. In November 1946 Schaefer conducted a successful field test seeding a natural cloud by airplane—with dramatic ice and snow effect. The resulting publicity brought an abundance of new correspondence, this time from people and businesses making requests for snow and water as well as scientists around the world also working on weather modification to change local weather conditions for the better. Schaefer's discovery also led to debates over the appropriateness of tampering with nature through cloud seeding. In addition, the successful field test enabled Langmuir to obtain federal funding to support additional research in cloud seeding and weather modification by the GE Research Laboratory. Schaefer was coordinator of the laboratory portion of Project Cirrus while the Air Force and Navy supplied the aircraft and pilots to carry out field tests and to collect the data used at the Research Laboratory. Field tests were conducted in the Schenectady area as well as in Puerto Rico and New Mexico. When the military pilots working on Project Cirrus were assigned to duties in connection with the Korean War, GE recommended that Project Cirrus be discontinued after comprehensive reports were prepared of the project and the discoveries made. The final Project Cirrus report was issued in March 1953. 
Munitalp Foundation While Project Cirrus was winding down, Schaefer was approached by Vernon Crudge on behalf of the trustees of the Munitalp Foundation to work on Munitalp's meteorological research program. For a time, Schaefer worked for both the Research Laboratory and Munitalp, and in 1954 he left the Research Laboratory to become the Director of Research of Munitalp. At Munitalp, Schaefer worked with the U.S. Forest Service at the Priest River Experimental Forest in northern Idaho with Harry T. Gisborne, noted fire researcher, on Project Skyfire, a program to determine the uses of cloud seeding to affect the patterns of lightning in thunderstorms (and the resulting forest fires started by lightning). Project Skyfire had its roots in an association between the Forest Service and Schaefer begun in the early days of Project Cirrus. While at Munitalp Schaefer also worked on developing a mobile atmospheric research laboratory and time-lapse films of clouds. Schaefer left Munitalp in 1958, turning down an offer to move with the Foundation to Kenya, but he remained an adviser to Munitalp for several years after that. Scientific education After leaving Munitalp, Schaefer's career turned towards scientific education, and let him put his belief in the power of experimentation and observation over book-learning into practice. He worked with the American Meteorological Society and Natural Science Foundation on an educational film program and to develop the Natural Sciences Institute summer programs which gave high school students the opportunity to work with scientists and on their own to do field research and experimentation. From 1959 to 1961 Schaefer was director of the Atmospheric Science Center at the Loomis School in Connecticut. During the 1970s he organized and led annual winter expeditions for 8-10 research scientists to Yellowstone National Park where massive amounts of supercooled clouds were produced by the many geysers, including Old Faithful. There at negative 20-50 Fahrenheit conditions enabled the assembled researchers to perform numerous experiments using dry ice, silver iodide to convert the supercooled water to ice crystals at ground level. Temperature and ice crystal formations allowed first-hand observation of the full range of halo and corona optical effects. Atmospheric Sciences Research Center (ASRC), University at Albany, State University of New York From 1962 to 1968 the NSI program was continued with Schaefer's directorship under the auspices of the Atmospheric Sciences Research Center (ASRC) at the State University of New York at Albany (as the University at Albany, State University of New York was then known). During this period Schaefer also continued his consulting work for many companies, government agencies, and universities. These consulting activities spanned most of Schaefer's career, and extended beyond his retirement from ASRC in 1976. Schaefer helped found ASRC in 1960 and served as its Director of Research until 1966 when he became Director. Schaefer brought highly qualified atmospheric science researchers to ASRC, many of whom he had met through his work at GE and Munitalp. Bernard Vonnegut, Raymond Falconer and Duncan Blanchard were all veterans of Project Cirrus who joined Schaefer at ASRC. During his years at ASRC, in addition to the NSI summer programs, Schaefer led annual research expeditions to Yellowstone National Park for atmospheric scientists to work in the outdoor laboratory it provided each January. 
In the 1970s Schaefer's own research interests focused on solar energy, aerosols, gases, air quality, and pollution particles in the atmosphere. His work in some of these areas culminated in a three-part report on Air Quality on the Global Scale in 1978. In addition, during the 1970s Schaefer was an instructor in the American Association for the Advancement of Science Chautauqua short courses for science teachers. Publications (selected) The presence of ozone, nitric acid, nitrogen dioxide and ammonia in the atmosphere, Atmospheric Sciences Research Center, State University of New York, 1978. The air quality patterns of aerosols on the global scale, Atmospheric Sciences Research Center, State University of New York, 1976. Hailstorms and hailstones of the western Great Plains, Smithsonian Institution, 1961. The possibilities of modifying lightning storms in the northern Rockies, Northern Rocky Mountain Forest & Range Experiment Station, 1949. Heat requirements for instruments and airfoils during icing storms on Mt. Washington, General Electric Research Laboratory, 1946. The Use of high speed model propellers for studying de-icing coatings at the Mt. Washington Observatory, General Electric Research Laboratory, 1946. The Liquid water content of summer clouds on the summit of Mt. Washington, General Electric Research Laboratory, 1946. The Preparation and use of water sensitive coatings for sampling cloud particles, General Electric Research Laboratory, 1946. A Heated, vaned pitot tube and a recorder for measuring air speed under severe icing conditions, General Electric Research Laboratory, 1946. Fossilizing snowflakes, 1941. Serendipity in Science: Twenty Years at Langmuir University, An Autobiography by Vincent J Schaefer, ScD, Compiled and Edited by Don Rittner, Square Circle Press, Voorheesville, NY 2013 (405 pages, 15 Chapters, illustrations and B/W photographs) Patents Filed Apr 12, 1935-"Treatment of Materials" Filed Dec 6, 1954-"Coating for Electric Devices" Filed Apr 12, 1941-"Light-Dividing Element" Filed Jun 27, 1941-"Method of Producing Solids of Desired Configuration" Filed Jun 21, 1944-"Cathode Ray Tube" Filed Mar 24, 1943-"Method and Apparatus for Producing Aerosols"(with Irving Langmuir) Filed Sep 18, 1947-"Cloud Moisture Meter" Filed Nov 5, 1947-"Method of Making Electrical Indicators of Mechanical Expansion"(with Katharine Blodgett) Filed Jan 21, 1948-"Method of Crystal Formation and Precipitation"(with Bernard Vonnegut) Filed Nov 18, 1947-"Electrical Moisture Meter" Filed Jan 29, 1948-"Method of Crystal Formation and Precipitation" Filed Nov 5, 1947-"Electrical Indicator of Mechanical Expansion"(with Katharine Blodgett) Filed Mar 6, 1952-"Method and Apparatus for Detecting Minute Crystal Forming Particles" Filed Dec 6, 1954-"Method of Depositing a Silver Film" References Our History, GE Global Research. Accessed February 14, 2006 Weather Services in the US: 1644-1970, National Weather Service Weather Forecast Office. <Serendipity in Science: Twenty Years at Langmuir University, and autobiography (1993), Compiled and Edited by Don Rittner, Square Circle Press, Voorheesville, NY> External links Finding Aid for the Papers of Vincent J. Schaefer, M.E. Grenander Department of Special Collections and Archives , University at Albany Libraries. Weather Modification: The Physical basis for Cloud Seeding Manipulating the weather, CBC. 
1906 births 1993 deaths 20th-century American chemists American meteorologists General Electric people Scientists from Schenectady, New York University at Albany, SUNY faculty Weather modification Weather modification in North America
Vincent Schaefer
[ "Engineering" ]
3,590
[ "Planetary engineering", "Weather modification" ]
4,076,102
https://en.wikipedia.org/wiki/Flowerpot%20technique
The flowerpot technique is an animal testing technique used in sleep deprivation studies. It is designed to allow NREM sleep but prevent restful REM sleep. The test is usually performed with rodents. Technique During sleep deprivation studies, a laboratory rat is housed in a water-filled enclosure with a single small, dry platform (traditionally, an upside-down flowerpot in a bucket of water, from which the technique is named) just above the water line (>1 cm). While in NREM sleep, the rat retains muscle tone and can sleep on top of the platform. When the rat enters REM sleep, it loses muscle tone and either falls off the platform into the water, then climbs back up to avoid drowning and re-enters NREM sleep, or its nose becomes submerged, shocking it back into a wakened state. This allows the rat to rest physically and avoid fatigue, but deprives it of the REM sleep needed for normal mental function. The rat can then be subjected to physical and mental tasks and its performance compared with that of rested control rodents, or its tissue (particularly the brain) can be analyzed. See also Disk-over-water method References Animal testing techniques Ethically disputed research practices towards animals Sleeplessness and sleep deprivation
Flowerpot technique
[ "Chemistry", "Biology" ]
256
[ "Animal testing", "Behavior", "Sleeplessness and sleep deprivation", "Animal testing techniques", "Sleep" ]
4,076,182
https://en.wikipedia.org/wiki/System%20Contention%20Scope
In computer science, system contention scope is one of two thread-scheduling schemes used in operating systems. In this scheme, the kernel decides which kernel-level thread to schedule onto a CPU, and all threads in the system (as opposed to only the user-level threads within a single process, as in the process contention scope scheme) compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only system contention scope. References Operating system kernels Processor scheduling algorithms
System Contention Scope
[ "Technology" ]
111
[ "Operating system stubs", "Computing stubs" ]
4,076,593
https://en.wikipedia.org/wiki/Emergent%20design
Emergent design is a phrase coined by David Cavallo to describe a theoretical framework for the implementation of systemic change in education and learning environments. This examines how choice of design methodology contributes to the success or failure of education reforms through studies in Thailand. It is related to the theories of situated learning and of constructionist learning. The term constructionism was coined by Seymour Papert under whom Cavallo studied. Emergent design holds that education systems cannot adapt effectively to technology change unless the education is rooted in the existing skills and needs of the local culture. Applications The most notable non-theoretical application of the principles of emergent design is in the OLPC, whose concept work is supported in Cavallo's paper "Models of growth — towards fundamental change in learning environment". Emergent design in agile software development Emergent design is a consistent topic in agile software development, as a result of the methodology's focus on delivering small pieces of working code with business value. With emergent design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile release cycle, development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a simpler design with a smaller code base, which is more easily understood and maintained and naturally has less room for defects. Emergent design for social change Emergent design is also being used in social change movements, such as a group of Canadian NGOs that are bringing together a group of civic leaders to discuss how their work scales up and scales deep. A series of events are being organized by the Carold Institute and Ashoka Canada in 2013 through to 2015. The project goals currently include, but are not limited to: Engage emerging leaders in redefining models and systems that will support a vibrant and dynamic civil society in Canada. Strengthen and broaden the impact of their leadership Discover and disseminate new knowledge related to systems change and emerging systems Share key learning, insights, innovative strategies and new models of engagement among participants and with key stakeholders and sponsoring organizations References External links Models of Growth David Cavallo bio page David Cavallo MIT Media Lab page Emergent design and learning environments: building on indigenous knowledge Technology integration models Educational psychology Learning Systems engineering
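As a minimal sketch of the agile refactoring loop described above (the reporting domain, function names and data layout are invented purely for illustration), two features are delivered independently and only afterwards is their common validation factored out, letting that piece of the design emerge from code that already delivers value:

```python
import json

# Feature A, delivered first: export a report as CSV.
def export_csv(rows):
    valid = [r for r in rows if r.get("id") is not None]        # validation, version 1
    lines = ["id,amount"] + [f"{r['id']},{r['amount']:.2f}" for r in valid]
    return "\n".join(lines)

# Feature B, delivered next: export the same report as JSON.
def export_json(rows):
    valid = [r for r in rows if r.get("id") is not None]        # duplicated validation
    return json.dumps(valid)

# Only now, with both features working under test, the duplication is
# refactored into a shared helper -- the design element "emerges" from
# delivered functionality rather than being specified up front.
def validated(rows):
    return [r for r in rows if r.get("id") is not None]

def export_csv_refactored(rows):
    lines = ["id,amount"] + [f"{r['id']},{r['amount']:.2f}" for r in validated(rows)]
    return "\n".join(lines)

def export_json_refactored(rows):
    return json.dumps(validated(rows))

print(export_csv_refactored([{"id": 1, "amount": 9.5}, {"amount": 2.0}]))
```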
Emergent design
[ "Engineering" ]
538
[ "Systems engineering" ]
4,076,693
https://en.wikipedia.org/wiki/Hexadimethrine%20bromide
Hexadimethrine bromide (commercial brand name Polybrene) is a cationic polymer with several uses. In research, it is primarily used to increase the efficiency of transduction of certain cells with viruses (such as retrovirus or lentivirus) in cell culture. Hexadimethrine bromide acts by neutralizing the charge repulsion between virions and sialic acid on the cell surface. Use of Polybrene can improve transduction efficiency 100–1000-fold, although it can be toxic to some cell types. Polybrene in combination with DMSO shock is used to transfect some cell types such as NIH-3T3 and CHO. It has other uses, including a role in protein sequencing. Hexadimethrine bromide also reverses heparin anticoagulation during open-heart surgery, and it was the original reversal agent used in the 1950s and 1960s. It was replaced by protamine sulfate in 1969, after it was shown that hexadimethrine bromide could potentially cause kidney failure in dogs when used in doses in excess of its therapeutic range. It is still used as an alternative to protamine sulfate for patients who are sensitive to protamine, and at least one surgical center has gone back to using it as its standard reversal agent, since protamine sulfate causes at least a mild hypotensive reaction in most or all patients. Hexadimethrine bromide is also used in enzyme kinetic assays in order to reduce spontaneous activation of zymogens that are prone to autoactivation. References Organic polymers
Hexadimethrine bromide
[ "Chemistry" ]
328
[ "Polymer stubs", "Organic polymers", "Organic compounds", "Organic chemistry stubs" ]
4,076,831
https://en.wikipedia.org/wiki/Gentzen%27s%20consistency%20proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial. Gentzen's theorem Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε0 is the first ordinal ε such that ω^ε = ε, i.e. the limit of the sequence ω, ω^ω, ω^(ω^ω), ... It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε0. This can be done in various ways, one example provided by Cantor's normal form theorem. Gentzen's proof is based on the following assumption: for any quantifier-free formula A(x), if there is an ordinal a < ε0 for which A(a) is false, then there is a least such ordinal. Gentzen defines a notion of "reduction procedure" for proofs in Peano arithmetic. For a given proof, such a procedure produces a tree of proofs, with the given one serving as the root of the tree, and the other proofs being, in a sense, "simpler" than the given one. This increasing simplicity is formalized by attaching an ordinal < ε0 to every proof, and showing that, as one moves down the tree, these ordinals get smaller with every step. He then shows that if there were a proof of a contradiction, the reduction procedure would result in an infinite strictly descending sequence of ordinals smaller than ε0 produced by a primitive recursive operation on proofs corresponding to a quantifier-free formula. Relation to Hilbert's program and Gödel's theorem Gentzen's proof highlights one commonly missed aspect of Gödel's second incompleteness theorem. It is sometimes claimed that the consistency of a theory can only be proved in a stronger theory. Gentzen's theory obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic proves the consistency of first-order Peano arithmetic (PA) but does not contain PA. For example, it does not prove ordinary mathematical induction for all formulae, whereas PA does (since all instances of induction are axioms of PA).
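To make the ordinal notation mentioned above concrete, the following is a standard textbook statement of ε0 and of Cantor normal form (a restatement from general ordinal arithmetic, not text taken from Gentzen's paper):

\varepsilon_0 \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}, \qquad \omega^{\varepsilon_0} = \varepsilon_0 ,

and every ordinal 0 < \alpha < \varepsilon_0 has a unique Cantor normal form

\alpha \;=\; \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k, \qquad \alpha > \beta_1 > \beta_2 > \cdots > \beta_k, \quad c_i \in \mathbb{N}_{>0}.

Coding the exponents \beta_i recursively in the same way assigns a natural number to every ordinal below \varepsilon_0, which is one way to obtain the ordinal notation the proof requires.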
Gentzen's theory is not contained in PA, either, however, since it can prove a number-theoretical fact—the consistency of PA—that PA cannot. Therefore, the two theories are, in one sense, incomparable. That said, there are other, finer ways to compare the strength of theories, the most important of which is defined in terms of the notion of interpretability. It can be shown that, if one theory T is interpretable in another B, then T is consistent if B is. (Indeed, this is a large part of the point of the notion of interpretability.) And, assuming that T is not extremely weak, T itself will be able to prove this very conditional: If B is consistent, then so is T. Hence, T cannot prove that B is consistent, by the second incompleteness theorem, whereas B may well be able to prove that T is consistent. This is what motivates the idea of using interpretability to compare theories, i.e., the thought that, if B interprets T, then B is at least as strong (in the sense of 'consistency strength') as T is. A strong form of the second incompleteness theorem, proved by Pavel Pudlák, who was building on earlier work by Solomon Feferman, states that no consistent theory T that contains Robinson arithmetic, Q, can interpret Q plus Con(T), the statement that T is consistent. By contrast, Q+Con(T) does interpret T, by a strong form of the arithmetized completeness theorem. So Q+Con(T) is always stronger (in one good sense) than T is. But Gentzen's theory trivially interprets Q+Con(PA), since it contains Q and proves Con(PA), and so Gentzen's theory interprets PA. But, by Pudlák's result, PA cannot interpret Gentzen's theory, since Gentzen's theory (as just said) interprets Q+Con(PA), and interpretability is transitive. That is: If PA did interpret Gentzen's theory, then it would also interpret Q+Con(PA) and so would be inconsistent, by Pudlák's result. So, in the sense of consistency strength, as characterized by interpretability, Gentzen's theory is stronger than Peano arithmetic. Hermann Weyl made the following comment in 1946 regarding the significance of Gentzen's consistency result following the devastating impact of Gödel's 1931 incompleteness result on Hilbert's plan to prove the consistency of mathematics. It is likely that all mathematicians ultimately would have accepted Hilbert's approach had he been able to carry it out successfully. The first steps were inspiring and promising. But then Gödel dealt it a terrific blow (1931), from which it has not yet recovered. Gödel enumerated the symbols, formulas, and sequences of formulas in Hilbert's formalism in a certain way, and thus transformed the assertion of consistency into an arithmetic proposition. He could show that this proposition can neither be proved nor disproved within the formalism. This can mean only two things: either the reasoning by which a proof of consistency is given must contain some argument that has no formal counterpart within the system, i.e., we have not succeeded in completely formalizing the procedure of mathematical induction; or hope for a strictly "finitistic" proof of consistency must be given up altogether. When G. Gentzen finally succeeded in proving the consistency of arithmetic he trespassed those limits indeed by claiming as evident a type of reasoning that penetrates into Cantor's "second class of ordinal numbers." Stephen Cole Kleene made the following comment in 1952 on the significance of Gentzen's result, particularly in the context of the formalist program which was initiated by Hilbert.
The original proposals of the formalists to make classical mathematics secure by a consistency proof did not contemplate that such a method as transfinite induction up to ε0 would have to be used. To what extent the Gentzen proof can be accepted as securing classical number theory in the sense of that problem formulation is in the present state of affairs a matter for individual judgement, depending on how ready one is to accept induction up to ε0 as a finitary method. In contrast, Paul Bernays commented on whether Hilbert's confinement to finitary methods was too restrictive: It thus became apparent that the 'finite Standpunkt' is not the only alternative to classical ways of reasoning and is not necessarily implied by the idea of proof theory. An enlarging of the methods of proof theory was therefore suggested: instead of a reduction to finitist methods of reasoning, it was required only that the arguments be of a constructive character, allowing us to deal with more general forms of inference. Other consistency proofs of arithmetic Gentzen's first version of his consistency proof was not published during his lifetime because Paul Bernays had objected to a method implicitly used in the proof. The modified proof, described above, was published in 1936 in Mathematische Annalen. Gentzen went on to publish two more consistency proofs, one in 1938 and one in 1943. All of these are contained in Gentzen's collected papers. Kurt Gödel reinterpreted Gentzen's 1936 proof in a lecture in 1938 in what came to be known as the no-counterexample interpretation. Both the original proof and the reformulation can be understood in game-theoretic terms. In 1940 Wilhelm Ackermann published another consistency proof for Peano arithmetic, also using the ordinal ε0. Another consistency proof of arithmetic was published by I. N. Khlodovskii in 1959. Work initiated by Gentzen's proof Gentzen's proof is the first example of what is called proof-theoretic ordinal analysis. In ordinal analysis one gauges the strength of theories by measuring how large the (constructive) ordinals are that can be proven to be well-ordered, or equivalently, for how large a (constructive) ordinal transfinite induction can be proven. A constructive ordinal is the order type of a recursive well-ordering of natural numbers. In this language, Gentzen's work establishes that the proof-theoretic ordinal of first-order Peano arithmetic is ε0. Laurence Kirby and Jeff Paris proved in 1982 that Goodstein's theorem cannot be proven in Peano arithmetic. Their proof was based on Gentzen's theorem. Notes References Gentzen's 1936 and 1938 papers are translated as "The consistency of arithmetic" and "New version of the consistency proof for elementary number theory" in M. E. Szabo (ed.), The Collected Papers of Gerhard Gentzen (1969), an English translation of Gentzen's papers. Metatheorems Proof theory
Gentzen's consistency proof
[ "Mathematics" ]
2,198
[ "Mathematical logic", "Proof theory" ]
4,077,356
https://en.wikipedia.org/wiki/Neem%20cake
Neem cake organic manure is the by-product obtained from the cold pressing of neem tree fruits and kernels and from the solvent extraction process for neem oil. It is a potential source of organic manure under the Bureau of Indian Standards, Specification No. 8558. Neem has demonstrated considerable potential as a fertilizer. For this purpose, neem cake and neem leaves are especially promising. Puri (1999), in his book Neem: The Divine Tree Azadirachta indica, has given details about neem seed cake as a manure and nitrification inhibitor. The author has described that, after processing, neem cake can be used for partial replacement of poultry and cattle feed. Components Neem cake has an adequate quantity of NPK in organic form for plant growth. Being a totally botanical product, it contains 100% natural NPK content and other essential micronutrients such as N (Nitrogen 2.0% to 5.0%), P (Phosphorus 0.5% to 1.0%), K (Potassium 1.0% to 2.0%), Ca (Calcium 0.5% to 3.0%), Mg (Magnesium 0.3% to 1.0%), S (Sulphur 0.2% to 3.0%), Zn (Zinc 15 ppm to 60 ppm), Cu (Copper 4 ppm to 20 ppm), Fe (Iron 500 ppm to 1200 ppm), Mn (Manganese 20 ppm to 60 ppm). It is rich in both sulphur compounds and bitter limonoids. According to research calculations, neem cake seems to make soil more fertile due to an ingredient that blocks soil bacteria from converting nitrogenous compounds into nitrogen gas. It is a nitrification inhibitor and prolongs the availability of nitrogen to both short-duration and long-duration crops. Use as a fertilizer Neem cake organic manure protects plant roots from nematodes, soil grubs and termites, probably due to its residual limonoid content. It also acts as a natural fertilizer with pesticidal properties. Neem cake is widely used in India to fertilize paddy, cotton and sugarcane. Usage of neem cake has shown an increase in the dry matter in Tectona grandis (teak), Acacia nilotica (gum arabic), and other forest trees. Neem seed cake can also reduce alkalinity in soil, as it produces organic acids upon decomposition. Being totally natural, it is compatible with soil microbes and rhizosphere microflora and hence ensures fertility of the soil. Neem cake improves the organic matter content of the soil, helps improve soil texture, water holding capacity, and soil aeration for better root development. Pest control Neem cake is effective in the management of insects and pests. The bitter principles of the seed and cake have been reported to have seven types of activity: (a) antifeedant, (b) attractant, (c) repellent, (d) insecticide, (e) nematicide, (f) growth disruptor and (g) antimicrobial. The cake contains salannin, nimbin, azadirachtin, meliantriol and azadiradione as the major components. Of these, azadirachtin and meliantriol are used as locust antifeedants while salannin is used as an antifeedant for the housefly. References General references Schmutterer, H. (Editor) (2002) The Neem Tree: Source of Unique Natural Products for Integrated Pest Management, Medicine, Industry And Other Purposes (Hardcover), 2nd Edition, Weinheim, Germany: VCH Verlagsgesellschaft. Tewari, D. N. (1992), Monograph on neem (Azadirachta indica A. Juss.). Dehra Dun, India: International Book Distributors. pp. 123–128 Vietmeyer, N. D. (Director) (1992), Neem: A Tree for Solving Global Problems. Report of an ad hoc panel of the Board on Science and Technology for International Development, National Research Council, Washington, DC, USA: National Academy Press. pp. 74–75. Puri, H.S. (1999) Neem: The Divine Tree.
Azadirachta indica. Harwood Academic Publishers, Amsterdam. See also Arid Forest Research Institute (AFRI) Neem Neem oil Azadirachtin Organic farming Plant toxin insecticides
Neem cake
[ "Chemistry" ]
956
[ "Plant toxin insecticides", "Chemical ecology" ]
8,588,347
https://en.wikipedia.org/wiki/Arens%E2%80%93Fort%20space
In mathematics, the Arens–Fort space is a special example in the theory of topological spaces, named for Richard Friederich Arens and M. K. Fort, Jr. Definition The Arens–Fort space is the topological space (X, τ), where X is the set of ordered pairs (m, n) of non-negative integers. A subset U of X is open, that is, belongs to τ, if and only if: U does not contain the point (0, 0), or U contains (0, 0) and also all but a finite number of points of all but a finite number of columns, where a column is a set of points {(m, n) : n = 0, 1, 2, ...} with m fixed. In other words, an open set is only "allowed" to contain (0, 0) if only a finite number of its columns contain significant gaps, where a gap in a column is significant if it omits an infinite number of points. Properties It is: Hausdorff, regular, normal. It is not: second-countable, first-countable, metrizable, compact, sequential, Fréchet–Urysohn. There is no sequence in X ∖ {(0, 0)} that converges to (0, 0). However, there is a sequence in X ∖ {(0, 0)} such that (0, 0) is a cluster point of the sequence. See also References Topological spaces
Arens–Fort space
[ "Mathematics" ]
214
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
8,588,746
https://en.wikipedia.org/wiki/Reactable
The Reactable is an electronic musical instrument with a tabletop tangible user interface that was developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain by Sergi Jordà, Marcos Alonso, Martin Kaltenbrunner and Günter Geiger. In late 2010, a mobile version of the Reactable was released for iOS. Basic operation The Reactable is a round translucent table, used in a darkened room, and appears as a backlit display. By placing blocks called tangibles on the table, and interfacing with the visual display via the tangibles or fingertips, a virtual modular synthesizer is operated, creating music or sound effects. Tangibles There are various types of tangibles representing different modules of an analog synthesizer. Audio frequency VCOs, LFOs, VCFs, and sequencers are some of the commonly used tangibles. There are also tangibles that affect other modules: one called radar is a periodic trigger, and another called tonalizer limits a VCO to the notes of a musical scale. Using tangibles with the display The table itself is the display. As a tangible is placed on the table, various animated symbols appear, such as waveforms, circles, circular grids, or sweeping lines. Some symbols merely show what the particular tangible is doing; others can be used by fingertip to control the respective module. Example of operation If a VCO tangible is placed on the table, a VCO module is added to the virtual synthesizer. In the display, a waveform will appear between the tangible and the "output" (a bright spot at the center of the table), and a circle appears around the tangible which allows fingertip control of the amplitude of the waveform. Additionally, in this example the tangible can be rotated by hand to change the frequency. Placing a filter tangible between the VCO and the output causes the VCO's waveform to connect to the filter, and the filter waveform to connect to the output. If an LFO tangible is placed near the VCO, a waveform will then appear connecting those two, and the LFO will modulate the VCO. Structure The main user interface consists of a translucent table. Underneath the table is a video camera, aimed at the underside of the table and inputting video to a personal computer. There is also a video projector under the table, also connected to the computer, projecting video onto the underside of the table top that can be seen from the upper side as well. There is an audio engine based on Pure Data and SuperCollider. Placed onto the table are the tangibles that have fiducials attached to their underside which are seen through the table by the camera. The fiducials are printed black-and-white images, consisting of circles and dots in varying patterns, optimized for use by reacTIVision. reacTIVision then uses the fiducials to understand the function of a particular tangible. Most of the tangibles are flat, with one fiducial on the underside. Some other tangibles are cubes, with fiducials attached to several sides, allowing those tangibles to serve multiple functions. Currently, there are two versions of the Reactable: Reactable Live! and the Reactable Experience. Reactable Live! is a smaller, more portable version designed for professional musicians. The Reactable Experience is more like the original Reactable, and is suited for installations in public spaces.
reacTIVision The live video stream received from a digital video camera is processed by the open-source computer vision software called reacTIVision, originally developed by Martin Kaltenbrunner and Ross Bencina for the Reactable project. reacTIVision detects Cartesian and rotational placement of fiducials on the table surface, then emits the specially designed Open Sound Control-based network protocol called TUIO, which communicates with the actual synthesizer and visualization software that outputs to the video projector. reacTIVision is also capable of multi-touch fingertip tracking. Presentations The Reactable has been presented and performed at various festivals and conferences such as Ars Electronica, Sónar, NIME and SIGGRAPH. Over the years, the Reactable team has showcased more than 150 presentations and concerts in more than 30 countries around the world. Icelandic singer Björk is perhaps the first musician outside of the select presentations and demonstrations to use a Reactable in live performance. Björk's 2007 world tour supporting her 2007 release Volta used the instrument in several songs including "Declare Independence"; Björk's inaugural live use of the instrument took place at the Coachella Valley Music and Arts Festival on April 27, 2007. Since the split into two models, the Reactable Experience has been on display in many exhibitions. Some of these include: INTECH, Discovery World, Museum of Science and Industry, Sub Mix Pro, ZKM, and Game Science Center Berlin. During the 2010 Winter Olympics in Vancouver, the Reactable was featured at the CODE festival. Throughout February, this event featured interactive media paired with sports and music. In March 2010, a Reactable was installed at Discovery Place in Charlotte, North Carolina, as part of the 'Think it up' exhibition, which opened to the public on May 1, 2010. From July 2, 2010, until October 1, 2010, a Reactable Experience was on display in the Science Gallery in Trinity College Dublin, Ireland, as part of their 'BioRhythm' exhibition. There is a Reactable at the Centre des sciences de Montréal in Montreal, Canada. In 2011 the band Nero featured a Reactable in the music video of their single "Promises". Since March 2011, a Reactable has been on display in the Copernicus Science Centre, Warsaw, Poland. It is part of the "Re:Generation" exhibition and is available for use by visitors. British band Coldplay used a Reactable during a performance of their song "Midnight" at the iTunes Festival at SXSW on March 11, 2014. Awards The Reactable received much attention from bloggers and was featured in major TV shows and popular magazines. Rolling Stone Magazine claimed that the Reactable was the hot instrument of 2007. The device has also received several awards including Prix Ars Electronica Golden Nica for Digital Musics, the MIDEM Hottest Music Biz Start-Up Award, and two D&AD Yellow Pencil Awards in 2008. See also Audiocubes Multi-touch Tenori-on Turntables References External links Reactable website Music Technology Group View Demo 1 View Demo 2 Electronic musical instruments Experimental musical instruments Surface computing User interfaces
Reactable
[ "Technology" ]
1,333
[ "User interfaces", "Interfaces" ]
8,589,701
https://en.wikipedia.org/wiki/TOTimal
A TOTimal is a drawing or picture of a fictitious animal used to stimulate tip-of-the-tongue (or TOT) events. TOTimals generally combine features of many different animals, creating a familiar feel, while still making it impossible to identify the animal. References Figures of speech Cognitive psychology
TOTimal
[ "Biology" ]
65
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
8,590,317
https://en.wikipedia.org/wiki/Data%20compression%20symmetry
Symmetry and asymmetry, in the context of data compression, refer to the time relation between compression and decompression for a given compression algorithm. If an algorithm takes the same time to compress a data archive as it does to decompress it, it is considered symmetrical. Note that compression and decompression, even for a symmetric algorithm, may not be perfectly symmetric in practice, depending on the devices the data is being copied to and from, and other factors such as latency and fragmentation on the device. Conversely, if the compression and decompression times of an algorithm are vastly different, it is considered asymmetrical. Uses Symmetric algorithms are typically used for media streaming protocols, as either the server taking too long to compress the data, or the client taking too long to decompress it, would lead to delays in the viewing of the data. Asymmetrical algorithms wherein the compression is faster than the decompression can be useful for backing up or archiving data, as in these cases data is typically stored much more often than it is retrieved. Asymmetrical algorithms are also used in audio compression, because decompression must happen in real time; otherwise playback might be interrupted. References Further reading Data compression
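As a rough illustration of this (a)symmetry, the following C sketch (illustrative only: zlib's DEFLATE is used as an example codec, and the buffer size and contents are arbitrary) times one compression and one decompression of the same buffer. DEFLATE at a high compression level is typically asymmetrical in the sense described above, with compression noticeably slower than decompression.

/* Time one compression and one decompression with zlib (link with -lz). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <zlib.h>

int main(void) {
    const uLong src_len = 16 * 1024 * 1024;   /* 16 MiB of repetitive data */
    Bytef *src = malloc(src_len);
    memset(src, 'A', src_len);

    uLongf comp_len = compressBound(src_len);
    Bytef *comp = malloc(comp_len);
    Bytef *back = malloc(src_len);
    uLongf back_len = src_len;

    clock_t t0 = clock();
    compress2(comp, &comp_len, src, src_len, Z_BEST_COMPRESSION);
    clock_t t1 = clock();
    uncompress(back, &back_len, comp, comp_len);
    clock_t t2 = clock();

    printf("compressed %lu bytes to %lu bytes\n",
           (unsigned long)src_len, (unsigned long)comp_len);
    printf("compress:   %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("decompress: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

    free(src); free(comp); free(back);
    return 0;
}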
Data compression symmetry
[ "Technology" ]
258
[ "Computing stubs" ]
8,590,426
https://en.wikipedia.org/wiki/Athermalization
Athermalization, in the field of optics, is the process of achieving optothermal stability in optomechanical systems. This is done by minimizing variations in optical performance over a range of temperatures. Optomechanical systems are typically made of several materials with different thermal properties. These materials compose the optics (refractive or reflective elements) and the mechanics (optical mounts and system housing). As the temperature of these materials changes, the volume and index of refraction will change as well, increasing strain and aberration content (primarily defocus). Compensating for optical variations over a temperature range is known as athermalizing a system in optical engineering. Material property changes Thermal expansion is the driving phenomenon for the extensive and intensive property changes in an optomechanical system. Extensive properties Extensive property changes, such as volume, alter the shape of optical and mechanical components. Systems are geometrically optimized for optical performance and are sensitive to components changing shape and orientation. While volume is a three-dimensional parameter, thermal changes can be modeled in a single dimension with linear expansion, assuming an adequately small temperature range. For example, glass manufacturer Schott provides the coefficient of linear thermal expansion for a temperature range of −30 °C to 70 °C. The change in length of a material is a function of the change in temperature with respect to the standard measurement temperature T0, typically room temperature (22 °C): L = L0 (1 + α ΔT), where L is the length of the material at temperature T, L0 is the length of the material at temperature T0, ΔT = T − T0 is the change in temperature, and α is the coefficient of linear thermal expansion. These equations describe how diameter, thickness, radius of curvature, and element spacing change as a function of temperature. Intensive properties The dominant intensive property change, in terms of optical performance, is the index of refraction. The refractive index of glass is a function of wavelength and temperature. There are multiple formulas that can be used to define the wavelength dependence, or dispersion, of a glass. Following the notation from Schott, the empirical Sellmeier equation is n²(λ) − 1 = B1 λ² / (λ² − C1) + B2 λ² / (λ² − C2) + B3 λ² / (λ² − C3), where λ is the wavelength and B1, B2, B3, C1, C2, and C3 are the Sellmeier coefficients. These coefficients can be found in glass catalogs provided by manufacturers and are usually valid from the near-ultraviolet to the near-infrared. For wavelengths beyond this range, it is necessary to know the material's transmittance with respect to wavelength. From the dispersion formula, the temperature dependence of the refractive index can be written Δn(λ, T) = ((n²(λ, T0) − 1) / (2 n(λ, T0))) (D0 ΔT + D1 ΔT² + D2 ΔT³ + (E0 ΔT + E1 ΔT²) / (λ² − λTK²)) and, by differentiation, dn(λ, T)/dT = ((n²(λ, T0) − 1) / (2 n(λ, T0))) (D0 + 2 D1 ΔT + 3 D2 ΔT² + (E0 + 2 E1 ΔT) / (λ² − λTK²)), where D0, D1, D2, E0, E1, and λTK are glass-dependent constants for an optic in vacuum. The power of an optic as a function of temperature can be written from the equations for extensive and intensive property changes, in addition to the lensmaker's equation φ = (n − 1) (1/R1 − 1/R2 + (n − 1) t / (n R1 R2)), where φ is the optical power, n is the refractive index, R1 and R2 are the radii of curvature, and t is the thickness of the lens. These equations assume spherical surfaces of curvature. If a system is not in vacuum, the index of refraction for air will vary with temperature and pressure according to the Ciddor equation, a modified version of the Edlén equation. Athermalization techniques To account for optical variations introduced by extensive and intensive property changes in materials, systems can be athermalized through material selection or feedback loops.
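Before turning to specific athermalization techniques, a short numeric sketch (not from the article; the index, dn/dT, CTE and lens geometry below are rough, BK7-like illustrative assumptions) shows how the linear-expansion and lensmaker relations above combine into a focal shift:

/* Estimate the change in focal length of a singlet with temperature:
 * radii and thickness scale by (1 + alpha*dT), index shifts by dn/dT*dT. */
#include <stdio.h>

/* Thick-lens power from the lensmaker's equation. */
static double lens_power(double n, double R1, double R2, double t) {
    return (n - 1.0) * (1.0 / R1 - 1.0 / R2 + (n - 1.0) * t / (n * R1 * R2));
}

int main(void) {
    double n0    = 1.5168;    /* refractive index at T0 (BK7-like)        */
    double dn_dT = 1.1e-6;    /* thermo-optic coefficient, 1/K (assumed)  */
    double alpha = 7.1e-6;    /* CTE of the glass, 1/K (assumed)          */
    double R1 = 0.100, R2 = -0.100, t = 0.005;   /* geometry in metres    */
    double dT = 40.0;         /* temperature rise, K                      */

    double phi0 = lens_power(n0, R1, R2, t);
    double s = 1.0 + alpha * dT;                 /* linear expansion      */
    double phi1 = lens_power(n0 + dn_dT * dT, R1 * s, R2 * s, t * s);

    printf("power at T0:          %.4f dioptres (f = %.3f mm)\n", phi0, 1000.0 / phi0);
    printf("power at T0 + %2.0f K:   %.4f dioptres (f = %.3f mm)\n", dT, phi1, 1000.0 / phi1);
    printf("focal-length shift:   %.4f mm\n", 1000.0 / phi1 - 1000.0 / phi0);
    return 0;
}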
Passive athermalization Passive athermalization works by choosing materials for a system that will compensate for the overall change in system performance. The simplest way to do this is to choose materials for the optics and mechanics which have low CTE and dn/dT values. This technique is not always possible as glass types are primarily chosen based on their refractive index and dispersion characteristics at operating temperature. Alternatively, mechanical materials can be chosen which have CTE values complementary to the change in focus introduced by the optics. A material with the preferred CTE is not always available, so two materials can be used in conjunction to effectively get the desired CTE value. Negative thermal expansion materials have recently increased the range of potential CTEs available, expanding passive athermalization options. Active athermalization When optical designs do not permit the selection of materials based on their thermal characteristics, passive athermalization may not be a viable technique. For example, the use of germanium in mid- to long-wave infrared systems is common because of its exceptional optical properties (high index of refraction and low dispersion). Unfortunately, germanium is also known for its large dn/dT value, which makes it difficult to passively athermalize. Because the primary aberration induced by temperature change is defocus, an optical element, group, or focal plane can be mechanically moved to refocus a system and account for thermal changes. Actively athermalized systems are designed with a feedback loop including a motor for the focusing mechanism and a temperature sensor to indicate the magnitude of the focus adjustment. Temperature gradients When a system is not in thermal equilibrium, it complicates the process of determining system performance. A common temperature gradient to encounter is an axial gradient. This involves temperatures changing in a lens as a function of the thickness of the lens, or often along the optical axis. In optical lens design it is standard notation for the optical axis to be co-linear with the Z-axis in Cartesian coordinates. A difference between the temperature of the first and second surface of a lens will cause the lens to bend. This affects each radius of curvature, thereby changing the optical power of the lens. The change in radius of curvature is a function of the temperature gradient across the optic and the thickness of the lens. Radial gradients are less predictable as they may cause the shape of curvature to change, making spherical surfaces aspherical. Determining temperature gradients in an optomechanical system can quickly become an arduous task, requiring an intimate understanding of the heat sources and sinks in a system. Temperature gradients are determined by heat flow and can be a result of conduction, convection, or radiation. Whether steady-state or transient solutions are adequate for an analysis is determined by operating requirements, system design, and the environment. It can be beneficial to leverage the computational power of the finite element method to solve the applicable heat flow equations to determine the temperature gradients of optical and mechanical components. External links Refractive index of air calculator Table of common material CTE values Information on glass from Schott Information on glass from Hoya Information on glass from Ohara Information on glass from CDGM References Optics Temperature
Athermalization
[ "Physics", "Chemistry" ]
1,315
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Optics", "SI base quantities", "Intensive quantities", " molecular", "Thermodynamics", "Atomic", "Wikipedia categories named after physical quantities", "...
8,590,459
https://en.wikipedia.org/wiki/Sahara%20pump%20theory
The Sahara pump theory is a hypothesis that explains how flora and fauna migrated between Eurasia and Africa via a land bridge in the Levant region. It posits that extended periods of abundant rainfall lasting many thousands of years (pluvial periods) in Africa are associated with a "wet-green Sahara" phase, during which larger lakes and more rivers existed. This caused changes in the flora and fauna found in the area. Migration along the river corridor was halted when, during a desert phase 1.8–0.8 million years ago (mya), the Nile ceased to flow completely and possibly flowed only temporarily in other periods due to the geologic uplift (Nubian Swell) of the Nile River region. Mechanism During periods of a wet or Green Sahara, the Sahara and Arabia become a savanna grassland and African flora and fauna become common. Following inter-pluvial arid periods, the Sahara area then reverts to desert conditions, usually as a result of the retreat of the West African Monsoon southwards. Evaporation exceeds precipitation, the level of water in lakes like Lake Chad falls, and rivers become dry wadis. Flora and fauna previously widespread as a result retreat northwards to the Atlas Mountains, southwards into West Africa, or eastwards into the Nile Valley and thence either southeast to the Ethiopian Highlands and Kenya or northeast across the Sinai into Asia. This separates populations of some of the species in areas with different climates, forcing them to adapt, possibly giving rise to allopatric speciation. Plio-Pleistocene The Plio-Pleistocene migrations to Africa included the Caprinae in two waves at 3.2 Ma and 2.7–2.5 Ma; Nyctereutes at 2.5 Ma, and Equus at 2.3 Ma. Hippotragus migrated at 2.6 Ma from Africa to the Siwaliks of the Himalayas. Asian bovids moved to Europe and to and from Africa. The primate Theropithecus experienced contraction and its fossils are found only in Europe and Asia, while Homo and Macaca settled wide ranges. 185,000–20,000 years ago Between about 133 and 122 thousand years ago (kya), the southern parts of the Saharan-Arabian Desert experienced the start of the Abbassia Pluvial, a wet period with increased monsoonal precipitation, around 100-200 mm/year. This allowed Eurasian biota to travel to Africa and vice versa. The growth of speleothems (which requires rainwater) was detected in Hol-Zakh, Ashalim, Even-Sid, Ma'ale-ha-Meyshar, Ktora Cracks, Nagev Tzavoa Cave. In Qafzeh and Es Skuhl caves, where at that time precipitation was 600–1000 mm/year, the remains of Qafzeh-Skhul type anatomically modern humans are dated from this period, but human occupation seems to end in the later arid period. The Red Sea coastal route was extremely arid before 140 and after 115 kya. Slightly wetter conditions appear at 90–87 kya, but it still was just one tenth the rainfall around 125 kya. Speleothems are detected only in Even-Sid-2. In the southern Negev Desert speleothems did not grow between 185–140 kya (MIS 6), 110–90 (MIS 5.4–5.2), nor after 85 kya nor during most of the interglacial period (MIS 5.1), the glacial period and Holocene. This suggests that the southern Negev was arid to hyper-arid in these periods. The coastal route around the western Mediterranean may have been open at times during the last glacial; speleothems grew in Hol-Zakh and in Nagev Tzavoa Caves. Comparison of speleothem formation with calcite horizons suggests that the wet periods were limited to only tens or hundreds of years. From 60–30 kya there were extremely dry conditions in many parts of Africa. 
Last Glacial Maximum An example of the Saharan pump has occurred after the Last Glacial Maximum (LGM). During the Last Glacial Maximum the Sahara desert was more extensive than it is now with the extent of the tropical forests being greatly reduced. During this period, the lower temperatures reduced the strength of the Hadley cell whereby rising tropical air of the Intertropical Convergence Zone (ITCZ) brings rain to the tropics, while dry descending air, at about 20 degrees north, flows back to the equator and brings desert conditions to this region. This phase is associated with high rates of wind-blown mineral dust, found in marine cores that come from the north tropical Atlantic. African humid period Around 12,500 BC, the amount of dust in the cores in the Bølling–Allerød phase suddenly plummets and shows a period of much wetter conditions in the Sahara, indicating a Dansgaard–Oeschger (DO) event (a sudden warming followed by a slower cooling of the climate). The moister Saharan conditions had begun about 12,500 BC, with the extension of the ITCZ northward in the northern hemisphere summer, bringing moist wet conditions and a savanna climate to the Sahara, which (apart from a short dry spell associated with the Younger Dryas) peaked during the Holocene thermal maximum climatic phase at 4000 BC when mid-latitude temperatures seem to have been between 2 and 3 degrees warmer than in the recent past. Analysis of Nile River deposited sediments in the delta also shows this period had a higher proportion of sediments coming from the Blue Nile, suggesting higher rainfall also in the Ethiopian Highlands. This was caused principally by a stronger monsoonal circulation throughout the sub-tropical regions, affecting India, Arabia and the Sahara. Lake Victoria only recently became the source of the White Nile and dried out almost completely around 15 kya. The sudden subsequent movement of the ITCZ southwards with a Heinrich event (a sudden cooling followed by a slower warming), linked to changes with the El Niño–Southern Oscillation cycle, led to a rapid drying out of the Saharan and Arabian regions, which quickly became desert. This is linked to a marked decline in the scale of the Nile floods between 2700 and 2100 BC. One theory proposed that humans accelerated the drying out period from 6,000–2,500 BC by pastoralists overgrazing available grassland. Human migration The Saharan pump has been used to date a number of waves of human migration from Africa, namely: Lower Paleolithic: Homo erectus (ssp. ergaster) into Southeast and East Asia, possibly twice, once with an Oldowan technology, which travelled as far as China and India to create the Chopper tradition, the second with Acheulian hand axes, only as far as the Indian Subcontinent. Middle Paleolithic: Homo heidelbergensis into the Middle East and Western Europe. Upper Paleolithic: Homo sapiens (possible early "Out of Africa" wave, receded before 80,000 years ago and eventually replaced by the "coastal migration" wave after 70,000 years ago) Epipaleolithic: Afroasiatic migration into the Levant associated with the aridity of the 8.2 kiloyear event Neolithic: 5.9 kiloyear event: sometimes associated with certain population movements of the Neolithic period Libu and Meshwesh migrations attacking Egypt at the end of the New Kingdom that ushered in the Bronze Age Collapse and saw chariots appear in the Sahara. 
See also Abbassia Pluvial African humid period Mousterian Pluvial North African climate cycles References Animal migration Historical geology History of the Sahara Paleoclimatology Peopling of the world Physical geography Prehistoric Africa Prehistoric migrations
Sahara pump theory
[ "Biology" ]
1,590
[ "Ethology", "Behavior", "Animal migration" ]
8,590,528
https://en.wikipedia.org/wiki/ATM25
ATM25 is an ATM (Asynchronous Transfer Mode) version wherein data is transferred at 25.6 Mbit/s over Category 3 cable. Background ATM25 has no particular distinctions from other ATM versions. However, ATM25 chipsets were, at one time, inexpensive in comparison to faster ATM chipsets, which made ATM technology available for small office/home office environments. However, these networks no longer have much potential for expansion, as Ethernet has become the first choice in this domain. ATM25 chips can typically achieve speeds close to the nominal 25.6 Mbit/s line rate and typically support around 32 devices in a single loop. ATM25 is still supported today by Cat 6a cables. The WAN connection side of ATM25 systems often takes place over a fast DSL variant such as RADSL. DSL is often considered in this case, as its technology is based on an ATM core. Criticisms In March 2001, Network World described ATM25 as a "solution looking for a problem": Classified mostly as a solution looking for a problem, ATM to the desktop failed before it really got rolling. While many folks thought the idea of providing all that bandwidth to user PCs was worthwhile, the idea of paying twice as much for the luxury compared with switched Ethernet didn't fly. ATM25 was criticised for being more expensive to use than 10BASE-T Ethernet. References Networking standards Asynchronous Transfer Mode
ATM25
[ "Technology", "Engineering" ]
278
[ "Networking standards", "Computer standards", "Asynchronous Transfer Mode", "Computer networks engineering" ]
8,590,799
https://en.wikipedia.org/wiki/ATTRIB
In computing, ATTRIB is a command in Intel ISIS-II, DOS, IBM OS/2, Microsoft Windows and ReactOS that allows the user to change various characteristics, or "attributes", of a computer file or directory. The command is also available in the EFI shell. History Several operating systems provided a set of modifiable file characteristics that could be accessed and changed through a low-level system call. For example, as of release MS-DOS 4.0, the first six bits of the file attribute byte indicated whether or not a file was read-only (as opposed to writeable), hidden, a system file, a volume label, a subdirectory, or if the file had been "archived" (with the bit being set if the file had changed since the last use of the BACKUP command). However, initial releases of the operating system did not provide a user-level method for reading or changing these values. The ATTRIB command for DOS was first included in version 3.0 of PC DOS, with functionality limited to changing the read-only attribute. Subsequent versions allowed the read-only, hidden, system and archive bits to be set. MS-DOS version 3.3 added the capability of recursive searching through subdirectories to display attributes of specified files. Digital Research DR DOS 6.0 and Datalight ROM-DOS also include an implementation of the command. The FreeDOS version was developed by Phil Brutsche and is licensed under the GPLv2. Uses Setting the read-only bit of a file provided only partial protection against inadvertent deletion: while commands such as del and erase would respect the attribute, other commands such as DELTREE did not. Changing the system attribute was not possible in early versions of Windows, thus requiring use of ATTRIB. Similarly, a system crash in early versions of Windows could lead to a situation where a temporary file had the read-only bit set and was additionally (and irrevocably) locked by the Windows OS; in this instance, booting into DOS (thus avoiding the Windows lock) and unsetting the read-only attribute with ATTRIB was the recommended way of deleting the file. Manipulating the archive bit allowed users to control which files were backed up using the BACKUP command. See also chattr, the equivalent on Unix and Linux cacls, the Windows NT access control list (ACL) utility List of DOS commands References Further reading External links attrib | Microsoft Docs Microsoft DOS ATTRIB command External DOS commands MSX-DOS commands OS/2 commands ReactOS commands Windows commands
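On modern Windows the same attribute bits that ATTRIB manipulates are reachable programmatically through the Win32 API. The following minimal C sketch (an illustration, not from the article; the file name is a made-up example) sets the read-only bit, roughly what "ATTRIB +R example.txt" does:

/* Set the read-only attribute on a file via the Win32 API.
 * Equivalent in spirit to "ATTRIB +R example.txt". */
#include <windows.h>
#include <stdio.h>

int main(void) {
    const char *path = "example.txt";           /* hypothetical file name */
    DWORD attrs = GetFileAttributesA(path);

    if (attrs == INVALID_FILE_ATTRIBUTES) {
        fprintf(stderr, "cannot read attributes of %s\n", path);
        return 1;
    }

    if (!SetFileAttributesA(path, attrs | FILE_ATTRIBUTE_READONLY)) {
        fprintf(stderr, "cannot set read-only bit on %s\n", path);
        return 1;
    }

    printf("%s is now read-only (archive bit was %s)\n", path,
           (attrs & FILE_ATTRIBUTE_ARCHIVE) ? "set" : "clear");
    return 0;
}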
ATTRIB
[ "Technology" ]
548
[ "Windows commands", "Computing commands", "OS/2 commands", "ReactOS commands", "MSX-DOS commands" ]
8,591,323
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Ara
This is the list of notable stars in the constellation Ara, sorted by decreasing brightness. Notes See also List of stars by constellation References Wagman, M. (2003). Lost Stars, The McDonald & Woodward Publishing Co., Blacksburg, Virginia. List Ara
List of stars in Ara
[ "Astronomy" ]
56
[ "Lists of stars by constellation", "Constellations", "Ara (constellation)" ]
8,591,336
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Aries
This is the list of notable stars in the constellation Aries, sorted by decreasing brightness. See also List of stars by constellation References List Aries
List of stars in Aries
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Aries (constellation)", "Constellations" ]
8,591,345
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Auriga
This is the list of notable stars in the constellation Auriga, sorted by decreasing brightness. See also List of stars by constellation References List Auriga
List of stars in Auriga
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Auriga", "Constellations" ]
8,591,359
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Bo%C3%B6tes
This is the list of notable stars in the constellation Boötes, sorted by decreasing brightness. The genitive for stars in this constellation is Boötis and the IAU abbreviation is Boo. Hence, η Boo is Eta Boötis. See also List of stars by constellation References List Bootes
List of stars in Boötes
[ "Astronomy" ]
59
[ "Lists of stars by constellation", "Boötes", "Constellations" ]
8,591,363
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Caelum
This is the list of notable stars in the constellation Caelum, sorted by decreasing brightness. See also Lists of stars by constellation References List Caelum
List of stars in Caelum
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Caelum", "Constellations" ]
8,591,368
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Camelopardalis
This is the list of notable stars in the constellation Camelopardalis, sorted by decreasing brightness. See also List of stars by constellation References Sources List Camelopardalis
List of stars in Camelopardalis
[ "Astronomy" ]
36
[ "Lists of stars by constellation", "Camelopardalis", "Constellations" ]
8,591,372
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Cancer
This is the list of notable stars in the constellation Cancer. The 121 stars are sorted by decreasing brightness, beginning with Beta Cancri, the brightest star in Cancer. See also Lists of stars by constellation References Bibliography List Cancer
List of stars in Cancer
[ "Astronomy" ]
47
[ "Cancer (constellation)", "Lists of stars by constellation", "Constellations" ]
8,591,378
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Canes%20Venatici
This is the list of notable stars in the constellation Canes Venatici, sorted by decreasing brightness. See also List of stars by constellation References List Canes Venatici
List of stars in Canes Venatici
[ "Astronomy" ]
37
[ "Lists of stars by constellation", "Canes Venatici", "Constellations" ]
8,591,384
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Canis%20Major
This is the list of notable stars in the constellation Canis Major, sorted by decreasing brightness. List See also Lists of stars by constellation References List Canis Major
List of stars in Canis Major
[ "Astronomy" ]
34
[ "Lists of stars by constellation", "Canis Major", "Constellations" ]
8,591,387
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Canis%20Minor
This is the list of notable stars in the constellation Canis Minor, sorted by decreasing brightness. See also Lists of stars by constellation Notes References Wagman, M., Lost Stars: Lost, Missing, and Troublesome Stars from the Catalogues of Johannes Bayer, Nicholas-Louis de Lacaille, John Flamsteed, and Sundry Others, The McDonald & Woodward Publishing Company, Blacksburg, 2003, p. 460. Flamsteed, J., (ed.) "Stellarum Inerrantium Catalogus Britannicus", Historia Coelestis Britannica, vol.3, H. Meere, London, 1725, p. 32. List Canis Minor
List of stars in Canis Minor
[ "Astronomy" ]
149
[ "Lists of stars by constellation", "Canis Minor", "Constellations" ]
8,591,394
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Capricornus
This is the list of notable stars in the constellation Capricornus, sorted by decreasing brightness. See also List of stars by constellation References List Capricornus
List of stars in Capricornus
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Capricornus", "Constellations" ]
8,591,406
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Carina
This is the list of notable stars in the constellation Carina, sorted by decreasing brightness. This constellation's Bayer designations (Greek-letter star names) were given while it was still considered part of the constellation of Argo Navis. After Argo Navis was broken up into Carina, Vela, and Puppis, these Greek-letter designations were kept, so that Carina does not have a full complement of Greek-letter designations. For example, since Argo Navis's gamma star went to Vela, there is no Gamma Carinae. See also List of stars by constellation Notes References List Carina
List of stars in Carina
[ "Astronomy" ]
129
[ "Lists of stars by constellation", "Carina (constellation)", "Constellations" ]
8,591,413
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Cassiopeia
This is the list of notable stars in the constellation Cassiopeia, sorted by decreasing brightness. References List Cassiopeia
List of stars in Cassiopeia
[ "Astronomy" ]
28
[ "Lists of stars by constellation", "Cassiopeia (constellation)", "Constellations" ]
8,591,419
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Centaurus
This is the list of notable stars in the constellation Centaurus, sorted by decreasing brightness. See also List of stars by constellation Notes References List Centaurus
List of stars in Centaurus
[ "Astronomy" ]
32
[ "Lists of stars by constellation", "Centaurus", "Constellations" ]
8,591,424
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Cepheus
This is the list of notable stars in the constellation Cepheus, sorted by decreasing brightness. See also List of stars by constellation References List Cepheus
List of stars in Cepheus
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Constellations", "Cepheus (constellation)" ]
8,591,428
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Cetus
This is the list of notable stars in the constellation Cetus, sorted by decreasing brightness. See also List of stars by constellation References List Cetus
List of stars in Cetus
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Cetus", "Constellations" ]
8,591,430
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Chamaeleon
This is the list of notable stars in the constellation Chamaeleon, sorted by decreasing brightness. See also List of stars by constellation DI Cha, a star system (not a star) where 4 stars orbit each other References List Chamaeleon
List of stars in Chamaeleon
[ "Astronomy" ]
52
[ "Lists of stars by constellation", "Chamaeleon", "Constellations" ]
8,591,434
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Circinus
This is the list of notable stars in the constellation Circinus, sorted by decreasing brightness. See also List of stars by constellation References List Circinus
List of stars in Circinus
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Circinus", "Constellations" ]
8,591,437
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Columba
This is the list of notable stars in the constellation Columba, sorted by decreasing brightness. Notes See also List of stars by constellation References List Columba
List of stars in Columba
[ "Astronomy" ]
34
[ "Lists of stars by constellation", "Columba (constellation)", "Constellations" ]
8,591,442
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Coma%20Berenices
This is the list of notable stars in the constellation Coma Berenices, sorted by decreasing brightness. See also List of stars by constellation References List Coma Berenices
List of stars in Coma Berenices
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Coma Berenices", "Constellations" ]
8,591,447
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Corona%20Australis
This is the list of notable stars in the constellation Corona Australis, sorted by decreasing brightness. See also List of stars by constellation References List Corona Australis
List of stars in Corona Australis
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Corona Australis", "Constellations" ]
8,591,451
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Corona%20Borealis
This is the list of notable stars in the constellation Corona Borealis, sorted by decreasing brightness. See also List of stars by constellation References List Corona Borealis
List of stars in Corona Borealis
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Corona Borealis", "Constellations" ]
8,591,458
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Corvus
This is the list of notable stars in the constellation Corvus, sorted by decreasing brightness. See also List of stars by constellation References List Corvus
List of stars in Corvus
[ "Astronomy" ]
33
[ "Corvus (constellation)", "Lists of stars by constellation", "Constellations" ]
8,591,462
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Crater
This is the list of notable stars in the constellation Crater, sorted by decreasing brightness. References List Crater
List of stars in Crater
[ "Astronomy" ]
22
[ "Lists of stars by constellation", "Crater (constellation)", "Constellations" ]
8,591,468
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Crux
This is the list of notable stars in the constellation Crux, sorted by decreasing brightness. See also List of star names in Crux List of stars by constellation Bandeira do Brasil: Sobre as estrelas (Portuguese) Notes References List Crux
List of stars in Crux
[ "Astronomy" ]
53
[ "Lists of stars by constellation", "Crux", "Constellations" ]
8,591,474
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Cygnus
This is the list of notable stars in the constellation of Cygnus, sorted by decreasing brightness. See also List of stars by constellation References List Cygnus
List of stars in Cygnus
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Cygnus (constellation)", "Constellations" ]
8,591,653
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Delphinus
This is the list of notable stars in the constellation Delphinus, sorted by decreasing brightness. See also List of stars by constellation References List Delphinus
List of stars in Delphinus
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Delphinus", "Constellations" ]
8,591,658
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Dorado
This is the list of notable stars in the constellation Dorado, sorted by decreasing brightness. See also Lists of stars by constellation References List Dorado
List of stars in Dorado
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Dorado", "Constellations" ]
8,591,662
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Draco
This is the list of notable stars in the constellation Draco. See also List of stars by constellation References Bibliography List Draco
List of stars in Draco
[ "Astronomy" ]
27
[ "Lists of stars by constellation", "Constellations", "Draco (constellation)" ]
8,591,665
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Equuleus
This is the list of notable stars in the constellation Equuleus, sorted by decreasing brightness. See also List of stars by constellation References List Equuleus
List of stars in Equuleus
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Equuleus", "Constellations" ]
8,591,668
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Eridanus
This is the list of notable stars in the constellation Eridanus, sorted by decreasing brightness. See also References List Eridanus
List of stars in Eridanus
[ "Astronomy" ]
28
[ "Lists of stars by constellation", "Eridanus (constellation)", "Constellations" ]
8,591,670
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Fornax
This is the list of notable stars in the constellation Fornax, sorted by decreasing brightness. See also Lists of stars by constellation List of UDF objects (1–500) References List Fornax
List of stars in Fornax
[ "Astronomy" ]
43
[ "Lists of stars by constellation", "Fornax", "Constellations" ]
8,591,675
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Gemini
This is the list of notable stars in the constellation Gemini, sorted by decreasing brightness. See also List of stars by constellation References List Gemini
List of stars in Gemini
[ "Astronomy" ]
29
[ "Lists of stars by constellation", "Gemini (constellation)", "Constellations" ]
8,591,677
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Grus
This is the list of notable stars in the constellation Grus, sorted by decreasing brightness. See also List of stars by constellation References List Grus
List of stars in Grus
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Grus (constellation)", "Constellations" ]
8,591,680
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Hercules
This is the list of notable stars in the constellation Hercules, sorted by decreasing brightness. See also List of stars by constellation References List Hercules
List of stars in Hercules
[ "Astronomy" ]
29
[ "Lists of stars by constellation", "Hercules (constellation)", "Constellations" ]
8,591,685
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Horologium
This is the list of notable stars in the constellation Horologium, sorted by decreasing brightness. See also List of stars by constellation References List Horologium
List of stars in Horologium
[ "Astronomy" ]
35
[ "Lists of stars by constellation", "Constellations", "Horologium (constellation)" ]
8,591,690
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Hydra
This is the list of notable stars in the constellation Hydra, sorted by decreasing brightness. See also List of stars by constellation References Bibliography List Hydra
List of stars in Hydra
[ "Astronomy" ]
30
[ "Lists of stars by constellation", "Hydra (constellation)", "Constellations" ]
8,591,696
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Hydrus
This is the list of notable stars in the constellation Hydrus, sorted by decreasing brightness. See also List of stars by constellation References List Hydrus
List of stars in Hydrus
[ "Astronomy" ]
33
[ "Lists of stars by constellation", "Hydrus", "Constellations" ]
8,591,700
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Indus
This is the list of notable stars in the constellation Indus, sorted by decreasing brightness. See also List of stars by constellation References List Indus
List of stars in Indus
[ "Astronomy" ]
29
[ "Lists of stars by constellation", "Indus (constellation)", "Constellations" ]
8,591,705
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Lacerta
This is the list of notable stars in the constellation Lacerta, sorted by decreasing brightness. See also List of stars by constellation References List Lacerta
List of stars in Lacerta
[ "Astronomy" ]
31
[ "Lacerta", "Lists of stars by constellation", "Constellations" ]
8,591,710
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Leo
This is the list of notable stars in the constellation Leo, sorted by decreasing brightness. See also List of stars by constellation References List Leo
List of stars in Leo
[ "Astronomy" ]
29
[ "Lists of stars by constellation", "Leo (constellation)", "Constellations" ]
8,591,713
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Leo%20Minor
This is the list of notable stars in the constellation Leo Minor, sorted by decreasing brightness. See also List of stars by constellation References List Leo Minor
List of stars in Leo Minor
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Leo Minor", "Constellations" ]
8,591,718
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Lepus
This is the list of notable stars in the constellation Lepus, sorted by decreasing brightness. References List Lepus
List of stars in Lepus
[ "Astronomy" ]
24
[ "Lists of stars by constellation", "Lepus (constellation)", "Constellations" ]
8,591,723
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Libra
This is the list of notable stars in the constellation Libra, sorted by decreasing brightness. See also List of stars by constellation References List Libra
List of stars in Libra
[ "Astronomy" ]
34
[ "Lists of stars by constellation", "Libra (constellation)", "Constellations" ]
8,591,733
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Lupus
This is the list of notable stars in the constellation Lupus, sorted by decreasing brightness. See also List of stars by constellation References List Lupus
List of stars in Lupus
[ "Astronomy" ]
31
[ "Lists of stars by constellation", "Constellations", "Lupus (constellation)" ]